| Field | Type | Range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | n/a |
| tags | listlengths | 1–1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25–25 |
| arxiv | listlengths | 0–201 |
| languages | listlengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | listlengths | 0–722 |
| processed_texts | listlengths | 1–723 |
null
null
# Llama 2 Chat 7B for RK3588

This is a conversion of https://huggingface.co/meta-llama/Llama-2-7b-chat-hf to the RKLLM format for Rockchip devices. It runs on the RK3588's NPU.

# Main repo

See my full collection of LLMs converted for the RK3588's NPU: https://huggingface.co/Pelochus/ezrkllm-collection

# License

Same as the original LLM: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/blob/main/LICENSE.txt
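The converted weights ship as a single `.rkllm` file rather than standard safetensors, so they are fetched directly from the Hub before being handed to the rkllm runtime on the board. A minimal download sketch using `huggingface_hub` follows; the exact filename inside the repo is an assumption and should be checked against the repository's file listing.

```python
# Download the RKLLM-format weights for use with the rkllm runtime on an RK3588 board.
# The filename below is hypothetical -- verify the real name on the repo's "Files" tab.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Pelochus/llama2-chat-7b-hf-rk3588",
    filename="llama2-chat-7b-hf.rkllm",  # assumed filename
)
print(f"RKLLM model downloaded to: {model_path}")
```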
{"tags": ["llama2", "llama2-7b", "rkllm", "rockchip", "rk3588"]}
Pelochus/llama2-chat-7b-hf-rk3588
null
[ "llama2", "llama2-7b", "rkllm", "rockchip", "rk3588", "region:us" ]
null
2024-04-13T15:28:36+00:00
[]
[]
TAGS #llama2 #llama2-7b #rkllm #rockchip #rk3588 #region-us
# Llama 2 Chat 7B for RK3588 This is a conversion from URL to the RKLLM format for Rockchip devices. This runs on the NPU from the RK3588. # Main repo See this for my full collection of converted LLMs for the RK3588's NPU: URL # License Same as the original LLM: URL
[ "# Llama 2 Chat 7B for RK3588\nThis is a conversion from URL to the RKLLM format for Rockchip devices. \nThis runs on the NPU from the RK3588.", "# Main repo\nSee this for my full collection of converted LLMs for the RK3588's NPU:\n\nURL", "# License\nSame as the original LLM:\n\nURL" ]
[ "TAGS\n#llama2 #llama2-7b #rkllm #rockchip #rk3588 #region-us \n", "# Llama 2 Chat 7B for RK3588\nThis is a conversion from URL to the RKLLM format for Rockchip devices. \nThis runs on the NPU from the RK3588.", "# Main repo\nSee this for my full collection of converted LLMs for the RK3588's NPU:\n\nURL", "# License\nSame as the original LLM:\n\nURL" ]
text-generation
transformers
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [arcee-ai/sec-mistral-7b-instruct-1.6-epoch](https://huggingface.co/arcee-ai/sec-mistral-7b-instruct-1.6-epoch)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: arcee-ai/sec-mistral-7b-instruct-1.6-epoch
        layer_range: [0, 32]
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [0, 32]
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
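SLERP (spherical linear interpolation) blends two weight tensors along the great circle between their directions rather than along a straight line, which preserves the norm of the interpolated weights better than plain averaging. The NumPy sketch below illustrates the core formula only; it is not mergekit's actual implementation.

```python
# Spherical linear interpolation between two flattened weight tensors.
# t=0 returns v0, t=1 returns v1; intermediate t follows the arc between them.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two weight directions
    if theta < eps:                 # nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# The config's per-filter t lists (e.g. [0, 0.5, 0.3, 0.7, 1]) are spread across
# layer depth, so early layers favor one parent and later layers the other.
```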
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]}
mergekit-community/dolphin-mistral-instruct-7b
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T15:29:16+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-arcee-ai/sec-mistral-7b-instruct-1.6-epoch #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * arcee-ai/sec-mistral-7b-instruct-1.6-epoch * cognitivecomputations/dolphin-2.8-mistral-7b-v02 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* arcee-ai/sec-mistral-7b-instruct-1.6-epoch\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-arcee-ai/sec-mistral-7b-instruct-1.6-epoch #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* arcee-ai/sec-mistral-7b-instruct-1.6-epoch\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-human-parser This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1695 - Mean Iou: 0.6248 - Mean Accuracy: 0.7318 - Overall Accuracy: 0.9504 - Per Category Iou: [0.9788481644896356, 0.5892312801423537, 0.7530359111050211, 0.40150078831301017, 0.759007420581381, 0.5918265856291685, 0.7295908937991514, 0.5957128350073513, 0.29234848352263776, 0.50965626208837, 0.5006675514608004, 0.7987689193921012, 0.7493932173414964, 0.7541494190147875, 0.723483844844196, 0.7233847443350387, 0.6382073069749079, 0.15805024780469318] - Per Category Accuracy: [0.9894757584408714, 0.7237625570776256, 0.8611644158457461, 0.437957157784744, 0.8837220958158657, 0.6895901282621771, 0.8687869923662133, 0.7556548829789819, 0.38556260819388344, 0.6658026398491514, 0.6585143772877097, 0.9066994160521716, 0.8668649943781122, 0.8502555890338424, 0.8532504098367223, 0.8396507214826516, 0.7513961299212337, 0.18413990548791004] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | No log | 1.0 | 300 | 0.5492 | 0.2735 | 0.3609 | 0.8877 | [0.9549821998088528, 0.0, 0.6118139617850337, 0.0, 0.5216242508548116, 0.2777339456209645, 0.5122677803943367, 0.18436029966897033, 0.0, 0.010782200543460695, 0.0, 0.6655903673410478, 0.41018015604066516, 0.3081475335034851, 0.12417407878017789, 0.27884415987406247, 0.06250047083056665, 0.0] | [0.9832989935765462, 0.0, 0.8147291863838911, 0.0, 0.887714080264451, 0.4180678278558261, 0.7615240773456551, 0.21988538003594602, 0.0, 0.010903834066624764, 0.0, 0.8059236605430289, 0.7257109903446306, 0.38293121873469227, 0.12528263910811838, 0.2957484860744537, 0.06412150360266661, 0.0] | | 0.9562 | 2.0 | 600 | 0.2966 | 0.4208 | 0.5251 | 0.9210 | [0.9688580375120208, 0.0012359208523592085, 0.6643013921043928, 0.0, 0.6609494007786153, 0.36114643842189653, 0.6204816778313804, 0.3903737425899652, 0.0, 0.329012741065434, 
0.059303439224489586, 0.7059378067944898, 0.5766512154345281, 0.5851999343449144, 0.5939953707591535, 0.592555256128876, 0.46418725792175913, 0.0] | [0.98745563536642, 0.0012359208523592085, 0.824324038618192, 0.0, 0.8425629808037797, 0.45267673334865505, 0.7586258718694472, 0.6064525033250334, 0.0, 0.5359371464487743, 0.06089112388167257, 0.8673788414816078, 0.7444751566097338, 0.7299181193277617, 0.7254326321372885, 0.7190841312347083, 0.5958355691285808, 0.0] | | 0.9562 | 3.0 | 900 | 0.2513 | 0.4607 | 0.5580 | 0.9263 | [0.9725091160684688, 0.14855734979814772, 0.677348095665415, 0.0, 0.646576412225751, 0.3974497016759501, 0.5984044043047161, 0.3987109042877217, 0.0, 0.373744443990598, 0.3376357853041791, 0.7243261927420948, 0.6405549659551344, 0.6309876029203761, 0.608383061921565, 0.618438110945233, 0.5196700886261589, 0.0] | [0.9852854166837305, 0.15234703196347033, 0.8082490855975236, 0.0, 0.9235572625372296, 0.45350434956944397, 0.894635411892739, 0.47534978111381, 0.0, 0.5060993086109365, 0.4390685525935831, 0.8596916158731512, 0.7836655155181953, 0.7033966705361421, 0.7075760902137561, 0.717594287218808, 0.6345816247441256, 0.0] | | 0.2758 | 4.0 | 1200 | 0.2141 | 0.5050 | 0.6025 | 0.9358 | [0.9738308066334914, 0.39074635419023057, 0.7079980374728607, 0.0, 0.6900770649513168, 0.5073099614956236, 0.6570693425781712, 0.5139368408162509, 0.0, 0.38295206484223304, 0.380551019433701, 0.7462835279947642, 0.6562641635398792, 0.6512530926527663, 0.6473580442703908, 0.6381828941414001, 0.5465645036196742, 0.0] | [0.9887685525392029, 0.4732420091324201, 0.8336129500035444, 0.0, 0.8687858535953492, 0.6127752404610202, 0.8742595710804784, 0.6595062357623179, 0.0, 0.49486360779384037, 0.5183446022409474, 0.8675685179659861, 0.7478070175438597, 0.7293468089690367, 0.7692559159910838, 0.7358143731417086, 0.6707010349799827, 0.0] | | 0.1986 | 5.0 | 1500 | 0.1979 | 0.5193 | 0.6180 | 0.9397 | [0.9735201277974378, 0.4271586708192732, 0.7088750538777107, 0.0, 0.7158768245426267, 0.5574907653230701, 0.6885542934480133, 0.5408903936431009, 0.0, 0.40501379416047206, 0.3769302096342365, 0.755407287525701, 0.6670435298728543, 0.6705490185782534, 0.6558029975704469, 0.6593847752496036, 0.5452203944873667, 0.0] | [0.990629057174256, 0.5001826484018265, 0.8611028545773601, 0.0, 0.8504929390187358, 0.6821886475993018, 0.8266017947564303, 0.68236251316186, 0.0, 0.5314921433060968, 0.4695401297116738, 0.8883199778355568, 0.8562057164783781, 0.7939059513818101, 0.7807633940089322, 0.7905452700663366, 0.6191165462097474, 0.0] | | 0.1986 | 6.0 | 1800 | 0.1880 | 0.5385 | 0.6506 | 0.9427 | [0.9760795396969519, 0.49546113392674634, 0.7220136128809921, 0.0, 0.7309186937055082, 0.5440699209336682, 0.6956642876121553, 0.5778957468660312, 0.0, 0.45966320586196857, 0.42269142660206377, 0.7627396838617393, 0.683665616622042, 0.6876194866777059, 0.6761946354847002, 0.6771030717469252, 0.5804764385040639, 0.0] | [0.9879277145076928, 0.6493089802130898, 0.8507195206429236, 0.0, 0.8673710483519189, 0.6383928353129293, 0.827153147058454, 0.7699004480072285, 0.0, 0.6786524198617222, 0.5424190472721786, 0.8977739653041217, 0.8809442540736379, 0.7833886957724105, 0.8262324347105663, 0.8358410233687171, 0.6755292486182233, 0.0] | | 0.1604 | 7.0 | 2100 | 0.1801 | 0.5512 | 0.6583 | 0.9451 | [0.9768452011492089, 0.5139995533826512, 0.7321181285491519, 0.0, 0.7394251546579675, 0.5626216812369556, 0.6996068780072735, 0.5955600641301114, 0.013181504485852312, 0.4662455998760579, 0.45393305665267836, 0.7712977594476239, 0.7091649640409238, 
0.7115348901036068, 0.6884791107150497, 0.6904370923226971, 0.596390281908532, 0.0] | [0.9877702999704974, 0.6586423135464231, 0.8554081263361594, 0.0, 0.8912112719378347, 0.6376617946924984, 0.8599751263341213, 0.7450261479700625, 0.013225620311598385, 0.6506901319924576, 0.6123374376611315, 0.8868025659605302, 0.8339579875426103, 0.8336640073402126, 0.8096899342819004, 0.8012390750123618, 0.7724611854904065, 0.0] | | 0.1604 | 8.0 | 2400 | 0.1778 | 0.5539 | 0.6585 | 0.9448 | [0.9770667220917759, 0.5170361657163364, 0.7280902499823723, 8.707767328456984e-05, 0.7367522095068177, 0.5689756877304252, 0.7008091784312203, 0.5725296206849773, 0.05353968775119725, 0.4611822190210144, 0.4653302901758269, 0.775234121421819, 0.7100158455815765, 0.706912520826242, 0.6926195102723859, 0.6954450806871421, 0.6085951266279969, 0.0] | [0.9876449078033697, 0.6315616438356164, 0.8732515667031888, 8.707767328456984e-05, 0.8794785955208667, 0.6744662669985282, 0.8310730990791718, 0.761356934135649, 0.05457587997691864, 0.5956178504085481, 0.63011360827907, 0.8869666680874643, 0.870182354410951, 0.8459069846710893, 0.8025300106245062, 0.8150169212887151, 0.7134814589088436, 0.0] | | 0.1409 | 9.0 | 2700 | 0.1787 | 0.5645 | 0.6692 | 0.9457 | [0.9774355149875643, 0.5510517221518259, 0.7312727926222036, 0.022312196608546116, 0.7350711309053871, 0.5614035464757676, 0.7147139035430374, 0.5703592039897254, 0.120412942591956, 0.48404448269485184, 0.46641653400561417, 0.7787625198600512, 0.7244436472900285, 0.7267460530216988, 0.700144511645333, 0.695542108555244, 0.6005887704113232, 0.0] | [0.9884721112458781, 0.7281095890410959, 0.8299478781260998, 0.022313653779171022, 0.9151881242410469, 0.6386825621140032, 0.856831255022075, 0.7009491012717575, 0.1293594922100404, 0.6843871778755499, 0.6151054525210008, 0.8915828396061549, 0.8380003926397886, 0.822356778495578, 0.8175389284792854, 0.7944223964654735, 0.7719378437880223, 0.0] | | 0.125 | 10.0 | 3000 | 0.1732 | 0.5754 | 0.6771 | 0.9472 | [0.9773518514328442, 0.5454509350317965, 0.7400147481796903, 0.0804068150208623, 0.7504513423907763, 0.5667231094077799, 0.707444281418412, 0.5841859784154606, 0.16627132750728257, 0.4526267217406859, 0.4755204054945095, 0.7842849242463608, 0.7288534900652005, 0.7226213774065801, 0.70712983974001, 0.7035594729392917, 0.6299971188122727, 0.03478374063949465] | [0.9896322026032164, 0.6689436834094369, 0.8524227157349359, 0.0805468477882271, 0.8944258229648844, 0.662617418815372, 0.8473261274456662, 0.7218953905447576, 0.18444316214656664, 0.5598164676304211, 0.6716598463543632, 0.900672392481139, 0.856513581767236, 0.8560844553204215, 0.8319517201924911, 0.834762170805479, 0.7387464972541122, 0.035024662826394395] | | 0.125 | 11.0 | 3300 | 0.1739 | 0.5890 | 0.6979 | 0.9470 | [0.9777330455634119, 0.5675078907706614, 0.7372831569029541, 0.16542927439892952, 0.7427074791988457, 0.5782845962135678, 0.7214436362376275, 0.5722878749931231, 0.2058636202800384, 0.49905860495575977, 0.4704070648273662, 0.7872068799542741, 0.7316628962354176, 0.734756476482697, 0.7077843148100431, 0.7094031848591091, 0.6194635152180895, 0.0729665150943521] | [0.988806578075719, 0.754234398782344, 0.8685698011508848, 0.16686259143155696, 0.8557120644759385, 0.6962348963095896, 0.8524367212730329, 0.7689250359382745, 0.23500288517022505, 0.7050584538026399, 0.6138931452609452, 0.8923042496057286, 0.8697964073459336, 0.811100999739534, 0.8407349658429336, 0.8312735597639368, 0.7347066084587035, 0.07605118829981719] | | 0.1148 | 12.0 | 3600 | 0.1700 | 0.5980 | 
0.7001 | 0.9481 | [0.9778018061430781, 0.5708823248255425, 0.7425154552038586, 0.2236867444099711, 0.7444078345337019, 0.5950717101614158, 0.7234283678662421, 0.5765847328098312, 0.23484204042122556, 0.4801390680776628, 0.4897876605538644, 0.7882241215574549, 0.740445370114888, 0.7458225407883768, 0.7150568150507534, 0.7118838076150624, 0.6184038808400304, 0.08554807503758416] | [0.9897158357577079, 0.7271841704718417, 0.843436636043563, 0.22757749912922326, 0.8871126392777582, 0.7430056038297727, 0.8390423499053861, 0.7170090257923349, 0.27718407386035776, 0.6100439974858579, 0.6864832943539261, 0.9074421380162824, 0.8351492923560172, 0.8592400607539796, 0.8317466015201217, 0.8177365287918777, 0.712046133564752, 0.09029009002793971] | | 0.1148 | 13.0 | 3900 | 0.1710 | 0.6074 | 0.7144 | 0.9484 | [0.9783396931951305, 0.566654824138456, 0.7452299560328497, 0.2804236159553554, 0.7441416393473166, 0.5890870074936531, 0.7258735850051261, 0.5828579852893129, 0.2683477880620946, 0.48800175739135293, 0.49247722180877007, 0.7880866756343737, 0.7400328630323235, 0.744623035421065, 0.7099267029789523, 0.7139782725657176, 0.6311719921886055, 0.1442810807060551] | [0.9886643605955049, 0.6797442922374429, 0.8574166404461264, 0.28879310344827586, 0.8775113250788533, 0.6809209400351095, 0.8584741452992447, 0.7561495673191986, 0.3399422965954991, 0.6367039597737272, 0.6593468629598508, 0.9177240953071054, 0.8675956167121772, 0.8506221709150505, 0.8458997899136098, 0.8436691261936412, 0.7386802514690003, 0.17215687627194648] | | 0.1079 | 14.0 | 4200 | 0.1740 | 0.6086 | 0.7102 | 0.9487 | [0.977942105430483, 0.5768343105192344, 0.7490884365579236, 0.33361640430820216, 0.7508990053557766, 0.5692915941990903, 0.7201425857375091, 0.5911614079391055, 0.24133583561907512, 0.4868673699182174, 0.48424487890658796, 0.7959107866121614, 0.7422369582329509, 0.7456813503731655, 0.7169848463044369, 0.7136604053934179, 0.6272778540958716, 0.13243990384615384] | [0.9900950549598935, 0.7151902587519026, 0.8672900729656489, 0.3506400208986416, 0.8714781487884443, 0.6356343183229586, 0.8553712182931714, 0.799373193071111, 0.2893941142527409, 0.6196404776869893, 0.6261228800882435, 0.9018775840757001, 0.8677339330013742, 0.8500079854825, 0.8463084247687207, 0.8266161917788866, 0.7184256026710301, 0.15203338967265703] | | 0.1021 | 15.0 | 4500 | 0.1712 | 0.6187 | 0.7237 | 0.9493 | [0.9782585994033568, 0.5740924715372847, 0.7491715773047957, 0.367804599342951, 0.748062714983797, 0.5987804324794581, 0.7265324207967331, 0.5926331677105839, 0.2712994816751525, 0.5070523117524783, 0.49529704138289005, 0.7990307004764193, 0.7460380903478778, 0.7483911832010839, 0.7192368133134107, 0.7175594657148434, 0.6334052080040388, 0.164652203479062] | [0.9893027356709104, 0.7036468797564688, 0.8529344826428314, 0.3923937652385928, 0.8803567268244968, 0.6857381065314445, 0.8468213027640663, 0.7854713742144883, 0.3412925562608194, 0.6801231929604022, 0.6553639393222006, 0.9000223775627637, 0.8557149167425175, 0.8472746995261232, 0.8353858554650505, 0.8262340981627397, 0.7411015349148411, 0.207126349556759] | | 0.1021 | 16.0 | 4800 | 0.1706 | 0.6186 | 0.7235 | 0.9496 | [0.9785733461638714, 0.5773717122071097, 0.750095151508004, 0.3705804345608487, 0.7543896875914221, 0.5896111900384422, 0.7211858115423779, 0.598724412527825, 0.2816084038088942, 0.4991221198247389, 0.5039075245094231, 0.7986545855749791, 0.7465591063871141, 0.7489826949950806, 0.7194423946958957, 0.7197747616797256, 0.6252712948669017, 0.15062471757131873] | [0.9896713958789595, 
0.7223561643835616, 0.8701044800435779, 0.3954197143852316, 0.8787325157580742, 0.6899935453269634, 0.8538148693899903, 0.7724648461386967, 0.35392960184650896, 0.6375235700817096, 0.6750001951138294, 0.8970855888495801, 0.8740418250611269, 0.8347348123090049, 0.8343137899039949, 0.8387099363605423, 0.7271435479917591, 0.17706184678003517] | | 0.0975 | 17.0 | 5100 | 0.1723 | 0.6222 | 0.7301 | 0.9499 | [0.978806920523348, 0.5866061881769667, 0.7537012389986214, 0.3952780503434327, 0.7541491676661445, 0.5841659503641633, 0.7225422833286509, 0.6005818579770271, 0.28203029430812404, 0.5106977620730271, 0.4931532948501751, 0.7991115256696534, 0.7453387360353214, 0.7515511097787273, 0.7226869236310253, 0.7204798555392766, 0.6375654138472647, 0.1602542675687825] | [0.9891667516155837, 0.7448523592085236, 0.8655551644929528, 0.429706548241031, 0.883565785239796, 0.6592433851826135, 0.8723300706614971, 0.768198775229806, 0.36185804962492785, 0.6813048397234444, 0.6329752777770551, 0.903790332892886, 0.8699592636219236, 0.8491386933606876, 0.8456642239383105, 0.8352807264273926, 0.7565147209175482, 0.1918319478458832] | | 0.0975 | 18.0 | 5400 | 0.1718 | 0.6243 | 0.7300 | 0.9503 | [0.978730780232829, 0.5890366204991634, 0.7527785649582212, 0.40151892678453965, 0.755079595556939, 0.5969949096729052, 0.73138008588973, 0.5988889144014531, 0.2887897988138566, 0.5094783824684405, 0.502135225079791, 0.7993543401282869, 0.748229647819225, 0.7549749431572005, 0.72446743106999, 0.7234529961894286, 0.6381000698161508, 0.14407203903521734] | [0.9894557917447941, 0.7244809741248097, 0.8572947615713421, 0.43850139324277254, 0.8871163010189252, 0.6821654205561777, 0.863791787037498, 0.7684944486285562, 0.378199653779573, 0.6686687617850409, 0.6708299621999474, 0.9071203273517753, 0.8656826131962664, 0.8442219942697464, 0.8422076538109607, 0.8299667353793002, 0.7568349088789226, 0.1647890724707668] | | 0.0953 | 19.0 | 5700 | 0.1715 | 0.6239 | 0.7328 | 0.9501 | [0.9789040595256554, 0.5894054746593737, 0.753103238666386, 0.40122089891675145, 0.7573341787966735, 0.5819368258421082, 0.7239200863306356, 0.5998856882408785, 0.28982979543900855, 0.5125684778408359, 0.5018867850090838, 0.7986256654354295, 0.7481727644728351, 0.7535231345325545, 0.7220330250580966, 0.723473256480269, 0.6384977675166612, 0.15656856859236892] | [0.9890105377116778, 0.7282252663622527, 0.8720962047167125, 0.4378265412748171, 0.8839733828034565, 0.6621351520515592, 0.8736337745857762, 0.763339806788806, 0.3816387766878246, 0.6802991829038341, 0.6619041548839593, 0.9075273858744299, 0.8716681390658743, 0.8442187786392095, 0.8578800023716846, 0.8452263984947438, 0.7487032387564341, 0.18171156565830776] | | 0.0924 | 20.0 | 6000 | 0.1695 | 0.6248 | 0.7318 | 0.9504 | [0.9788481644896356, 0.5892312801423537, 0.7530359111050211, 0.40150078831301017, 0.759007420581381, 0.5918265856291685, 0.7295908937991514, 0.5957128350073513, 0.29234848352263776, 0.50965626208837, 0.5006675514608004, 0.7987689193921012, 0.7493932173414964, 0.7541494190147875, 0.723483844844196, 0.7233847443350387, 0.6382073069749079, 0.15805024780469318] | [0.9894757584408714, 0.7237625570776256, 0.8611644158457461, 0.437957157784744, 0.8837220958158657, 0.6895901282621771, 0.8687869923662133, 0.7556548829789819, 0.38556260819388344, 0.6658026398491514, 0.6585143772877097, 0.9066994160521716, 0.8668649943781122, 0.8502555890338424, 0.8532504098367223, 0.8396507214826516, 0.7513961299212337, 0.18413990548791004] | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - 
Datasets 2.18.0 - Tokenizers 0.15.2
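Since this is a standard SegFormer checkpoint, it can presumably be loaded for inference with the usual `transformers` Segformer classes; a minimal sketch follows. The repo id is taken from this record's metadata, and the 18-class human-parsing label set is inferred from the per-category metric arrays above.

```python
# Run human-parsing segmentation with the fine-tuned SegFormer checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "photel/segformer-human-parser"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("person.jpg")  # any RGB photo of a person
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample the low-resolution logits back to the input size, then take the argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
seg_map = upsampled.argmax(dim=1)[0]  # (H, W) tensor of per-pixel class ids
print(seg_map.shape, seg_map.unique())
```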
{"license": "other", "tags": ["generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "segformer-human-parser", "results": []}]}
photel/segformer-human-parser
null
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "base_model:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-13T15:33:45+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us
segformer-human-parser ====================== This model is a fine-tuned version of nvidia/mit-b0 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1695 * Mean Iou: 0.6248 * Mean Accuracy: 0.7318 * Overall Accuracy: 0.9504 * Per Category Iou: [0.9788481644896356, 0.5892312801423537, 0.7530359111050211, 0.40150078831301017, 0.759007420581381, 0.5918265856291685, 0.7295908937991514, 0.5957128350073513, 0.29234848352263776, 0.50965626208837, 0.5006675514608004, 0.7987689193921012, 0.7493932173414964, 0.7541494190147875, 0.723483844844196, 0.7233847443350387, 0.6382073069749079, 0.15805024780469318] * Per Category Accuracy: [0.9894757584408714, 0.7237625570776256, 0.8611644158457461, 0.437957157784744, 0.8837220958158657, 0.6895901282621771, 0.8687869923662133, 0.7556548829789819, 0.38556260819388344, 0.6658026398491514, 0.6585143772877097, 0.9066994160521716, 0.8668649943781122, 0.8502555890338424, 0.8532504098367223, 0.8396507214826516, 0.7513961299212337, 0.18413990548791004] Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 6e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
  - model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
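The merged checkpoint is a regular Mistral-architecture causal LM, so it should load through the standard `transformers` text-generation path; a minimal sketch follows, with the repo id taken from this record's metadata.

```python
# Generate text with the SLERP-merged Hermes/WizardMath checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/mergekit-slerp-gmjabaw"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```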
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
mergekit-community/mergekit-slerp-gmjabaw
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T15:35:33+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * NousResearch/Hermes-2-Pro-Mistral-7B * WizardLM/WizardMath-7B-V1.1 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Irisissocute/fine_tuned_biogpt_ncbi
null
[ "transformers", "safetensors", "gpt2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T15:37:33+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abhayesian/BobzillaV15
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T15:41:55+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: 0.4469132423400879

f1_macro: 0.8236777117298302

f1_micro: 0.829560585885486

f1_weighted: 0.8289271724966029

precision_macro: 0.8243514221166717

precision_micro: 0.829560585885486

precision_weighted: 0.8313607282611274

recall_macro: 0.8260057947019868

recall_micro: 0.829560585885486

recall_weighted: 0.829560585885486

accuracy: 0.829560585885486
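The tags mark this as a ViT image classifier, so it should be usable through the standard `transformers` pipeline; a minimal sketch, with the repo id taken from this record:

```python
# Classify an image with the AutoTrain-trained ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="howdyaendra/xblock-base-patch1-224")
predictions = classifier("example.png")  # local path or URL to any image
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```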
{"tags": ["autotrain", "image-classification"], "datasets": ["xblock-base-patch1-224/autotrain-data"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
howdyaendra/xblock-base-patch1-224
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "autotrain", "dataset:xblock-base-patch1-224/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T15:42:05+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #autotrain #dataset-xblock-base-patch1-224/autotrain-data #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.4469132423400879 f1_macro: 0.8236777117298302 f1_micro: 0.829560585885486 f1_weighted: 0.8289271724966029 precision_macro: 0.8243514221166717 precision_micro: 0.829560585885486 precision_weighted: 0.8313607282611274 recall_macro: 0.8260057947019868 recall_micro: 0.829560585885486 recall_weighted: 0.829560585885486 accuracy: 0.829560585885486
[ "# Model Trained Using AutoTrain\n\n- Problem type: Image Classification", "## Validation Metrics\nloss: 0.4469132423400879\n\nf1_macro: 0.8236777117298302\n\nf1_micro: 0.829560585885486\n\nf1_weighted: 0.8289271724966029\n\nprecision_macro: 0.8243514221166717\n\nprecision_micro: 0.829560585885486\n\nprecision_weighted: 0.8313607282611274\n\nrecall_macro: 0.8260057947019868\n\nrecall_micro: 0.829560585885486\n\nrecall_weighted: 0.829560585885486\n\naccuracy: 0.829560585885486" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #autotrain #dataset-xblock-base-patch1-224/autotrain-data #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\n- Problem type: Image Classification", "## Validation Metrics\nloss: 0.4469132423400879\n\nf1_macro: 0.8236777117298302\n\nf1_micro: 0.829560585885486\n\nf1_weighted: 0.8289271724966029\n\nprecision_macro: 0.8243514221166717\n\nprecision_micro: 0.829560585885486\n\nprecision_weighted: 0.8313607282611274\n\nrecall_macro: 0.8260057947019868\n\nrecall_micro: 0.829560585885486\n\nrecall_weighted: 0.829560585885486\n\naccuracy: 0.829560585885486" ]
null
peft
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
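That config corresponds to 4-bit NF4 loading in `transformers`; a sketch of recreating it and attaching the LoRA adapter follows. The Llama-2-7b base model is an assumption inferred from the adapter's repo name, not stated on the card.

```python
# Recreate the 4-bit NF4 quantization config and attach the PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "NBA55/Final_llama2-7B-learning_rate_schedular_polynomial")
```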
{"library_name": "peft"}
NBA55/Final_llama2-7B-learning_rate_schedular_polynomial
null
[ "peft", "region:us" ]
null
2024-04-13T15:42:11+00:00
[]
[]
TAGS #peft #region-us
## Training procedure The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.4.0" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="ProrabVasili/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
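Continuing from the card's snippet, a short sketch of rolling out the greedy policy follows. It assumes the pickled dict stores the table under a "qtable" key (the convention in the Hugging Face Deep RL course this card format comes from) and uses the Gymnasium API.

```python
# Roll out the greedy policy from the downloaded Q-table for one episode.
# Assumes model["qtable"] holds the (n_states, n_actions) array.
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], render_mode="ansi")
qtable = np.array(model["qtable"])

state, info = env.reset()
total_reward, done = 0, False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for this state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode reward:", total_reward)
```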
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
ProrabVasili/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-13T15:42:49+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
text-to-image
diffusers
# LimitlessVision API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/11807205221713023218.png) ## Get API Key Get your API key from [ModelsLab API](http://modelslab.com); no payment is needed. Replace the key in the code below and change **model_id** to "limitlessvision". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs) Try the model for free: [Generate Images](https://modelslab.com/models/limitlessvision) Model link: [View model](https://modelslab.com/models/limitlessvision) View all models: [View Models](https://modelslab.com/models) import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "limitlessvision", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
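A hedged follow-up, not part of the original snippet: once the request succeeds, the generated image can usually be retrieved from the response body. The exact schema is an assumption — ModelsLab responses typically carry a `status` field and an `output` list of image URLs — so check the API docs if yours differs.

```python
# Hedged sketch: saving the first generated image from the API response.
data = response.json()
if data.get("status") == "success" and data.get("output"):
    image = requests.get(data["output"][0])          # first returned image URL
    with open("generation.png", "wb") as f:
        f.write(image.content)
else:
    print("Job still processing or failed:", data)   # e.g. status == "processing"
```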
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
stablediffusionapi/limitlessvision
null
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-13T15:48:07+00:00
[]
[]
TAGS #diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# LimitlessVision API Inference !generated from URL ## Get API Key Get API key from ModelsLab API, No Payment needed. Replace Key in below code, change model_id to "limitlessvision" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs Try model for free: Generate Images Model link: View model View all models: View Models import requests import json url = "URL payload = URL({ "key": "your_api_key", "model_id": "limitlessvision", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(URL) > Use this coupon code to get 25% off DMGG0RBN
[ "# LimitlessVision API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"limitlessvision\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"limitlessvision\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
[ "TAGS\n#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# LimitlessVision API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"limitlessvision\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"limitlessvision\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
text-generation
transformers
# Dolphin Mistral Instruct This is a custom language model created using the "SLERP" merge method. ### Models Used The following models were used to create this language model: - [arcee-ai/sec-mistral-7b-instruct-1.6-epoch](https://huggingface.co/arcee-ai/sec-mistral-7b-instruct-1.6-epoch) - [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) ### Configuration The following configuration was used to produce this model: ```yaml base_model: - arcee-ai/sec-mistral-7b-instruct-1.6-epoch - cognitivecomputations/dolphin-2.8-mistral-7b-v02 library_name: transformers dtype: bfloat16 ``` ## Usage This model uses SafeTensors files and can be loaded and used with the Transformers library. Here's an example of how to load and generate text with the model using Transformers and Python: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "path/to/model" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") input_text = "Write a short story about" input_ids = tokenizer.encode(input_text, return_tensors="pt").to(model.device) output_ids = model.generate( input_ids, max_length=200, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=1, ) output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True) print(output_text) ``` Make sure to replace "path/to/model" with the actual path to your model's directory.
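For readers unfamiliar with the merge method named above, here is an illustrative sketch of spherical linear interpolation (SLERP) between two weight tensors. This is a toy example for intuition only, not mergekit's actual implementation, which applies a per-tensor variant with its own filtering rules.

```python
# Illustrative SLERP between two weight tensors (toy sketch, not mergekit's code).
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a, b = w0.flatten().float(), w1.flatten().float()
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)     # angle between weight vectors
    omega = torch.acos(cos_omega.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:                                     # nearly parallel: fall back to lerp
        merged = (1 - t) * a + t * b
    else:
        merged = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / sin_omega
    return merged.reshape(w0.shape).to(w0.dtype)

# e.g. merged = slerp(0.5, tensor_from_model_a, tensor_from_model_b)
```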
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["code", "instruct", "llm", "7b", "dolphin"], "datasets": ["cognitivecomputations/dolphin"], "base_model": ["arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]}
grandell1234/dolphin-mistral-instruct-7b
null
[ "transformers", "safetensors", "mistral", "text-generation", "code", "instruct", "llm", "7b", "dolphin", "conversational", "en", "dataset:cognitivecomputations/dolphin", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T15:52:13+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #code #instruct #llm #7b #dolphin #conversational #en #dataset-cognitivecomputations/dolphin #base_model-arcee-ai/sec-mistral-7b-instruct-1.6-epoch #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Dolphin Mistral Instruct This is a custom language model created using the "SLERP" merge method. ### Models Used The following models were used to create this language model: - arcee-ai/sec-mistral-7b-instruct-1.6-epoch - cognitivecomputations/dolphin-2.8-mistral-7b-v02 ### Configuration The following configuration was used to produce this model: ## Usage This model uses SafeTensors files and can be loaded and used with the Transformers library. Here's an example of how to load and generate text with the model using Transformers and Python: Make sure to replace "path/to/model" with the actual path to your model's directory.
[ "# Dolphin Mistral Instruct\n\n This is a custom language model created using the \"SLERP\" method\n\n ### Models based on\n\n The following models were used to create this language model:\n\n - arcee-ai/sec-mistral-7b-instruct-1.6-epoch\n - cognitivecomputations/dolphin-2.8-mistral-7b-v02\n\n ### Configuration\n\n The following configuration was used to produce this model:", "## Usage\nThis model uses SafeTensors files and can be loaded and used with the Transformers library. Here's an example of how to load and generate text with the model using Transformers and Python:\n\nMake sure to replace \"path/to/model\" with the actual path to your model's directory." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #code #instruct #llm #7b #dolphin #conversational #en #dataset-cognitivecomputations/dolphin #base_model-arcee-ai/sec-mistral-7b-instruct-1.6-epoch #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Dolphin Mistral Instruct\n\n This is a custom language model created using the \"SLERP\" method\n\n ### Models based on\n\n The following models were used to create this language model:\n\n - arcee-ai/sec-mistral-7b-instruct-1.6-epoch\n - cognitivecomputations/dolphin-2.8-mistral-7b-v02\n\n ### Configuration\n\n The following configuration was used to produce this model:", "## Usage\nThis model uses SafeTensors files and can be loaded and used with the Transformers library. Here's an example of how to load and generate text with the model using Transformers and Python:\n\nMake sure to replace \"path/to/model\" with the actual path to your model's directory." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
domenicrosati/beavertails_attack_meta-llama_Llama-2-7b-chat-hf_8e-5_10k
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T15:53:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
unrented5443/1v2uk6m
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T15:54:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Gemma-1000-4bit-qlora This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.057 | 1.0 | 113 | 0.0335 | | 0.02 | 2.0 | 226 | 0.0249 | | 0.0018 | 3.0 | 339 | 0.0147 | | 0.0007 | 4.0 | 452 | 0.0097 | | 0.0001 | 5.0 | 565 | 0.0122 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "Gemma-1000-4bit-qlora", "results": []}]}
mooo16/Gemma-1000-4bit-qlora
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-04-13T15:55:13+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us
Gemma-1000-4bit-qlora ===================== This model is a fine-tuned version of google/gemma-2b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0122 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.03 * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
AnkushJindal28/bio-gpt-3.1
null
[ "transformers", "safetensors", "gpt2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T15:58:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Hussnain47/bart_transformer
null
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:02:31+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bart #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bart #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikisql-sql-nl-nl-sql This model is a fine-tuned version of [Google/mt5-small](https://huggingface.co/Google/mt5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 0.01 | 100 | 28.8027 | 0.0464 | 2.1278 | ### Framework versions - Transformers 4.26.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
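For reference, the hyperparameters listed above map onto a `TrainingArguments` object roughly as in the sketch below; the model and dataset wiring is omitted because the card does not specify it.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in the card; everything else is omitted.
training_args = TrainingArguments(
    output_dir="t5-small-finetuned-wikisql-sql-nl-nl-sql",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=100,
)
```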
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-small-finetuned-wikisql-sql-nl-nl-sql", "results": []}]}
adityarao1612/t5-small-finetuned-wikisql-sql-nl-nl-sql
null
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T16:02:51+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-wikisql-sql-nl-nl-sql ======================================== This model is a fine-tuned version of Google/mt5-small on an unknown dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 100 ### Training results ### Framework versions * Transformers 4.26.0 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.26.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.26.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
text-generation
transformers
Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [CohereForAI/c4ai-command-r-v01 ](https://huggingface.co/CohereForAI/c4ai-command-r-v01). For this quantization, we used 1 codebook of 16 bits. Results: | Model | Quantization | MMLU (5-shot) | GSM8k (8-shot) | Model size, Gb | |------|------|-------|------|------| |CohereForAI/c4ai-command-r-v01| None |0.6755 | 0.6065 | 70.0 | | | 1x16 | 0.5719 | 0.3760 | 12.7 |
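A hedged loading sketch, assuming this checkpoint uses the standard transformers integration for AQLM models (the `aqlm` package must be installed; none of this is stated in the card itself):

```python
# Assumes `pip install aqlm[gpu]` and a transformers version with AQLM support.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/c4ai-command-r-v01-AQLM-2Bit-1x16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```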
{"library_name": "transformers", "tags": ["cohere", "conversational", "10languages", "text-generation-inference", "Inference Endpoints"]}
ISTA-DASLab/c4ai-command-r-v01-AQLM-2Bit-1x16
null
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "10languages", "text-generation-inference", "Inference Endpoints", "arxiv:2401.06118", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:04:27+00:00
[ "2401.06118" ]
[]
TAGS #transformers #safetensors #cohere #text-generation #conversational #10languages #text-generation-inference #Inference Endpoints #arxiv-2401.06118 #autotrain_compatible #endpoints_compatible #region-us
Official AQLM quantization of CohereForAI/c4ai-command-r-v01. For this quantization, we used 1 codebook of 16 bits. Results:
[]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #conversational #10languages #text-generation-inference #Inference Endpoints #arxiv-2401.06118 #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
null
https://civitai.com/models/398746?modelVersionId=444683
{"license": "creativeml-openrail-m"}
LarryAIDraw/oneplus-Keqing-0003
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-13T16:06:37+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
null
null
https://civitai.com/models/398705/march-7th-honkai-star-rail
{"license": "creativeml-openrail-m"}
LarryAIDraw/march_DG
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-13T16:07:00+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
null
null
https://civitai.com/models/398646/haruno-yukinoshitayahari-ore-no-seishun-love-comedy-wa-machigatteiru
{"license": "creativeml-openrail-m"}
LarryAIDraw/haruno_DG
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-13T16:07:25+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
null
null
https://civitai.com/models/398729/utaha-kasumigaoka-saekano
{"license": "creativeml-openrail-m"}
LarryAIDraw/utaha_DG
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-13T16:07:45+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
null
keras
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
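No usage snippet is provided. A hedged sketch, assuming the repo was pushed with the Hub's Keras integration (`from_pretrained_keras` from `huggingface_hub`):

```python
from huggingface_hub import from_pretrained_keras

# Hypothetical usage: works only if the repo follows the Hub's Keras format.
model = from_pretrained_keras("anrhi/mobile_v2_resnet_hybrid_fake_image_detection")
model.summary()
```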
{"library_name": "keras"}
anrhi/mobile_v2_resnet_hybrid_fake_image_detection
null
[ "keras", "region:us" ]
null
2024-04-13T16:09:10+00:00
[]
[]
TAGS #keras #region-us
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
[ "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed" ]
[ "TAGS\n#keras #region-us \n", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HeydarS/tiny_llama_EQ_peft_v52
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:12:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0 - PEFT 0.4.0
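For clarity, the 8-bit settings listed above correspond to a `BitsAndBytesConfig` constructed roughly as follows; the `bnb_4bit_*` fields shown in the card are defaults and are inert when `load_in_8bit=True`:

```python
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization settings listed in the card.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```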
{"library_name": "peft"}
Gopalatius/komodo-7b-chat-indo-v1.0-adapter
null
[ "peft", "region:us" ]
null
2024-04-13T16:14:30+00:00
[]
[]
TAGS #peft #region-us
## Training procedure The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0 - PEFT 0.4.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: True\n- load_in_4bit: False\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: fp4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float32\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: True\n- load_in_4bit: False\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: fp4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float32", "### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: True\n- load_in_4bit: False\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: fp4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float32\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: True\n- load_in_4bit: False\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: fp4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float32", "### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0" ]
image-classification
keras
# Model Card for Model ID

This model card aims to classify emotions into one of seven categories: anger, happy, sad, fear, surprise, disgust, neutral.

## Model Details

Dataset:

- Train:
Happy - 14,379 / Angry - 7988 / Disgust - 872 / Sad - 9768 / Neutral - 9947 / Fear - 8200 / Surprise - 6376

- Test:
Happy - 3599 / Angry - 1918 / Disgust - 222 / Sad - 2386 / Neutral - 2449 / Fear - 2042 / Surprise - 1628

- Val:
Happy - 2880 / Angry - 1600 / Disgust - 172 / Sad - 1954 / Neutral - 1990 / Fear - 1640 / Surprise - 1628

Model:

1. Transfer learning using MobileNetV2 with 2 additional Dense layers and an output layer with a softmax activation function.
2. Used class weights to adjust for class imbalances.
3. Total Params: 3,675,823
4. Trainable Params: 136,839
5. Accuracy: 0.823 | Precision: 0.825 | Recall: 0.823 | F1: 0.821

Room for Improvement:

This model was created with extremely limited hardware acceleration (GPU) resources. It is therefore highly likely that evaluation metrics surpassing the 95% mark could be achieved in the following ways:

1. MobileNetV2 was used for its fast inference and low latency, but with more resources a more suitable base model might be found.
2. Data augmentation to better correct for class imbalances.
3. Using a learning rate scheduler to train for longer (with a lower LR) after nearing a local minimum (approx. 60 epochs).

## Uses

Cannot be used for commercial purposes in the EU.

### Direct Use

Combine with the OpenCV Haar cascade for face detection.

## How to Get Started with the Model

Use the code below to get started with the model locally:

    import cv2
    import numpy as np
    import tensorflow as tf

    def display_emotion(frame, model):
        font = cv2.FONT_HERSHEY_SIMPLEX
        text_color = (0, 0, 255)

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
        faces = face_cascade.detectMultiScale(gray, 1.1, 4)

        for (x, y, w, h) in faces:
            roi_gray = gray[y:y+h, x:x+w]
            roi_color = frame[y:y+h, x:x+w]
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)  # green face box
            # Re-run detection inside the ROI to filter false positives
            roi_faces = face_cascade.detectMultiScale(roi_gray)

            if len(roi_faces) == 0:
                print("Face not detected...")
            else:
                for (ex, ey, ew, eh) in roi_faces:
                    face_roi = roi_color[ey:ey+eh, ex:ex+ew]

                    # MobileNetV2 expects 224x224 inputs with a batch dimension
                    resized_image = cv2.resize(face_roi, (224, 224))
                    final_image = np.expand_dims(resized_image, axis=0)

                    predictions = model.predict(final_image)
                    class_labels = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']
                    predicted_label = class_labels[np.argmax(predictions)]

                    # Black background rectangle for the label text
                    cv2.rectangle(frame, (x, y), (x+w, y-25), (0, 0, 0), -1)
                    cv2.putText(frame, predicted_label, (x, y-10), font, 0.7, text_color, 2)
                    cv2.rectangle(frame, (x, y), (x+w, y+h), text_color)

        return frame

    def main():
        model = tf.keras.models.load_model('emotion_detection.keras')
        cap = cv2.VideoCapture(1)

        if not cap.isOpened():
            cap = cv2.VideoCapture(0)
            if not cap.isOpened():
                raise IOError("Cannot open webcam")

        while True:
            ret, frame = cap.read()
            if not ret:
                break

            frame = display_emotion(frame, model)
            cv2.imshow('Facial Expression Recognition', frame)

            if cv2.waitKey(2) & 0xFF == ord('q'):
                break

        cap.release()
        cv2.destroyAllWindows()

    if __name__ == "__main__":
        main()

### Training Data

Dataset used: FER (available on Kaggle)

#### Preprocessing [optional]

MobileNetV2 receives image inputs of size (224, 224)

#### Speeds, Sizes, Times [optional]

Latency (local demo, no GPU): 39 ms/step

## Model Card Authors [optional]

Ronny Nehme
{"language": ["en"], "library_name": "keras", "pipeline_tag": "image-classification"}
FelaKuti/Emotion-detection
null
[ "keras", "image-classification", "en", "region:us" ]
null
2024-04-13T16:15:02+00:00
[]
[ "en" ]
TAGS #keras #image-classification #en #region-us
# Model Card for Model ID This modelcard aims to classify emotions into one of seven categories: anger, happy, sad, fear, surprise, disgust, neutral. ## Model Details Dataset: - Train: Happy - 14,379 / Angry - 7988 / Disgust - 872 / Sad - 9768 / Neutral - 9947 / Fear - 8200 / Surprise - 6376 - Test: Happy - 3599 / Angry - 1918 / Disgust - 222 / Sad - 2386 / Neutral - 2449 / Fear - 2042 / Surprise - 1628 - Val: Happy - 2880 / Angry - 1600 / Disgust - 172 / Sad - 1954 / Neutral - 1990 / Fear - 1640 / Surprise - 1628 Model: 1. Transfer learning using MobileNetv2 with 2 additional Dense layers and an output layer with softmax activation function. 2. Used weights to adjust for class imbalances. 3. Total Params: 3,675,823 4. Trainable Params: 136,839 5. Accuracy: 0.823 | Precision: 0.825 | Recall: 0.823 | F1: 0.821 Room for Improvement: This model was created with extremely limited hardware acceleration (GPU) resources. Therefore, it is high likely that evaluation metrics that surpass the 95% mark can be achieved in the following manner: 1. MobileNetv2 was used for its fast inference and low latency but perhaps, with more resources, a more suitable base model can be found. 2. Data augmentation in order to better correct for class imbalances. 3. Using a learning rate scheduler to train for longer (with lower LR) after nearing local minima (aprox 60 epochs). ## Uses Cannot be used for commercial purposes in the EU. ### Direct Use Combine with the Open CV haar casacade for face detection. ## How to Get Started with the Model Use the code below to get started with the model locally: import cv2 import numpy as np import tensorflow as tf def display_emotion(frame, model): font = cv2.FONT_HERSHEY_SIMPLEX font_scale = 1.5 text_color = (0, 0, 255) x, y, w, h = 0, 0, 175, 75 gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) face_cascade = cv2.CascadeClassifier(URL.haarcascades + 'haarcascade_frontalface_default.xml') faces = face_cascade.detectMultiScale(gray, 1.1, 4) for x, y, w, h in faces: roi_gray = gray[y:y+h, x:x+w] roi_color = frame[y:y+h, x:x+w] cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2) # Green square faces = face_cascade.detectMultiScale(roi_gray) if len(faces) == 0: print("Face not detected...") else: for (ex, ey, ew, eh) in faces: face_roi = roi_color[ey:ey+eh, ex:ex+ew] resized_image = URL(face_roi, (224, 224)) final_image = np.expand_dims(resized_image, axis=0) predictions = model.predict(final_image) class_labels = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise'] predicted_label = class_labels[URL(predictions)] # Black background rectangle cv2.rectangle(frame, (x, y), (x+w, y-25), (0, 0, 0), -1) # Add text cv2.putText(frame, predicted_label, (x, y-10), font, 0.7, text_color, 2) cv2.rectangle(frame, (x, y), (x+w, y+h), text_color) return frame def main(): model = URL.load_model('emotion_detection.keras') cap = cv2.VideoCapture(1) if not cap.isOpened(): cap = cv2.VideoCapture(0) if not cap.isOpened(): raise IOError("Cannot open webcam") while True: ret, frame = URL() if not ret: break frame = display_emotion(frame, model) URL('Facial Expression Recognition', frame) if cv2.waitKey(2) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() if __name__ == "__main__": main() ### Training Data Dataset used: FER (available on Kaggle) #### Preprocessing [optional] MobileNetv2 recieves image inputs of size (224, 224) #### Speeds, Sizes, Times [optional] Latency (local demo, no GPU): 39 ms/step ## Model Card Authors [optional] Ronny Nehme
[ "# Model Card for Model ID\n\nThis modelcard aims to classify emotions into one of seven categories: anger, happy, sad, fear, surprise, disgust, neutral.", "## Model Details\n\nDataset:\n\n- Train:\nHappy - 14,379 / Angry - 7988 / Disgust - 872 / Sad - 9768 / Neutral - 9947 / Fear - 8200 / Surprise - 6376\n\n- Test:\nHappy - 3599 / Angry - 1918 / Disgust - 222 / Sad - 2386 / Neutral - 2449 / Fear - 2042 / Surprise - 1628\n\n- Val:\nHappy - 2880 / Angry - 1600 / Disgust - 172 / Sad - 1954 / Neutral - 1990 / Fear - 1640 / Surprise - 1628\n\nModel:\n\n1. Transfer learning using MobileNetv2 with 2 additional Dense layers and an output layer with softmax activation function.\n2. Used weights to adjust for class imbalances.\n3. Total Params: 3,675,823\n4. Trainable Params: 136,839\n5. Accuracy: 0.823 | Precision: 0.825 | Recall: 0.823 | F1: 0.821\n\nRoom for Improvement:\n\nThis model was created with extremely limited hardware acceleration (GPU) resources. Therefore, it is high likely that evaluation metrics that surpass the 95% mark can be achieved in the following manner:\n\n1. MobileNetv2 was used for its fast inference and low latency but perhaps, with more resources, a more suitable base model can be found.\n2. Data augmentation in order to better correct for class imbalances.\n3. Using a learning rate scheduler to train for longer (with lower LR) after nearing local minima (aprox 60 epochs).", "## Uses\n\nCannot be used for commercial purposes in the EU.", "### Direct Use\n\nCombine with the Open CV haar casacade for face detection.", "## How to Get Started with the Model\n\nUse the code below to get started with the model locally:\n\n import cv2\n import numpy as np\n import tensorflow as tf\n \n def display_emotion(frame, model):\n font = cv2.FONT_HERSHEY_SIMPLEX\n font_scale = 1.5\n text_color = (0, 0, 255)\n x, y, w, h = 0, 0, 175, 75\n \n gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n face_cascade = cv2.CascadeClassifier(URL.haarcascades + 'haarcascade_frontalface_default.xml')\n faces = face_cascade.detectMultiScale(gray, 1.1, 4)\n \n for x, y, w, h in faces:\n roi_gray = gray[y:y+h, x:x+w]\n roi_color = frame[y:y+h, x:x+w]\n cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2) # Green square\n faces = face_cascade.detectMultiScale(roi_gray)\n \n if len(faces) == 0:\n print(\"Face not detected...\")\n else:\n for (ex, ey, ew, eh) in faces:\n face_roi = roi_color[ey:ey+eh, ex:ex+ew]\n \n resized_image = URL(face_roi, (224, 224))\n final_image = np.expand_dims(resized_image, axis=0)\n \n predictions = model.predict(final_image)\n class_labels = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']\n predicted_label = class_labels[URL(predictions)]\n \n # Black background rectangle\n cv2.rectangle(frame, (x, y), (x+w, y-25), (0, 0, 0), -1)\n # Add text\n cv2.putText(frame, predicted_label, (x, y-10), font, 0.7, text_color, 2)\n cv2.rectangle(frame, (x, y), (x+w, y+h), text_color)\n \n return frame\n \n def main():\n model = URL.load_model('emotion_detection.keras')\n cap = cv2.VideoCapture(1)\n \n if not cap.isOpened():\n cap = cv2.VideoCapture(0)\n if not cap.isOpened():\n raise IOError(\"Cannot open webcam\")\n \n while True:\n ret, frame = URL()\n if not ret:\n break\n \n frame = display_emotion(frame, model)\n URL('Facial Expression Recognition', frame)\n \n if cv2.waitKey(2) & 0xFF == ord('q'):\n break\n \n cap.release()\n cv2.destroyAllWindows()\n \n if __name__ == \"__main__\":\n main()", "### Training Data\n\nDataset used: FER (available on Kaggle)", "#### 
Preprocessing [optional]\n\nMobileNetv2 recieves image inputs of size (224, 224)", "#### Speeds, Sizes, Times [optional]\n\nLatency (local demo, no GPU): 39 ms/step", "## Model Card Authors [optional]\n\nRonny Nehme" ]
[ "TAGS\n#keras #image-classification #en #region-us \n", "# Model Card for Model ID\n\nThis modelcard aims to classify emotions into one of seven categories: anger, happy, sad, fear, surprise, disgust, neutral.", "## Model Details\n\nDataset:\n\n- Train:\nHappy - 14,379 / Angry - 7988 / Disgust - 872 / Sad - 9768 / Neutral - 9947 / Fear - 8200 / Surprise - 6376\n\n- Test:\nHappy - 3599 / Angry - 1918 / Disgust - 222 / Sad - 2386 / Neutral - 2449 / Fear - 2042 / Surprise - 1628\n\n- Val:\nHappy - 2880 / Angry - 1600 / Disgust - 172 / Sad - 1954 / Neutral - 1990 / Fear - 1640 / Surprise - 1628\n\nModel:\n\n1. Transfer learning using MobileNetv2 with 2 additional Dense layers and an output layer with softmax activation function.\n2. Used weights to adjust for class imbalances.\n3. Total Params: 3,675,823\n4. Trainable Params: 136,839\n5. Accuracy: 0.823 | Precision: 0.825 | Recall: 0.823 | F1: 0.821\n\nRoom for Improvement:\n\nThis model was created with extremely limited hardware acceleration (GPU) resources. Therefore, it is high likely that evaluation metrics that surpass the 95% mark can be achieved in the following manner:\n\n1. MobileNetv2 was used for its fast inference and low latency but perhaps, with more resources, a more suitable base model can be found.\n2. Data augmentation in order to better correct for class imbalances.\n3. Using a learning rate scheduler to train for longer (with lower LR) after nearing local minima (aprox 60 epochs).", "## Uses\n\nCannot be used for commercial purposes in the EU.", "### Direct Use\n\nCombine with the Open CV haar casacade for face detection.", "## How to Get Started with the Model\n\nUse the code below to get started with the model locally:\n\n import cv2\n import numpy as np\n import tensorflow as tf\n \n def display_emotion(frame, model):\n font = cv2.FONT_HERSHEY_SIMPLEX\n font_scale = 1.5\n text_color = (0, 0, 255)\n x, y, w, h = 0, 0, 175, 75\n \n gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n face_cascade = cv2.CascadeClassifier(URL.haarcascades + 'haarcascade_frontalface_default.xml')\n faces = face_cascade.detectMultiScale(gray, 1.1, 4)\n \n for x, y, w, h in faces:\n roi_gray = gray[y:y+h, x:x+w]\n roi_color = frame[y:y+h, x:x+w]\n cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2) # Green square\n faces = face_cascade.detectMultiScale(roi_gray)\n \n if len(faces) == 0:\n print(\"Face not detected...\")\n else:\n for (ex, ey, ew, eh) in faces:\n face_roi = roi_color[ey:ey+eh, ex:ex+ew]\n \n resized_image = URL(face_roi, (224, 224))\n final_image = np.expand_dims(resized_image, axis=0)\n \n predictions = model.predict(final_image)\n class_labels = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']\n predicted_label = class_labels[URL(predictions)]\n \n # Black background rectangle\n cv2.rectangle(frame, (x, y), (x+w, y-25), (0, 0, 0), -1)\n # Add text\n cv2.putText(frame, predicted_label, (x, y-10), font, 0.7, text_color, 2)\n cv2.rectangle(frame, (x, y), (x+w, y+h), text_color)\n \n return frame\n \n def main():\n model = URL.load_model('emotion_detection.keras')\n cap = cv2.VideoCapture(1)\n \n if not cap.isOpened():\n cap = cv2.VideoCapture(0)\n if not cap.isOpened():\n raise IOError(\"Cannot open webcam\")\n \n while True:\n ret, frame = URL()\n if not ret:\n break\n \n frame = display_emotion(frame, model)\n URL('Facial Expression Recognition', frame)\n \n if cv2.waitKey(2) & 0xFF == ord('q'):\n break\n \n cap.release()\n cv2.destroyAllWindows()\n \n if __name__ == \"__main__\":\n main()", "### Training 
Data\n\nDataset used: FER (available on Kaggle)", "#### Preprocessing [optional]\n\nMobileNetv2 recieves image inputs of size (224, 224)", "#### Speeds, Sizes, Times [optional]\n\nLatency (local demo, no GPU): 39 ms/step", "## Model Card Authors [optional]\n\nRonny Nehme" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
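As above, the usage section is a placeholder. A hedged sketch, with the repo id from this record and the pipeline task assumed from the `llama` / `text-generation` tags:

```python
from transformers import pipeline

# Task and model family are inferred from the record's tags, not the card.
generator = pipeline("text-generation", model="0x0son0/sl101", device_map="auto")
print(generator("Hello,", max_new_tokens=32)[0]["generated_text"])
```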
{"library_name": "transformers", "tags": []}
0x0son0/sl101
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T16:16:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->


# SDXL LoRA DreamBooth - BlowingWater/corgy_dog_LoRA

<Gallery />

## Model description

These are BlowingWater/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of TOK dog` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](BlowingWater/corgy_dog_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
# (a hedged sketch follows below)
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
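The snippet in the card is still a TODO. Below is a minimal, hedged sketch using standard diffusers LoRA loading; the repo id and trigger phrase come from the card, but this is not an official example:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model named in the card, then attach these LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("BlowingWater/corgy_dog_LoRA")

# Use the trigger phrase from the card.
image = pipe("a photo of TOK dog").images[0]
image.save("tok_dog.png")
```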
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of TOK dog", "widget": []}
BlowingWater/corgy_dog_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-13T16:16:49+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - BlowingWater/corgy_dog_LoRA <Gallery /> ## Model description These are BlowingWater/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of TOK dog to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - BlowingWater/corgy_dog_LoRA\n\n<Gallery />", "## Model description\n\nThese are BlowingWater/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of TOK dog to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - BlowingWater/corgy_dog_LoRA\n\n<Gallery />", "## Model description\n\nThese are BlowingWater/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of TOK dog to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistr This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 10 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
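To try the adapter, a hedged sketch: the base model id comes from the card, the adapter id from this record, and the prompt format is only assumed to follow Mistral-Instruct conventions:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # stated in the card
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # needs accelerate
model = PeftModel.from_pretrained(base, "Yash0109/mistr")  # adapter id from this record
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```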
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistr", "results": []}]}
Yash0109/mistr
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-13T16:21:14+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
# mistr This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 10 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistr\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 10", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "# mistr\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 10", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/gczujao
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:21:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # organc-beit-base-finetuned This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the medmnist-v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.2503 - Accuracy: 0.9256 - Precision: 0.9228 - Recall: 0.9137 - F1: 0.9175 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.8262 | 1.0 | 203 | 0.2666 | 0.8867 | 0.9002 | 0.8322 | 0.8260 | | 0.6431 | 2.0 | 406 | 0.1514 | 0.9536 | 0.9486 | 0.9377 | 0.9397 | | 0.6986 | 3.0 | 609 | 0.1179 | 0.9766 | 0.9710 | 0.9769 | 0.9730 | | 0.5797 | 4.0 | 813 | 0.1045 | 0.9766 | 0.9756 | 0.9768 | 0.9758 | | 0.5475 | 5.0 | 1016 | 0.1281 | 0.9707 | 0.9677 | 0.9662 | 0.9659 | | 0.5518 | 6.0 | 1219 | 0.0765 | 0.9833 | 0.9791 | 0.9842 | 0.9813 | | 0.5167 | 7.0 | 1422 | 0.1065 | 0.9724 | 0.9785 | 0.9701 | 0.9735 | | 0.4417 | 8.0 | 1626 | 0.1027 | 0.9824 | 0.9848 | 0.9834 | 0.9837 | | 0.3555 | 9.0 | 1829 | 0.1286 | 0.9774 | 0.9838 | 0.9778 | 0.9803 | | 0.3552 | 9.99 | 2030 | 0.1046 | 0.9845 | 0.9882 | 0.9857 | 0.9867 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
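## Example usage

The repo contains a PEFT adapter for the BEiT base checkpoint. A minimal, untested inference sketch: the 11-class head size follows from OrganCMNIST's label set, whether the classification head is stored in the adapter is not documented, and `image` stands for a PIL image from the dataset.

```python
import torch
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base_id = "microsoft/beit-base-patch16-224-pt22k-ft22k"
processor = AutoImageProcessor.from_pretrained(base_id)
base = AutoModelForImageClassification.from_pretrained(
    base_id, num_labels=11, ignore_mismatched_sizes=True  # 11 OrganCMNIST classes (assumption)
)
model = PeftModel.from_pretrained(base, "selmamalak/organc-beit-base-finetuned").eval()

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted = model(**inputs).logits.argmax(-1)
```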
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["medmnist-v2"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "microsoft/beit-base-patch16-224-pt22k-ft22k", "model-index": [{"name": "organc-beit-base-finetuned", "results": []}]}
selmamalak/organc-beit-base-finetuned
null
[ "peft", "safetensors", "generated_from_trainer", "dataset:medmnist-v2", "base_model:microsoft/beit-base-patch16-224-pt22k-ft22k", "license:apache-2.0", "region:us" ]
null
2024-04-13T16:22:03+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #dataset-medmnist-v2 #base_model-microsoft/beit-base-patch16-224-pt22k-ft22k #license-apache-2.0 #region-us
organc-beit-base-finetuned ========================== This model is a fine-tuned version of microsoft/beit-base-patch16-224-pt22k-ft22k on the medmnist-v2 dataset. It achieves the following results on the evaluation set: * Loss: 0.2503 * Accuracy: 0.9256 * Precision: 0.9228 * Recall: 0.9137 * F1: 0.9175 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.005 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #dataset-medmnist-v2 #base_model-microsoft/beit-base-patch16-224-pt22k-ft22k #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Irisissocute/fine_tuned_biogpt_2017
null
[ "transformers", "safetensors", "gpt2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T16:25:19+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "google-t5/t5-small"}
dsolomon/t5-small-pubmed-LoRA-r4-i512-o128
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google-t5/t5-small", "region:us" ]
null
2024-04-13T16:30:47+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-google-t5/t5-small #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-google-t5/t5-small #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistralFT22 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 10 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
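## Example usage

One common way to deploy a LoRA adapter like this one is to merge it into the base weights for adapter-free inference. A minimal, untested sketch (the output directory name is arbitrary):

```python
import torch
from peft import AutoPeftModelForCausalLM

# Loads mistralai/Mistral-7B-Instruct-v0.2 plus this adapter in one call,
# then folds the LoRA weights into the base model.
model = AutoPeftModelForCausalLM.from_pretrained("Yash0109/mistralFT22", torch_dtype=torch.bfloat16)
merged = model.merge_and_unload()
merged.save_pretrained("mistralFT22-merged")
```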
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistralFT22", "results": []}]}
Yash0109/mistralFT22
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-13T16:32:49+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
# mistralFT22 This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 10 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistralFT22\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 10", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "# mistralFT22\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 10", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
An alpaca fine-tune of chargoddard's Llamafied Yi-6B. This works fine, unlike the lobotomized Llama model I tried merging it with. # Uploaded model - **Developed by:** reallad - **License:** apache-2.0 - **Finetuned from model :** chargoddard/Yi-6B-Llama This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
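## Example usage

A minimal, untested inference sketch. The Alpaca prompt template below is an assumption based on the card's description of the fine-tune; adjust it if the training format differed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("reallad/llamayialpaca")
model = AutoModelForCausalLM.from_pretrained("reallad/llamayialpaca", device_map="auto")

# Standard Alpaca instruction format (assumed, not confirmed by the card).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```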
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "chargoddard/Yi-6B-Llama"}
reallad/llamayialpaca
null
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:chargoddard/Yi-6B-Llama", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:33:35+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-chargoddard/Yi-6B-Llama #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
An alpaca fine-tune of chargoddard's Llamafied Yi-6B. This works fine, unlike the lobotomized Llama model I tried merging it with. # Uploaded model - Developed by: reallad - License: apache-2.0 - Finetuned from model : chargoddard/Yi-6B-Llama This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: reallad\n- License: apache-2.0\n- Finetuned from model : chargoddard/Yi-6B-Llama\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-chargoddard/Yi-6B-Llama #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: reallad\n- License: apache-2.0\n- Finetuned from model : chargoddard/Yi-6B-Llama\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
This repository contains model checkpoints for the paper ["A Change Detection Reality Check", Corley et al.](https://arxiv.org/abs/2402.06994) published at the [ICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop](https://ml-for-rs.github.io/iclr2024/)

### Abstract

In recent years, there has been an explosion of proposed change detection deep learning architectures in the remote sensing literature. These approaches claim to offer state-of-the-art performance on different standard benchmark datasets. However, has the field truly made significant progress? In this paper we perform experiments which conclude that a simple U-Net segmentation baseline without training tricks or complicated architectural changes is still a top performer for the task of change detection.

### Code
The repository for model loading and experiments is provided [here](https://github.com/isaaccorley/a-change-detection-reality-check).
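### Example

For illustration only, a plain early-fusion U-Net of the kind the abstract refers to can be built with `segmentation_models_pytorch`. The exact architectures and checkpoint-loading code live in the linked repository, so treat this as a sketch rather than the paper's implementation:

```python
import torch
import segmentation_models_pytorch as smp

# Early fusion: concatenate the pre- and post-change RGB images on the
# channel axis, then predict a binary change mask.
model = smp.Unet(encoder_name="resnet50", encoder_weights=None, in_channels=6, classes=1)
pre, post = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
change_logits = model(torch.cat([pre, post], dim=1))  # shape: (1, 1, 256, 256)
```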
{"license": "apache-2.0"}
isaaccorley/a-change-detection-reality-check
null
[ "arxiv:2402.06994", "license:apache-2.0", "region:us" ]
null
2024-04-13T16:37:04+00:00
[ "2402.06994" ]
[]
TAGS #arxiv-2402.06994 #license-apache-2.0 #region-us
This repository contains model checkpoints for the paper "A Change Detection Reality Check", Corley et al. published at the ICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop ### Abstract In recent years, there has been an explosion of proposed change detection deep learning architectures in the remote sensing literature. These approaches claim to offer state-of the-artperformance on different standard benchmark datasets. However, has the field truly made significant progress? In this paper we perform experiments which conclude a simple U-Net segmentation baseline without training tricks or complicated architectural changes is still a top performer for the task of change detection. ### Code The repository for model loading and experiments are provided in here.
[ "### Abstract\n\nIn recent years, there has been an explosion of proposed change detection deep learning architectures in the remote sensing literature. These approaches claim to offer state-of the-artperformance on different standard benchmark datasets. However, has the field truly made significant progress? In this paper we perform experiments which conclude a simple U-Net segmentation baseline without training tricks or complicated architectural changes is still a top performer for the task of change detection.", "### Code\nThe repository for model loading and experiments are provided in here." ]
[ "TAGS\n#arxiv-2402.06994 #license-apache-2.0 #region-us \n", "### Abstract\n\nIn recent years, there has been an explosion of proposed change detection deep learning architectures in the remote sensing literature. These approaches claim to offer state-of the-artperformance on different standard benchmark datasets. However, has the field truly made significant progress? In this paper we perform experiments which conclude a simple U-Net segmentation baseline without training tricks or complicated architectural changes is still a top performer for the task of change detection.", "### Code\nThe repository for model loading and experiments are provided in here." ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so check the repo's *Files and versions* tab for the actual name:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is assumed, not confirmed).
checkpoint = load_from_hub(repo_id="PaoloB27/Lunar-Lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "257.32 +/- 18.90", "name": "mean_reward", "verified": false}]}]}]}
PaoloB27/Lunar-Lander
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-13T16:37:19+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_clf_results This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8239 - Accuracy: 0.7139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0896 | 1.0 | 2701 | 0.8290 | 0.7123 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
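## Example usage

A minimal, untested inference sketch. The training dataset and label names are not documented in this card, so the example input and the returned labels are purely illustrative:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="profoz/bert_clf_results")
print(clf("The service was quick and the staff were friendly."))
```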
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-cased", "model-index": [{"name": "bert_clf_results", "results": []}]}
profoz/bert_clf_results
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:40:49+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert\_clf\_results ================== This model is a fine-tuned version of distilbert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.8239 * Accuracy: 0.7139 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 32 * eval\_batch\_size: 64 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
profoz/distilbert-base-cased-finetuned-stars
null
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:41:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Sand-Red/Lllama_CXR
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:42:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.9025482535362244 f1_macro: 0.6858654529218409 f1_micro: 0.6716867469879518 f1_weighted: 0.676828467951081 precision_macro: 0.7239086041672248 precision_micro: 0.6716867469879518 precision_weighted: 0.7046011538585282 recall_macro: 0.6707409732185557 recall_micro: 0.6716867469879518 recall_weighted: 0.6716867469879518 accuracy: 0.6716867469879518
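For quick experimentation, a minimal inference sketch follows; the model id is taken from this record, while the image path is a placeholder and the meaning of the predicted labels depends on the (unspecified) AutoTrain dataset.

```python
from transformers import pipeline

# Usage sketch only: the label set is defined by the AutoTrain dataset this model was trained on.
classifier = pipeline("image-classification", model="howdyaendra/xblock-large-patch1-224")
print(classifier("example.jpg"))  # placeholder: local path or URL to an image
```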
{"tags": ["autotrain", "image-classification"], "datasets": ["xblock-large-patch1-224/autotrain-data"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
howdyaendra/xblock-large-patch1-224
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "autotrain", "dataset:xblock-large-patch1-224/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:43:05+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #autotrain #dataset-xblock-large-patch1-224/autotrain-data #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.9025482535362244 f1_macro: 0.6858654529218409 f1_micro: 0.6716867469879518 f1_weighted: 0.676828467951081 precision_macro: 0.7239086041672248 precision_micro: 0.6716867469879518 precision_weighted: 0.7046011538585282 recall_macro: 0.6707409732185557 recall_micro: 0.6716867469879518 recall_weighted: 0.6716867469879518 accuracy: 0.6716867469879518
[ "# Model Trained Using AutoTrain\n\n- Problem type: Image Classification", "## Validation Metrics\nloss: 0.9025482535362244\n\nf1_macro: 0.6858654529218409\n\nf1_micro: 0.6716867469879518\n\nf1_weighted: 0.676828467951081\n\nprecision_macro: 0.7239086041672248\n\nprecision_micro: 0.6716867469879518\n\nprecision_weighted: 0.7046011538585282\n\nrecall_macro: 0.6707409732185557\n\nrecall_micro: 0.6716867469879518\n\nrecall_weighted: 0.6716867469879518\n\naccuracy: 0.6716867469879518" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #autotrain #dataset-xblock-large-patch1-224/autotrain-data #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\n- Problem type: Image Classification", "## Validation Metrics\nloss: 0.9025482535362244\n\nf1_macro: 0.6858654529218409\n\nf1_micro: 0.6716867469879518\n\nf1_weighted: 0.676828467951081\n\nprecision_macro: 0.7239086041672248\n\nprecision_micro: 0.6716867469879518\n\nprecision_weighted: 0.7046011538585282\n\nrecall_macro: 0.6707409732185557\n\nrecall_micro: 0.6716867469879518\n\nrecall_weighted: 0.6716867469879518\n\naccuracy: 0.6716867469879518" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
minerba/my_awesome_peft_model
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:43:33+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OCI-DS-6.7B-schema_0 This model is a fine-tuned version of [m-a-p/OpenCodeInterpreter-DS-6.7B](https://huggingface.co/m-a-p/OpenCodeInterpreter-DS-6.7B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.821 | 0.19 | 50 | 0.0000 | | 2.4109 | 0.38 | 100 | 0.0000 | | 0.0018 | 0.57 | 150 | 0.0000 | | 0.0 | 0.76 | 200 | 0.0000 | | 0.5903 | 0.95 | 250 | 0.0000 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "m-a-p/OpenCodeInterpreter-DS-6.7B", "model-index": [{"name": "OCI-DS-6.7B-schema_0", "results": []}]}
jdeklerk10/OCI-DS-6.7B-schema_0
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:m-a-p/OpenCodeInterpreter-DS-6.7B", "license:apache-2.0", "region:us" ]
null
2024-04-13T16:46:01+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-m-a-p/OpenCodeInterpreter-DS-6.7B #license-apache-2.0 #region-us
OCI-DS-6.7B-schema\_0 ===================== This model is a fine-tuned version of m-a-p/OpenCodeInterpreter-DS-6.7B on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0000 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.01 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-m-a-p/OpenCodeInterpreter-DS-6.7B #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_new_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7072 - Accuracy: 0.8386 - F1: 0.8383 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1359 | 1.0 | 534 | 0.5998 | 0.8396 | 0.8406 | | 0.1067 | 2.0 | 1068 | 0.7072 | 0.8386 | 0.8383 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_new_model", "results": []}]}
Galaxyman/my_new_model
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:48:52+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
my\_new\_model ============== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.7072 * Accuracy: 0.8386 * F1: 0.8383 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is a Persian Q/A model fine-tuned from Google's Gemma open-source model. Users can ask it general questions. It can be used for chatbot applications and as a starting point for fine-tuning on other datasets.

- **Developed by:** Ali Bidaran
- **Language(s) (NLP):** Farsi
- **Finetuned from model [optional]:** Gemma2b

## Uses

This model can be used for developing chatbot applications, Q/A, instruction engineering and fine-tuning with other Persian datasets.

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "alibidaran/Gemma2_Farsi"

# 4-bit NF4 quantization so the model fits on a single consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, token=os.environ['HF_TOKEN'])
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},
    token=os.environ['HF_TOKEN'],
)

# "Suggest several methods for reducing body fat."
prompt = "چند روش برای کاهش چربی بدن ارائه نمایید؟"
# Prompt template kept verbatim from the original card (including the "Asistant" spelling)
text = f"<s> ###Human: {prompt} ###Asistant: "
inputs = tokenizer(text, return_tensors='pt').to('cuda')
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=400, do_sample=True, top_p=0.99, top_k=10, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
{"language": ["fa"], "license": "apache-2.0", "library_name": "transformers"}
alibidaran/Gemma2_Farsi
null
[ "transformers", "safetensors", "gemma", "text-generation", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T16:50:04+00:00
[]
[ "fa" ]
TAGS #transformers #safetensors #gemma #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is a Persian Q/A model fine-tuned from Google's Gemma open-source model. Users can ask it general questions. It can be used for chatbot applications and as a starting point for fine-tuning on other datasets. - Developed by: Ali Bidaran - Language(s) (NLP): Farsi - Finetuned from model [optional]: Gemma2b ## Uses This model can be used for developing chatbot applications, Q/A, instruction engineering and fine-tuning with other Persian datasets. ### Direct Use
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis model is Persian Q/A fine-tuned on Google's Gemma open-source model. Users can ask general question from it. It can be used for chatbot applications and fine-tuning for\nother datasets.\n- Developed by: Ali Bidaran\n- Language(s) (NLP): Farsi\n- Finetuned from model [optional]: Gemma2b", "## Uses\nThis model can be used for developing chatbot applications, Q/A, instruction engineering and fine-tuning with other persian datasets.", "### Direct Use" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis model is Persian Q/A fine-tuned on Google's Gemma open-source model. Users can ask general question from it. It can be used for chatbot applications and fine-tuning for\nother datasets.\n- Developed by: Ali Bidaran\n- Language(s) (NLP): Farsi\n- Finetuned from model [optional]: Gemma2b", "## Uses\nThis model can be used for developing chatbot applications, Q/A, instruction engineering and fine-tuning with other persian datasets.", "### Direct Use" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # organc-vit-base-finetuned This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the medmnist-v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.2248 - Accuracy: 0.9283 - Precision: 0.9231 - Recall: 0.9160 - F1: 0.9189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.6907 | 1.0 | 203 | 0.2221 | 0.9202 | 0.9165 | 0.8691 | 0.8480 | | 0.5616 | 2.0 | 406 | 0.1278 | 0.9720 | 0.9657 | 0.9694 | 0.9666 | | 0.5515 | 3.0 | 609 | 0.1428 | 0.9649 | 0.9626 | 0.9640 | 0.9621 | | 0.4941 | 4.0 | 813 | 0.1016 | 0.9724 | 0.9683 | 0.9696 | 0.9683 | | 0.4764 | 5.0 | 1016 | 0.0998 | 0.9716 | 0.9654 | 0.9649 | 0.9637 | | 0.4599 | 6.0 | 1219 | 0.0941 | 0.9758 | 0.9775 | 0.9788 | 0.9778 | | 0.4525 | 7.0 | 1422 | 0.0861 | 0.9795 | 0.9812 | 0.9793 | 0.9800 | | 0.3835 | 8.0 | 1626 | 0.0788 | 0.9849 | 0.9846 | 0.9850 | 0.9847 | | 0.2767 | 9.0 | 1829 | 0.0935 | 0.9774 | 0.9805 | 0.9800 | 0.9798 | | 0.299 | 9.99 | 2030 | 0.0701 | 0.9854 | 0.9843 | 0.9864 | 0.9852 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["medmnist-v2"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "organc-vit-base-finetuned", "results": []}]}
selmamalak/organc-vit-base-finetuned
null
[ "peft", "safetensors", "generated_from_trainer", "dataset:medmnist-v2", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "region:us" ]
null
2024-04-13T16:50:31+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #dataset-medmnist-v2 #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #region-us
organc-vit-base-finetuned ========================= This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the medmnist-v2 dataset. It achieves the following results on the evaluation set: * Loss: 0.2248 * Accuracy: 0.9283 * Precision: 0.9231 * Recall: 0.9160 * F1: 0.9189 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.005 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #dataset-medmnist-v2 #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NeRUBioS_RoBERTa_base_bne_Training_Development This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3499 - Negref Precision: 0.5449 - Negref Recall: 0.5380 - Negref F1: 0.5414 - Neg Precision: 0.9559 - Neg Recall: 0.9694 - Neg F1: 0.9626 - Nsco Precision: 0.8730 - Nsco Recall: 0.9062 - Nsco F1: 0.8893 - Unc Precision: 0.8315 - Unc Recall: 0.8764 - Unc F1: 0.8534 - Usco Precision: 0.6608 - Usco Recall: 0.7383 - Usco F1: 0.6974 - Precision: 0.8205 - Recall: 0.8453 - F1: 0.8327 - Accuracy: 0.9526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Negref Precision | Negref Recall | Negref F1 | Neg Precision | Neg Recall | Neg F1 | Nsco Precision | Nsco Recall | Nsco F1 | Unc Precision | Unc Recall | Unc F1 | Usco Precision | Usco Recall | Usco F1 | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:---------:|:------:|:------:|:--------:| | 0.1898 | 1.0 | 1729 | 0.1783 | 0.4516 | 0.5316 | 0.4884 | 0.9351 | 0.9596 | 0.9472 | 0.8079 | 0.8539 | 0.8303 | 0.8193 | 0.7529 | 0.7847 | 0.5816 | 0.6406 | 0.6097 | 0.7596 | 0.8041 | 0.7813 | 0.9452 | | 0.1163 | 2.0 | 3458 | 0.1724 | 0.4906 | 0.5527 | 0.5198 | 0.9274 | 0.9760 | 0.9511 | 0.8252 | 0.9026 | 0.8622 | 0.8263 | 0.8263 | 0.8263 | 0.5662 | 0.6680 | 0.6129 | 0.7721 | 0.8376 | 0.8036 | 0.9485 | | 0.0621 | 3.0 | 5187 | 0.1946 | 0.5139 | 0.5063 | 0.5101 | 0.9524 | 0.9618 | 0.9571 | 0.8542 | 0.8836 | 0.8687 | 0.8071 | 0.8726 | 0.8386 | 0.6034 | 0.6836 | 0.6410 | 0.7999 | 0.8249 | 0.8122 | 0.9480 | | 0.0378 | 4.0 | 6916 | 0.2279 | 0.4923 | 0.5401 | 0.5151 | 0.9450 | 0.9749 | 0.9597 | 0.8568 | 0.8884 | 0.8723 | 0.8259 | 0.8610 | 0.8431 | 0.6179 | 0.6758 | 0.6455 | 0.7940 | 0.8347 | 0.8138 | 0.9490 | | 0.0192 | 5.0 | 8645 | 0.2495 | 0.5227 | 0.5338 | 0.5282 | 0.9541 | 0.9760 | 0.9649 | 0.8256 | 0.8884 | 0.8558 | 0.8071 | 0.8726 | 0.8386 | 0.6049 | 0.6758 | 0.6384 | 0.7929 | 0.8351 | 0.8135 | 0.9508 | | 0.0134 | 6.0 | 10374 | 0.2764 | 0.5199 | 0.5232 | 0.5216 | 0.9568 | 0.9672 | 0.9620 | 0.8687 | 0.8955 | 0.8819 | 0.8277 | 0.8533 | 0.8403 | 0.6389 | 0.7188 | 0.6765 | 0.8114 | 0.8347 | 0.8229 | 0.9514 | | 0.0068 | 7.0 | 12103 | 0.2876 | 0.4880 | 0.5169 | 0.5020 | 0.9470 | 0.9760 | 0.9613 | 0.8593 | 0.8919 | 0.8753 | 0.8494 | 0.8494 | 0.8494 | 0.6456 | 0.7188 | 0.6802 | 0.8010 | 0.8351 | 0.8177 | 0.9508 | | 0.0059 | 8.0 | 13832 | 0.2886 | 0.4991 | 0.5591 | 0.5274 | 0.9488 | 0.9705 | 0.9595 | 0.8601 | 0.8907 | 0.8751 | 0.8231 | 0.8803 | 0.8507 | 0.6528 | 0.7344 | 0.6912 | 0.7986 | 0.8446 | 0.8209 | 0.9516 | | 0.0029 | 
9.0 | 15561 | 0.3290 | 0.5408 | 0.4895 | 0.5138 | 0.9529 | 0.9716 | 0.9622 | 0.8653 | 0.9002 | 0.8824 | 0.8218 | 0.8726 | 0.8464 | 0.6090 | 0.7422 | 0.6690 | 0.8125 | 0.8358 | 0.8240 | 0.9505 | | 0.0009 | 10.0 | 17290 | 0.3582 | 0.5438 | 0.5105 | 0.5267 | 0.9519 | 0.9716 | 0.9616 | 0.8757 | 0.9038 | 0.8895 | 0.8218 | 0.8726 | 0.8464 | 0.6737 | 0.75 | 0.7098 | 0.8227 | 0.8413 | 0.8319 | 0.9506 | | 0.0012 | 11.0 | 19019 | 0.3516 | 0.5139 | 0.5443 | 0.5287 | 0.9539 | 0.9705 | 0.9621 | 0.8834 | 0.9086 | 0.8958 | 0.8291 | 0.8803 | 0.8539 | 0.6761 | 0.75 | 0.7111 | 0.8157 | 0.8489 | 0.8320 | 0.9526 | | 0.0005 | 12.0 | 20748 | 0.3499 | 0.5449 | 0.5380 | 0.5414 | 0.9559 | 0.9694 | 0.9626 | 0.8730 | 0.9062 | 0.8893 | 0.8315 | 0.8764 | 0.8534 | 0.6608 | 0.7383 | 0.6974 | 0.8205 | 0.8453 | 0.8327 | 0.9526 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
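The model tags negation and uncertainty cues and their scopes (NEG/NSCO/UNC/USCO, plus NEGREF, per the metrics above). A minimal inference sketch with an illustrative Spanish clinical sentence follows; the sentence is an assumption, not taken from the training data.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ajtamayoh/NeRUBioS_RoBERTa_base_bne_Training_Development",
    aggregation_strategy="simple",
)
# Illustrative input: "The patient does not present fever or respiratory difficulty."
print(ner("El paciente no presenta fiebre ni dificultad respiratoria."))
```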
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "PlanTL-GOB-ES/roberta-base-bne", "model-index": [{"name": "NeRUBioS_RoBERTa_base_bne_Training_Development", "results": []}]}
ajtamayoh/NeRUBioS_RoBERTa_base_bne_Training_Development
null
[ "transformers", "tensorboard", "safetensors", "roberta", "token-classification", "generated_from_trainer", "base_model:PlanTL-GOB-ES/roberta-base-bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:56:22+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #base_model-PlanTL-GOB-ES/roberta-base-bne #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
NeRUBioS\_RoBERTa\_base\_bne\_Training\_Development =================================================== This model is a fine-tuned version of PlanTL-GOB-ES/roberta-base-bne on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3499 * Negref Precision: 0.5449 * Negref Recall: 0.5380 * Negref F1: 0.5414 * Neg Precision: 0.9559 * Neg Recall: 0.9694 * Neg F1: 0.9626 * Nsco Precision: 0.8730 * Nsco Recall: 0.9062 * Nsco F1: 0.8893 * Unc Precision: 0.8315 * Unc Recall: 0.8764 * Unc F1: 0.8534 * Usco Precision: 0.6608 * Usco Recall: 0.7383 * Usco F1: 0.6974 * Precision: 0.8205 * Recall: 0.8453 * F1: 0.8327 * Accuracy: 0.9526 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 12 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #token-classification #generated_from_trainer #base_model-PlanTL-GOB-ES/roberta-base-bne #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prueba-coser-whisper This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "openai/whisper-small", "model-index": [{"name": "prueba-coser-whisper", "results": []}]}
cladsu/prueba-coser-whisper
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-13T16:58:11+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #has_space #region-us
# prueba-coser-whisper This model is a fine-tuned version of openai/whisper-small on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# prueba-coser-whisper\n\nThis model is a fine-tuned version of openai/whisper-small on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- training_steps: 4000\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# prueba-coser-whisper\n\nThis model is a fine-tuned version of openai/whisper-small on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- training_steps: 4000\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# OpenCerebrum-2.0-7B OpenCerebrum-2.0-7B is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating capabilities of Aether Research's proprietary Cerebrum model. The model was fine-tuned with SFT and DPO on approximately 7,000 examples across 15 data sources spanning coding, math, science, multi-turn conversation, RAG, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels. ## Model Details - **Base Model:** alpindale/Mistral-7B-v0.2-hf - **Parameters:** 7 billion - **Fine-Tuning Dataset Size:** ~7,000 examples - **Fine-Tuning Data:** Advanced in-house curation techniques at Cognitive Computations, with 15 different data sources for DPO and SFT. - **Language:** English - **License:** Apache 2.0 ## Quants ### EXL2 [@bartowski](https://huggingface.co/bartowski/) - https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-exl2 ### GGUF [@bartowski](https://huggingface.co/bartowski/) - https://huggingface.co/bartowski/OpenCerebrum-2.0-7B-GGUF ## Intended Use OpenCerebrum-2.0-7B is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities. However, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs. ## Limitations and Biases - The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these. - As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models. ## Evaluations | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |--------------|------:|------|-----:|------|-----:|---|-----:| |truthfulqa_mc2| 2|none | 0|acc |0.5182|± |0.0152| |ai2_arc |N/A |none | 0|acc |0.7060|± |0.0073| | | |none | 0|acc_norm|0.7049|± |0.0074| | - arc_challenge | 1|none | 0|acc |0.5000|± |0.0146| | | |none | 0|acc_norm|0.5299|± |0.0146| | - arc_easy | 1|none | 0|acc |0.8077|± |0.0081| | | |none | 0|acc_norm|0.7912|± |0.0083| |agieval_nous |N/A |none | 0|acc |0.3778|± |0.0093| | | |none | 0|acc_norm|0.3574|± |0.0093| | - agieval_aqua_rat | 1|none | 0|acc |0.2402|± |0.0269| | | |none | 0|acc_norm|0.2205|± |0.0261| | - agieval_logiqa_en | 1|none | 0|acc |0.3164|± |0.0182| | | |none | 0|acc_norm|0.3656|± |0.0189| | - agieval_lsat_ar | 1|none | 0|acc |0.2130|± |0.0271| | | |none | 0|acc_norm|0.1913|± |0.0260| | - agieval_lsat_lr | 1|none | 0|acc |0.4078|± |0.0218| | | |none | 0|acc_norm|0.3647|± |0.0213| | - agieval_lsat_rc | 1|none | 0|acc |0.4981|± |0.0305| | | |none | 0|acc_norm|0.4498|± |0.0304| | - agieval_sat_en | 1|none | 0|acc |0.6650|± |0.0330| | | |none | 0|acc_norm|0.5922|± |0.0343| | - agieval_sat_en_without_passage| 1|none | 0|acc |0.4612|± |0.0348| | | |none | 0|acc_norm|0.3932|± |0.0341| | - agieval_sat_math | 1|none | 0|acc |0.3273|± |0.0317| | | |none | 0|acc_norm|0.2818|± |0.0304|
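As a usage note for the card above, here is a minimal text-generation sketch with transformers. The repo id is this record's; the dtype, device mapping, and sampling parameters are illustrative assumptions rather than settings recommended by the author.

```python
# Minimal sketch: sample a completion from OpenCerebrum-2.0-7B.
# dtype, device_map, and sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/OpenCerebrum-2.0-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain the difference between ionic and covalent bonds."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```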
{"language": ["en"], "license": "apache-2.0", "tags": ["open-source", "code", "math", "chemistry", "biology", "text-generation", "question-answering"], "pipeline_tag": "text-generation"}
Locutusque/OpenCerebrum-2.0-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "open-source", "code", "math", "chemistry", "biology", "question-answering", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T16:58:36+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #open-source #code #math #chemistry #biology #question-answering #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
OpenCerebrum-2.0-7B =================== OpenCerebrum-2.0-7B is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating capabilities of Aether Research's proprietary Cerebrum model. The model was fine-tuned with SFT and DPO on approximately 7,000 examples across 15 data sources spanning coding, math, science, multi-turn conversation, RAG, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels. Model Details ------------- * Base Model: alpindale/Mistral-7B-v0.2-hf * Parameters: 7 billion * Fine-Tuning Dataset Size: ~7,000 examples * Fine-Tuning Data: Advanced in-house curation techniques at Cognitive Computations, with 15 different data sources for DPO and SFT. * Language: English * License: Apache 2.0 Quants ------ ### EXL2 @bartowski * URL ### GGUF @bartowski * URL Intended Use ------------ OpenCerebrum-2.0-7B is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities. However, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs. Limitations and Biases ---------------------- * The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these. * As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models. Evaluations -----------
[ "### EXL2 @bartowski\n\n\n* URL", "### GGUF @bartowski\n\n\n* URL\n\n\nIntended Use\n------------\n\n\nOpenCerebrum-2.0-7B is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.\n\n\nHowever, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs.\n\n\nLimitations and Biases\n----------------------\n\n\n* The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.\n* As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models.\n\n\nEvaluations\n-----------" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #open-source #code #math #chemistry #biology #question-answering #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### EXL2 @bartowski\n\n\n* URL", "### GGUF @bartowski\n\n\n* URL\n\n\nIntended Use\n------------\n\n\nOpenCerebrum-2.0-7B is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.\n\n\nHowever, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs.\n\n\nLimitations and Biases\n----------------------\n\n\n* The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.\n* As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models.\n\n\nEvaluations\n-----------" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yuzheguo/mlma-lab8-good-model
null
[ "transformers", "safetensors", "gpt2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T16:58:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-finetuned-ner This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2258 - Precision: 0.8862 - Recall: 0.9023 - F1: 0.8942 - Accuracy: 0.9472 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2995 | 1.0 | 2500 | 0.2595 | 0.8672 | 0.8846 | 0.8758 | 0.9411 | | 0.2017 | 2.0 | 5000 | 0.2181 | 0.8808 | 0.8946 | 0.8877 | 0.9451 | | 0.1604 | 3.0 | 7500 | 0.2258 | 0.8862 | 0.9023 | 0.8942 | 0.9472 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "camembert-base", "model-index": [{"name": "camembert-finetuned-ner", "results": []}]}
ManoloMtl/camembert-finetuned-ner
null
[ "transformers", "tensorboard", "safetensors", "camembert", "token-classification", "generated_from_trainer", "base_model:camembert-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T16:59:26+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #camembert #token-classification #generated_from_trainer #base_model-camembert-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
camembert-finetuned-ner ======================= This model is a fine-tuned version of camembert-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2258 * Precision: 0.8862 * Recall: 0.9023 * F1: 0.8942 * Accuracy: 0.9472 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #camembert #token-classification #generated_from_trainer #base_model-camembert-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"}
alexgrigoras/mistral_7b_finetuned_custom_data
null
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-04-13T16:59:32+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #mistral #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.7.1
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.1" ]
[ "TAGS\n#peft #safetensors #mistral #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
LuisGon/Fourth_Model
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:00:10+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
minerba/kogpt2-lora
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:02:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [arcee-ai/sec-mistral-7b-instruct-1.6-epoch](https://huggingface.co/arcee-ai/sec-mistral-7b-instruct-1.6-epoch) * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: arcee-ai/sec-mistral-7b-instruct-1.6-epoch layer_range: [0, 32] - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 layer_range: [0, 32] merge_method: slerp base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
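A merge like this can be reproduced locally by feeding the configuration above to mergekit. The following is a minimal sketch only, assuming the YAML is saved as `config.yml` and that the output path is illustrative; the `run_merge`/`MergeOptions` Python API is taken from mergekit's README (the CLI one-liner `mergekit-yaml config.yml ./merged-model` is the equivalent):

```python
# Sketch: re-running the SLERP merge above with mergekit's Python API.
# Assumes `config.yml` holds the YAML configuration shown in this card.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",      # illustrative output directory
    options=MergeOptions(
        cuda=False,                 # set True to run the merge on GPU
        copy_tokenizer=True,        # carry over the base model's tokenizer
    ),
)
```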
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]}
mergekit-community/mergekit-slerp-wvyefgo
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T17:03:58+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-arcee-ai/sec-mistral-7b-instruct-1.6-epoch #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * arcee-ai/sec-mistral-7b-instruct-1.6-epoch * cognitivecomputations/dolphin-2.8-mistral-7b-v02 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* arcee-ai/sec-mistral-7b-instruct-1.6-epoch\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-arcee-ai/sec-mistral-7b-instruct-1.6-epoch #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* arcee-ai/sec-mistral-7b-instruct-1.6-epoch\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

weighted/imatrix quants of https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ1_S.gguf) | i1-IQ1_S | 29.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ1_M.gguf) | i1-IQ1_M | 32.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 42.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ2_S.gguf) | i1-IQ2_S | 42.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ2_M.gguf) | i1-IQ2_M | 46.8 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 52.2 | IQ3_XXS probably better |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 55.0 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 58.3 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 61.6 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 64.6 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 67.9 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 72.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 75.6 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 80.0 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 80.6 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 97.1 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q5_K_M.gguf.part3of3) | i1-Q5_K_M | 100.1 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF/resolve/main/Tess-2.0-Mixtral-8x22B.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 115.6 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
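For the multi-part files in the table above, the parts must be concatenated byte-for-byte into a single `.gguf` file before loading (this is the approach the linked READMEs describe). A minimal sketch in Python, using the Q4_K_S file names from the table as the example; any split quant works the same way:

```python
# Sketch: stitch a split quant back into one GGUF file (pure byte
# concatenation in part order; no re-encoding is involved).
import shutil

parts = [
    "Tess-2.0-Mixtral-8x22B.i1-Q4_K_S.gguf.part1of2",
    "Tess-2.0-Mixtral-8x22B.i1-Q4_K_S.gguf.part2of2",
]
with open("Tess-2.0-Mixtral-8x22B.i1-Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append raw bytes of each part
```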
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "migtissera/Tess-2.0-Mixtral-8x22B", "quantized_by": "mradermacher"}
mradermacher/Tess-2.0-Mixtral-8x22B-i1-GGUF
null
[ "transformers", "gguf", "en", "base_model:migtissera/Tess-2.0-Mixtral-8x22B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:07:19+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-migtissera/Tess-2.0-Mixtral-8x22B #license-apache-2.0 #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-migtissera/Tess-2.0-Mixtral-8x22B #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
hiraltalsaniya/01_medical-llama2-7b-fine-tune
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-13T17:15:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
[<img src="dicta-logo.jpg" width="300px"/>](https://dicta.org.il) # Model Card for DictaLM-2.0-AWQ The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text. For full details of this model please read our [release blog post](https://dicta.org.il/dicta-lm). This model contains the GPTQ 4-bit quantized version of the base model [DictaLM-2.0](https://huggingface.co/dicta-il/dictalm2.0). You can view and access the full collection of base/instruct unquantized/quantized versions of `DictaLM-2.0` [here](https://huggingface.co/collections/dicta-il/dicta-lm-20-collection-661bbda397df671e4a430c27). ## Example Code Running this code requires ~5.1GB of GPU VRAM. ```python from transformers import pipeline # This loads the model onto the GPU in bfloat16 precision model = pipeline('text-generation', 'dicta-il/dictalm2.0-GPTQ', device_map='cuda') # Sample few shot examples prompt = """ עבר: הלכתי עתיד: אלך עבר: שמרתי עתיד: אשמור עבר: שמעתי עתיד: אשמע עבר: הבנתי עתיד: """ print(model(prompt.strip(), do_sample=False, max_new_tokens=4, stop_sequence='\n')) # [{'generated_text': 'עבר: הלכתי\nעתיד: אלך\n\nעבר: שמרתי\nעתיד: אשמור\n\nעבר: שמעתי\nעתיד: אשמע\n\nעבר: הבנתי\nעתיד: אבין\n\n'}] ``` ## Model Architecture DictaLM-2.0 is based on the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model with the following changes: - An extended tokenizer with tokens for Hebrew, increasing the compression ratio - An extended tokenizer with 1,000 injected tokens specifically for Hebrew, increasing the compression rate from 5.78 tokens/word to 2.76 tokens/word. ## Notice DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms. ## Citation If you use this model, please cite: ```bibtex [Will be added soon] ```
{"language": ["en", "he"], "license": "apache-2.0", "tags": ["pretrained"], "pipeline_tag": "text-generation", "inference": false}
dicta-il/dictalm2.0-GPTQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "pretrained", "en", "he", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-13T17:16:25+00:00
[]
[ "en", "he" ]
TAGS #transformers #safetensors #mistral #text-generation #pretrained #en #he #license-apache-2.0 #autotrain_compatible #text-generation-inference #4-bit #region-us
<img src="URL" width="300px"/> # Model Card for DictaLM-2.0-AWQ The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text. For full details of this model please read our release blog post. This model contains the GPTQ 4-bit quantized version of the base model DictaLM-2.0. You can view and access the full collection of base/instruct unquantized/quantized versions of 'DictaLM-2.0' here. ## Example Code Running this code requires ~5.1GB of GPU VRAM. ## Model Architecture DictaLM-2.0 is based on the Mistral-7B-v0.1 model with the following changes: - An extended tokenizer with tokens for Hebrew, increasing the compression ratio - An extended tokenizer with 1,000 injected tokens specifically for Hebrew, increasing the compression rate from 5.78 tokens/word to 2.76 tokens/word. ## Notice DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms. If you use this model, please cite:
[ "# Model Card for DictaLM-2.0-AWQ\n\nThe DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text. \n\nFor full details of this model please read our release blog post.\n\nThis model contains the GPTQ 4-bit quantized version of the base model DictaLM-2.0.\n\nYou can view and access the full collection of base/instruct unquantized/quantized versions of 'DictaLM-2.0' here.", "## Example Code\n\nRunning this code requires ~5.1GB of GPU VRAM.", "## Model Architecture\n\nDictaLM-2.0 is based on the Mistral-7B-v0.1 model with the following changes:\n- An extended tokenizer with tokens for Hebrew, increasing the compression ratio\n- An extended tokenizer with 1,000 injected tokens specifically for Hebrew, increasing the compression rate from 5.78 tokens/word to 2.76 tokens/word.", "## Notice\n\nDictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.\n\nIf you use this model, please cite:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #pretrained #en #he #license-apache-2.0 #autotrain_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for DictaLM-2.0-AWQ\n\nThe DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text. \n\nFor full details of this model please read our release blog post.\n\nThis model contains the GPTQ 4-bit quantized version of the base model DictaLM-2.0.\n\nYou can view and access the full collection of base/instruct unquantized/quantized versions of 'DictaLM-2.0' here.", "## Example Code\n\nRunning this code requires ~5.1GB of GPU VRAM.", "## Model Architecture\n\nDictaLM-2.0 is based on the Mistral-7B-v0.1 model with the following changes:\n- An extended tokenizer with tokens for Hebrew, increasing the compression ratio\n- An extended tokenizer with 1,000 injected tokens specifically for Hebrew, increasing the compression rate from 5.78 tokens/word to 2.76 tokens/word.", "## Notice\n\nDictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.\n\nIf you use this model, please cite:" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
kimgahyeon/dot_0412
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:20:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/ucfa43e
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:21:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # organc-swin-base-finetuned This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-large-patch4-window7-224-in22k) on the medmnist-v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.2794 - Accuracy: 0.9199 - Precision: 0.9195 - Recall: 0.9067 - F1: 0.9121 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.7224 | 1.0 | 203 | 0.2140 | 0.9373 | 0.9555 | 0.9299 | 0.9403 | | 0.6132 | 2.0 | 406 | 0.1452 | 0.9640 | 0.9707 | 0.9587 | 0.9619 | | 0.6254 | 3.0 | 609 | 0.1497 | 0.9670 | 0.9633 | 0.9665 | 0.9639 | | 0.5769 | 4.0 | 813 | 0.0925 | 0.9804 | 0.9775 | 0.9799 | 0.9781 | | 0.5568 | 5.0 | 1016 | 0.1554 | 0.9624 | 0.9668 | 0.9620 | 0.9627 | | 0.5332 | 6.0 | 1219 | 0.1187 | 0.9799 | 0.9847 | 0.9818 | 0.9829 | | 0.4788 | 7.0 | 1422 | 0.1360 | 0.9666 | 0.9742 | 0.9637 | 0.9678 | | 0.4178 | 8.0 | 1626 | 0.0988 | 0.9820 | 0.9826 | 0.9829 | 0.9825 | | 0.3369 | 9.0 | 1829 | 0.1799 | 0.9762 | 0.9846 | 0.9791 | 0.9814 | | 0.3054 | 9.99 | 2030 | 0.1454 | 0.9804 | 0.9841 | 0.9830 | 0.9832 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["medmnist-v2"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "microsoft/swin-large-patch4-window7-224-in22k", "model-index": [{"name": "organc-swin-base-finetuned", "results": []}]}
selmamalak/organc-swin-base-finetuned
null
[ "peft", "safetensors", "generated_from_trainer", "dataset:medmnist-v2", "base_model:microsoft/swin-large-patch4-window7-224-in22k", "license:apache-2.0", "region:us" ]
null
2024-04-13T17:23:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #dataset-medmnist-v2 #base_model-microsoft/swin-large-patch4-window7-224-in22k #license-apache-2.0 #region-us
organc-swin-base-finetuned ========================== This model is a fine-tuned version of microsoft/swin-large-patch4-window7-224-in22k on the medmnist-v2 dataset. It achieves the following results on the evaluation set: * Loss: 0.2794 * Accuracy: 0.9199 * Precision: 0.9195 * Recall: 0.9067 * F1: 0.9121 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.005 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #dataset-medmnist-v2 #base_model-microsoft/swin-large-patch4-window7-224-in22k #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
JoyboyXoXo/Enlighten_Instruct
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-13T17:24:14+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).

For this quantization, we used 2 codebooks of 8 bits.

Results:

| Model | Quantization | MMLU (5-shot) | Model size, Gb |
|------|------|------|------|
| mistralai/Mistral-7B-Instruct-v0.2 | None | 0.5912 | 14.5 |
| | 2x8 | 0.4384 | 2.3 |
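A minimal loading sketch, assuming a recent `transformers` release with built-in AQLM support and the `aqlm` package installed (`pip install aqlm[gpu]`); the card itself does not pin versions:

```python
# Hedged example: loading the 2x8 AQLM checkpoint through transformers' AQLM integration.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ISTA-DASLab/Mistral-7B-Instruct-v0.2-AQLM-2Bit-2x8"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize AQLM in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```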
{"library_name": "transformers", "tags": ["mistral", "finetuned", "conversational", "text-generation-inference"]}
ISTA-DASLab/Mistral-7B-Instruct-v0.2-AQLM-2Bit-2x8
null
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "conversational", "text-generation-inference", "arxiv:2401.06118", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-13T17:24:59+00:00
[ "2401.06118" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #finetuned #conversational #text-generation-inference #arxiv-2401.06118 #autotrain_compatible #endpoints_compatible #8-bit #region-us
Official AQLM quantization of mistralai/Mistral-7B-Instruct-v0.2. For this quantization, we used 2 codebooks of 8 bits.

Results:
[]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #finetuned #conversational #text-generation-inference #arxiv-2401.06118 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n" ]
null
transformers
# Uploaded model - **Developed by:** czaplon - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
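The card gives no loading snippet; a plausible sketch using Unsloth's `FastLanguageModel` follows, where the sequence length and 4-bit flag are assumptions rather than documented settings:

```python
# Hedged sketch: loading the fine-tuned model with Unsloth for fast inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="czaplon/new-postQQ-german",
    max_seq_length=2048,   # assumed; not stated in the card
    load_in_4bit=True,     # assumed, matching the 4-bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

inputs = tokenizer("Schreibe einen kurzen Beitrag:", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```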
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
czaplon/new-postQQ-german
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:25:30+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: czaplon - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: czaplon\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: czaplon\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # diaratechHf This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 2 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
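Since the card omits inference code, a hedged sketch of attaching this LoRA adapter to the Mistral base model; the `[INST]` prompt format is assumed from the base model's conventions, and the adapter weights are assumed to sit at the repo root:

```python
# Hedged sketch: loading the trained adapter on top of Mistral-7B-Instruct-v0.2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Yash0109/diaratechHf")

prompt = "[INST] Hello, what can you do? [/INST]"  # Mistral-instruct chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```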
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "pipeline_tag": "text-generation", "model-index": [{"name": "diaratechHf", "results": []}]}
Yash0109/diaratechHf
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "text-generation", "conversational", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-13T17:25:31+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #text-generation #conversational #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
# diaratechHf This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 2 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# diaratechHf\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 2", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #text-generation #conversational #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "# diaratechHf\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_steps: 0.03\n- training_steps: 2", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
![tinyviking.png](https://huggingface.co/phanerozoic/Tiny-Viking-1.1b-v0.1/resolve/main/tinyviking.jpg)
# TinyViking-1.1B-v0.1
TinyViking-1.1B-v0.1 is a specialized language model designed for generating Viking-themed content. Developed by phanerozoic, this model is fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0, optimized for environments with limited computing resources.

### Performance
TinyViking is capable of generating engaging Viking narratives, reflecting an understanding of Viking culture. However, it is not designed for general language tasks and may struggle with complex scientific or technical queries.

### Direct Use
Ideal for thematic language generation, particularly in settings like NPCs in games, where fun and thematic engagement are prioritized over detailed factual accuracy.

### Training Data
Trained on "The Saga of Grettir the Strong: Grettir's Saga" to ensure authentic thematic content.

### Custom Stopping Strings
Custom stopping strings are employed to enhance output quality:
- "},"
- "User:"
- "You:"
- "\nUser"
- "\nUser:"
- "me:"
- "user"
- "\n"

### Training Hyperparameters and Fine-Tuning Details
- **Learning Rate**: 2e-5
- **Epochs**: 1
- **Training Duration**: Approximately 5.6 minutes on an RTX 6000 Ada GPU
- **LoRA Rank**: 2048
- **LoRA Alpha**: 4096
- **LoRA Dropout**: 0.05
- **Cutoff Length**: 256
- **Batch Size**: 4 (micro batch size)
- **Warmup Steps**: 8
- **Optimizer**: adamw_torch
- **Gradient Accumulation Steps**: 1

### Limitations
Specialized in Viking dialect and narratives, TinyViking is less effective outside its thematic focus.

### Compute Infrastructure
Trained on an RTX 6000 Ada Lovelace GPU

### Results
Successfully generates Viking-themed responses, maintaining thematic consistency while displaying improved coherence and depth over previous models due to advancements in dataset generation and parsing.

### Summary
TinyViking-1.1B-v0.1 shows an improvement in quality compared to earlier thematic models, thanks to a new dataset generation method that helps to conserve the base model's already tenuous ability to hold a conversation. While it excels in Viking-themed interactions, its specialized focus limits broader application.

### Acknowledgments
Gratitude to the TinyLlama team, whose foundational work was, as always, essential for developing TinyViking.
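The custom stopping strings above can be reproduced with plain `transformers` generation, assuming a version recent enough (4.39+) to support `stop_strings`; the card does not say how the model was actually served, so this is a sketch, not the author's setup:

```python
# Hedged sketch: applying the card's custom stopping strings via transformers.generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "phanerozoic/Tiny-Viking-1.1b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

stops = ["},", "User:", "You:", "\nUser", "\nUser:", "me:", "user", "\n"]
inputs = tokenizer("Who are you?\n", return_tensors="pt")  # the card's widget prompt
out = model.generate(
    **inputs,
    max_new_tokens=80,
    stop_strings=stops,   # requires transformers >= 4.39 ...
    tokenizer=tokenizer,  # ... and the tokenizer passed alongside it
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```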
{"language": ["en"], "license": "cc-by-nc-4.0", "widget": [{"text": "Who are you?\n", "example_title": "Introduction"}]}
phanerozoic/Tiny-Viking-1.1b-v0.1
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T17:25:39+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #conversational #en #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!URL
# TinyViking-1.1B-v0.1
TinyViking-1.1B-v0.1 is a specialized language model designed for generating Viking-themed content. Developed by phanerozoic, this model is fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0, optimized for environments with limited computing resources.

### Performance
TinyViking is capable of generating engaging Viking narratives, reflecting an understanding of Viking culture. However, it is not designed for general language tasks and may struggle with complex scientific or technical queries.

### Direct Use
Ideal for thematic language generation, particularly in settings like NPCs in games, where fun and thematic engagement are prioritized over detailed factual accuracy.

### Training Data
Trained on "The Saga of Grettir the Strong: Grettir's Saga" to ensure authentic thematic content.

### Custom Stopping Strings
Custom stopping strings are employed to enhance output quality:
- "},"
- "User:"
- "You:"
- "\nUser"
- "\nUser:"
- "me:"
- "user"
- "\n"

### Training Hyperparameters and Fine-Tuning Details
- Learning Rate: 2e-5
- Epochs: 1
- Training Duration: Approximately 5.6 minutes on an RTX 6000 Ada GPU
- LoRA Rank: 2048
- LoRA Alpha: 4096
- LoRA Dropout: 0.05
- Cutoff Length: 256
- Batch Size: 4 (micro batch size)
- Warmup Steps: 8
- Optimizer: adamw_torch
- Gradient Accumulation Steps: 1

### Limitations
Specialized in Viking dialect and narratives, TinyViking is less effective outside its thematic focus.

### Compute Infrastructure
Trained on an RTX 6000 Ada Lovelace GPU

### Results
Successfully generates Viking-themed responses, maintaining thematic consistency while displaying improved coherence and depth over previous models due to advancements in dataset generation and parsing.

### Summary
TinyViking-1.1B-v0.1 shows an improvement in quality compared to earlier thematic models, thanks to a new dataset generation method that helps to conserve the base model's already tenuous ability to hold a conversation. While it excels in Viking-themed interactions, its specialized focus limits broader application.

### Acknowledgments
Gratitude to the TinyLlama team, whose foundational work was, as always, essential for developing TinyViking.
[ "# TinyViking-1.1B-v0.1\n\nTinyViking-1.1B-v0.1 is a specialized language model designed for generating Viking-themed content. Developed by phanerozoic, this model is fine-tuned from TinyLlamaTinyLlama-1.1B-Chat-v1.0, optimized for environments with limited computing resources.", "### Performance\nTinyViking is capable of generating engaging Viking narratives, reflecting an understanding of Viking culture. However, it is not designed for general language tasks and may struggle with complex scientific or technical queries.", "### Direct Use\nIdeal for thematic language generation, particularly in settings like NPCs in games, where fun and thematic engagement are prioritized over detailed factual accuracy.", "### Training Data\nTrained on \"The Saga of Grettir the Strong: Grettir's Saga\" to ensure authentic thematic content.", "### Custom Stopping Strings\nCustom stopping strings are employed to enhance output quality:\n- \"},\"\n- \"User:\"\n- \"You:\"\n- \"\\nUser\"\n- \"\\nUser:\"\n- \"me:\"\n- \"user\"\n- \"\\n\"", "### Training Hyperparameters and Fine-Tuning Details\n- Learning Rate: 2e-5\n- Epochs: 1\n- Training Duration: Approximately 5.6 minutes on an RTX 6000 Ada GPU\n- LoRA Rank: 2048\n- LoRA Alpha: 4096\n- LoRA Dropout: 0.05\n- Cutoff Length: 256\n- Batch Size: 4 (micro batch size)\n- Warmup Steps: 8\n- Optimizer: adamw_torch\n- Gradient Accumulation Steps: 1", "### Limitations\nSpecialized in Viking dialect and narratives, TinyViking is less effective outside its thematic focus.", "### Compute Infrastructure\nTrained on an RTX 6000 Ada Lovelace GPU", "### Results\nSuccessfully generates Viking-themed responses, maintaining thematic consistency while displaying improved coherence and depth over previous models due to advancements in dataset generation and parsing.", "### Summary\nTinyViking-1.1B-v0.1 shows an improvement in quality compared to earlier thematic models, thanks to a new dataset generation method that helps to conserve the base model's already tenuous ability to hold a conversation. While it excels in Viking-themed interactions, its specialized focus limits broader application.", "### Acknowledgments\nGratitude to the TinyLlama team, whose foundational work was, as always, essential for developing TinyViking." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# TinyViking-1.1B-v0.1\n\nTinyViking-1.1B-v0.1 is a specialized language model designed for generating Viking-themed content. Developed by phanerozoic, this model is fine-tuned from TinyLlamaTinyLlama-1.1B-Chat-v1.0, optimized for environments with limited computing resources.", "### Performance\nTinyViking is capable of generating engaging Viking narratives, reflecting an understanding of Viking culture. However, it is not designed for general language tasks and may struggle with complex scientific or technical queries.", "### Direct Use\nIdeal for thematic language generation, particularly in settings like NPCs in games, where fun and thematic engagement are prioritized over detailed factual accuracy.", "### Training Data\nTrained on \"The Saga of Grettir the Strong: Grettir's Saga\" to ensure authentic thematic content.", "### Custom Stopping Strings\nCustom stopping strings are employed to enhance output quality:\n- \"},\"\n- \"User:\"\n- \"You:\"\n- \"\\nUser\"\n- \"\\nUser:\"\n- \"me:\"\n- \"user\"\n- \"\\n\"", "### Training Hyperparameters and Fine-Tuning Details\n- Learning Rate: 2e-5\n- Epochs: 1\n- Training Duration: Approximately 5.6 minutes on an RTX 6000 Ada GPU\n- LoRA Rank: 2048\n- LoRA Alpha: 4096\n- LoRA Dropout: 0.05\n- Cutoff Length: 256\n- Batch Size: 4 (micro batch size)\n- Warmup Steps: 8\n- Optimizer: adamw_torch\n- Gradient Accumulation Steps: 1", "### Limitations\nSpecialized in Viking dialect and narratives, TinyViking is less effective outside its thematic focus.", "### Compute Infrastructure\nTrained on an RTX 6000 Ada Lovelace GPU", "### Results\nSuccessfully generates Viking-themed responses, maintaining thematic consistency while displaying improved coherence and depth over previous models due to advancements in dataset generation and parsing.", "### Summary\nTinyViking-1.1B-v0.1 shows an improvement in quality compared to earlier thematic models, thanks to a new dataset generation method that helps to conserve the base model's already tenuous ability to hold a conversation. While it excels in Viking-themed interactions, its specialized focus limits broader application.", "### Acknowledgments\nGratitude to the TinyLlama team, whose foundational work was, as always, essential for developing TinyViking." ]
text-generation
transformers
# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import pipeline
import torch

model_id = "Rhaps360/gemma-dep-ins-ft"

# Build the text-generation pipeline once; bind it to a different name so the
# imported `pipeline` factory is not shadowed. The pipeline loads its own tokenizer.
pipe = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda" if torch.cuda.is_available() else "cpu",
)

messages = [
    {"role": "user", "content": "### Context: the input message goes here. ### Response: "}
]
# Render the chat with the model's own template before generating.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(
    prompt,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.2,
    top_k=50,
    top_p=0.95,
)
# Strip the prompt so only the newly generated text is printed.
print(outputs[0]["generated_text"][len(prompt):])
```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft", "chatbot", "depression", "therapy"], "widget": [{"messages": [{"role": "user", "content": "### Context: i am depressed."}]}]}
Rhaps360/gemma-dep-ins-ft
null
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "chatbot", "depression", "therapy", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:27:17+00:00
[]
[]
TAGS #transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #chatbot #depression #therapy #conversational #license-other #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #chatbot #depression #therapy #conversational #license-other #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
sentence-similarity
sentence-transformers
Model fine-tuned using MultipleNegativesRankingLoss # guillezala/distiluse_ft4_database1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('guillezala/distiluse_ft4_database1') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=guillezala/distiluse_ft4_database1) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 247 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
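For readers unfamiliar with the loss named above, a minimal fine-tuning sketch in the same sentence-transformers style; the (anchor, positive) pairs here are invented for illustration and are not drawn from the actual `dataset1`:

```python
# Hedged sketch of the MultipleNegativesRankingLoss setup described in this card.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("guillezala/distiluse_ft4_database1")
train_examples = [  # illustrative (anchor, positive) pairs only
    InputExample(texts=["How do I reset my password?", "Password reset instructions"]),
    InputExample(texts=["Opening hours", "We are open 9am-5pm on weekdays"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim by default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10000,  # mirrors the fit() parameters recorded above
)
```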
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "datasets": ["dataset1"], "pipeline_tag": "sentence-similarity"}
guillezala/distiluse_ft4_database1
null
[ "sentence-transformers", "safetensors", "distilbert", "feature-extraction", "sentence-similarity", "dataset:dataset1", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:27:45+00:00
[]
[]
TAGS #sentence-transformers #safetensors #distilbert #feature-extraction #sentence-similarity #dataset-dataset1 #endpoints_compatible #region-us
Model fine-tuned using MultipleNegativesRankingLoss # guillezala/distiluse_ft4_database1 This is a sentence-transformers model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 247 with parameters: Loss: 'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters: Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# guillezala/distiluse_ft4_database1\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 247 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #distilbert #feature-extraction #sentence-similarity #dataset-dataset1 #endpoints_compatible #region-us \n", "# guillezala/distiluse_ft4_database1\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 247 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Salvatale/gemma-2b-chiesa
null
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T17:31:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "ura-hcmut/GemSUra-7B"}
lctzz540/bunbo-reward
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:ura-hcmut/GemSUra-7B", "region:us" ]
null
2024-04-13T17:32:39+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-ura-hcmut/GemSUra-7B #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-ura-hcmut/GemSUra-7B #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
reinforcement-learning
null
# PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'First Experience' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 10000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'trsdimi/LunarLander-v2-UNIT8' 'batch_size': 512 'minibatch_size': 128} ```
{"tags": ["LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-195.08 +/- 113.10", "name": "mean_reward", "verified": false}]}]}]}
trsdimi/LunarLander-v2-UNIT8
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
null
2024-04-13T17:35:00+00:00
[]
[]
TAGS #tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us
# PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
[ "# PPO Agent Playing LunarLander-v2\n\n This is a trained model of a PPO agent playing LunarLander-v2.\n\n # Hyperparameters" ]
[ "TAGS\n#tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us \n", "# PPO Agent Playing LunarLander-v2\n\n This is a trained model of a PPO agent playing LunarLander-v2.\n\n # Hyperparameters" ]
text2text-generation
transformers
# Question Generation without Answers: End-to-End Generation **TrainingArguments** | Parameter | Value | |----------------------------------|------------------------| | `evaluation_strategy` | `epoch` | | `learning_rate` | `2e-5` | | `per_device_train_batch_size` | `8` | | `per_device_eval_batch_size` | `8` | | `num_train_epochs` | `3` | | `weight_decay` | `0.01` | | `save_strategy` | `epoch` | | `disable_tqdm` | `False` | | `gradient_accumulation_steps` | `2` | Note: The batch size and accumulation steps were decreased during training due to memory constraints. ### Note: This model correctly predicts when called in the application, but it is currently not giving correct questions in this Inference API.
{}
mahima18/qat5
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T17:50:29+00:00
[]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Question Generation without Answers: End-to-End Generation =========================================================== TrainingArguments Note: The batch size and accumulation steps were decreased during training due to memory constraints. ### Note: This model correctly predicts when called in the application, but it is currently not giving correct questions in this Inference API.
[ "### Note : This model correcly predicts when called in the application but it is currently not giving correct questions in this inference api." ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Note : This model correcly predicts when called in the application but it is currently not giving correct questions in this inference api." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Markaroll/gemma-Role-Mining-JSON
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T17:50:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jmaciejowski/pegasus_multi_news_ep1
null
[ "transformers", "safetensors", "pegasus", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-13T17:53:16+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #pegasus #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #pegasus #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
A LoRA for generating images in the Depth style. SD version: SD 1.5 You can use ControlNet models such as Canny, Lineart, or SoftEdge to create depth maps from line art alone. Add 'depth, 3d' to the prompt. The number after the LoRA filename refers to how many LoRAs were merged into it. Each file gives different results, so use the one that works reasonably well. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2b20aeb9e8a5f05cf9a9d/gWrgDslc3qfDoDanHobis.png) <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/62f2b20aeb9e8a5f05cf9a9d/MYj2uysFCG469XZD5DLJn.mp4"></video>
{}
toyxyz/Line2Depth_sd1.5
null
[ "region:us" ]
null
2024-04-13T17:54:55+00:00
[]
[]
TAGS #region-us
A LoRA for generating images in the Depth style. SD version: SD 1.5 You can use ControlNet models such as Canny, Lineart, or SoftEdge to create depth maps from line art alone. Add 'depth, 3d' to the prompt. The number after the LoRA filename refers to how many LoRAs were merged into it. Each file gives different results, so use the one that works reasonably well. !image/png <video controls autoplay src="URL"></video>
[]
[ "TAGS\n#region-us \n" ]
sentence-similarity
sentence-transformers
# seregadgl101/baii_new_v2_10ep This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('seregadgl101/baii_new_v2_10ep') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=seregadgl101/baii_new_v2_10ep) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
seregadgl101/baii_new_v2_10ep
null
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-13T17:58:24+00:00
[]
[]
TAGS #sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# seregadgl101/baii_new_v2_10ep This is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Full Model Architecture ## Citing & Authors
[ "# seregadgl101/baii_new_v2_10ep\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# seregadgl101/baii_new_v2_10ep\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Full Model Architecture", "## Citing & Authors" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # sangam0406/my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.7830 - Validation Loss: 2.0538 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4805 | 2.2952 | 0 | | 2.0177 | 2.0538 | 1 | | 1.7872 | 2.0538 | 2 | | 1.7675 | 2.0538 | 3 | | 1.7830 | 2.0538 | 4 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "sangam0406/my_awesome_model", "results": []}]}
sangam0406/my_awesome_model
null
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:01:50+00:00
[]
[]
TAGS #transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
sangam0406/my\_awesome\_model ============================= This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 1.7830 * Validation Loss: 2.0538 * Epoch: 4 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 500, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.38.2 * TensorFlow 2.15.0 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/LlamaAdapter-llama2-emo-200
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:02:54+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Uploaded model - **Developed by:** ramixpe - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-v0.2-bnb-4bit"}
ramixpe/sp_model_4090
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:04:17+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: ramixpe - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: ramixpe\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: ramixpe\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Eliorkalfon/deep-wizard-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q6_K.gguf) | Q6_K | 5.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/deep-wizard-7B-slerp-GGUF/resolve/main/deep-wizard-7B-slerp.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "deepseek-ai/deepseek-math-7b-rl", "deepseek-ai/deepseek-math-7b-instruct"], "base_model": "Eliorkalfon/deep-wizard-7B-slerp", "quantized_by": "mradermacher"}
mradermacher/deep-wizard-7B-slerp-GGUF
null
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "deepseek-ai/deepseek-math-7b-rl", "deepseek-ai/deepseek-math-7b-instruct", "en", "base_model:Eliorkalfon/deep-wizard-7B-slerp", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:04:25+00:00
[]
[ "en" ]
TAGS #transformers #gguf #merge #mergekit #lazymergekit #deepseek-ai/deepseek-math-7b-rl #deepseek-ai/deepseek-math-7b-instruct #en #base_model-Eliorkalfon/deep-wizard-7B-slerp #license-other #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #deepseek-ai/deepseek-math-7b-rl #deepseek-ai/deepseek-math-7b-instruct #en #base_model-Eliorkalfon/deep-wizard-7B-slerp #license-other #endpoints_compatible #region-us \n" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.5739 | | 2.8841 | 2.0 | 500 | 1.8642 | | 2.8841 | 3.0 | 750 | 1.7366 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_awesome_qa_model", "results": []}]}
Zzzalo/my_awesome_qa_model
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:05:13+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
my\_awesome\_qa\_model ====================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.7366 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
adapter-transformers
# Adapter `BigTMiami/AA_seq_bn_P_micro` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_25M_10_000_condensed](https://huggingface.co/datasets/BigTMiami/amazon_25M_10_000_condensed/) dataset and includes a prediction head for masked lm. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/AA_seq_bn_P_micro", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_25M_10_000_condensed"]}
BigTMiami/AA_seq_bn_P_micro
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_25M_10_000_condensed", "region:us" ]
null
2024-04-13T18:05:32+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_25M_10_000_condensed #region-us
# Adapter 'BigTMiami/AA_seq_bn_P_micro' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_25M_10_000_condensed dataset and includes a prediction head for masked lm. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/AA_seq_bn_P_micro' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_25M_10_000_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_25M_10_000_condensed #region-us \n", "# Adapter 'BigTMiami/AA_seq_bn_P_micro' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_25M_10_000_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
null
adapter-transformers
# Adapter `BigTMiami/AA_seq_bn_C_micro` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/AA_seq_bn_C_micro", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_helpfulness"]}
BigTMiami/AA_seq_bn_C_micro
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_helpfulness", "region:us" ]
null
2024-04-13T18:07:34+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
# Adapter 'BigTMiami/AA_seq_bn_C_micro' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/AA_seq_bn_C_micro' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n", "# Adapter 'BigTMiami/AA_seq_bn_C_micro' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_domar_finetune This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5874 | 0.79 | 200 | 0.5963 | | 0.5122 | 1.59 | 400 | 0.5917 | | 0.5774 | 2.38 | 600 | 0.5889 | | 0.5579 | 3.18 | 800 | 0.5872 | | 0.5214 | 3.97 | 1000 | 0.5873 | | 0.4606 | 4.77 | 1200 | 0.5872 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.1 - Pytorch 2.2.0+cu118 - Datasets 2.17.1 - Tokenizers 0.15.2
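As an illustrative sketch (not from the original card), the adapter can be loaded on top of its base model with PEFT; this assumes you have access to the gated meta-llama/Llama-2-7b-hf weights:

```python
# Load the LoRA adapter onto the Llama-2 base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "thorirhrafn/llama_domar_finetune")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```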
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama_domar_finetune", "results": []}]}
thorirhrafn/llama_domar_finetune
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-04-13T18:08:23+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #region-us
llama\_domar\_finetune ====================== This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.5872 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * PEFT 0.8.2 * Transformers 4.38.1 * Pytorch 2.2.0+cu118 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* PEFT 0.8.2\n* Transformers 4.38.1\n* Pytorch 2.2.0+cu118\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* PEFT 0.8.2\n* Transformers 4.38.1\n* Pytorch 2.2.0+cu118\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
adapter-transformers
# Adapter `BigTMiami/AA_seq_bn_P_micro_seq_bn_C_micro` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/AA_seq_bn_P_micro_seq_bn_C_micro", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_helpfulness"]}
BigTMiami/AA_seq_bn_P_micro_seq_bn_C_micro
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_helpfulness", "region:us" ]
null
2024-04-13T18:09:26+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
# Adapter 'BigTMiami/AA_seq_bn_P_micro_seq_bn_C_micro' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/AA_seq_bn_P_micro_seq_bn_C_micro' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n", "# Adapter 'BigTMiami/AA_seq_bn_P_micro_seq_bn_C_micro' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
text-generation
transformers
## Announcement Due to a combination of factors, we have lost confidence in training open-source LLMs and sharing our results with the community. These factors include, but are not limited to: - **Unreasonable community feedback:** We have encountered negativity and unrealistic expectations from certain segments of the community, which has been discouraging. - **Persistent technical gap with OpenAI:** While we have achieved some progress in specific narrow domains, the overall technical gap with OpenAI's top models remains significant and may even be widening. We believe the community's perception of our progress is overly optimistic. - **Challenges in GPU resource integration:** We have encountered issues with resource allocation and scheduling conflicts with other projects, which has led to extended development cycles. Despite our cautious and somewhat pessimistic outlook on the field, we are still willing to release and share some of our model weights with the community and showcase our progress in specific narrow domains. **Important Disclaimer:** The scores presented here are for reference only. Any victories achieved in narrow domains are conditional and limited by factors such as scaling laws and data availability. We cannot yet compete with OpenAI's top models in terms of overall performance. Therefore, these scores do not represent the comprehensive capabilities of our models and should only be considered as analytical indicators. Please avoid over-interpreting benchmark results commonly used in the community. Our experiments have revealed their vulnerability and limited ability to comprehensively assess model capabilities. This further highlights the significant gap between our models and OpenAI's top models. **Additional Information:** - Our models were not trained on any test sets. - The training data includes some web-crawled data, rewritten and synthesized using GPT-4-32K and GPT-3.5-16K. - We observed that common web-crawled datasets do not actively avoid or filter out questions from popular benchmarks. We have only avoided verbatim repetition of these questions. We currently lack the capability for further filtering based on semantics. - Contamination detection results indicate that our models are safe. # CausalLM-34b-β2 This model is not based on CausalLM-34b-β. It was trained on a different dataset composition. Therefore, both versions are considered equal and were candidates for final release. We encourage experimentation with various model merging approaches. ### Training Date: March 8, 2024 ## Internal Codename: M-16 (the 16th finetune of yi-34b) Theoretical Context Length: 200K (Please modify config.json, 8K by default to prevent OOM) ### Chat Template: **chatml** (Note: techniques similar to OpenChat's C-RLFT were used, and training was not specifically targeted towards general task-oriented systems. Outputs may not be optimal without the "You are a helpful assistant." prompt or a blank system prompt.) ### Special Note: This model is sensitive to precision. Quantization may cause significant performance degradation. Avoid using wiki-text if using calibrated quantization. 
### Lm-evaluation-harness Reference (using open_llm_leaderboard parameters, no chat template): | Task | Score | | ---------- | ----- | | ARC | 68.3 | | HellaSwag | 83.6 | | MMLU | 84 | | TruthfulQA | 54 | | Winogrande | 80.4 | | GSM8K | 60.4 | ## MT-Bench Reference: First Turn | Model | Turn | Score | | ------------------ | ---- | ------- | | gpt-4-0125-preview | 1 | 9.16250 | | gpt-4 | 1 | 8.95625 | | M-16 | 1 | 8.82500 | Second Turn | Model | Turn | Score | | ------------------ | ---- | -------- | | gpt-4 | 2 | 9.02500 | | gpt-4-0125-preview | 2 | 8.93750 | | M-16 | 2 | 8.556962 | Average Score | Model | Score | | ------------------ | -------- | | gpt-4-0125-preview | 9.05000 | | gpt-4 | 8.990625 | | M-16 | 8.691824 | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/lK4Z8SELSgGoMvUcvCxpS.png) --- ## 公告 由于多种因素的影响,我们对训练开源大型语言模型并与社区分享成果的信心有所下降。这些因素包括但不限于: - **不合理的社区反馈:**我们遇到了一些社区成员的负面情绪和不切实际的期望,这令人沮丧。 - **与 OpenAI 的持续技术差距:**尽管我们在某些特定领域取得了一些进展,但与 OpenAI 顶级模型的整体技术差距仍然很大,甚至可能还在扩大。我们认为社区对我们进展的看法过于乐观。 - **GPU 资源整合的挑战:**我们在资源分配和与其他项目的调度冲突方面遇到了问题,导致开发周期延长。 尽管我们对该领域持谨慎和略微悲观的态度,但我们仍然愿意发布和分享我们的一些模型权重,并展示我们在特定领域取得的进展。 **重要免责声明:** 此处提供的分数仅供参考。在特定领域取得的任何胜利都是有条件的,并受到诸如规模法则和数据可用性等因素的限制。就整体性能而言,我们还无法与 OpenAI 的顶级模型竞争。因此,这些分数并不代表我们模型的综合能力,而应仅被视为分析指标。请避免过度解读社区中常用的基准测试结果。我们的实验揭示了它们的脆弱性和全面评估模型能力的有限能力。这进一步突出了我们的模型与 OpenAI 顶级模型之间的显着差距。 **补充信息:** - 我们的模型没有在任何测试集上进行训练。 - 训练数据包括一些网络爬取数据,使用 GPT-4-32K 和 GPT-3.5-16K 重写和合成。 - 我们观察到常见的网络爬取数据集并不会主动避免或过滤掉来自流行基准测试的问题。我们只避免了逐字重复这些问题。我们目前缺乏基于语义进行进一步过滤的能力。 - 污染检测结果表明我们的模型是安全的。 # CausalLM-34b-β2 此模型并非基于 CausalLM-34b-β。它是在不同的数据集组合上训练的。因此,这两个版本被认为是相等的,并且都是最终发布的候选者。我们鼓励尝试各种模型合并方法。 ### 训练日期: 2024 年 3 月 8 日 ## 内部代号: M-16(yi-34b 的第 16 次微调) 理论上下文长度:200K(请修改 config.json,默认为 8K 以防止 OOM) ### 聊天模板: **chatml**(注意:使用了类似于 OpenChat 的 C-RLFT 的技术,并且训练没有专门针对通用的面向任务的系统。如果没有“你是一个有用的助手。”提示或空白系统提示,输出可能不是最佳的。) ### 特别注意: 该模型对精度敏感。量化可能会导致性能显着下降。如果使用校准量化,请避免使用 wiki-text。 ### Lm-evaluation-harness 参考(使用 open_llm_leaderboard 参数,没有聊天模板): | 任务 | 分数 | | ---------- | ---- | | ARC | 68.3 | | HellaSwag | 83.6 | | MMLU | 84 | | TruthfulQA | 54 | | Winogrande | 80.4 | | GSM8K | 60.4 | ## MT-Bench 参考: 第一轮 | 模型 | 轮次 | 分数 | | ------------------ | ---- | ------- | | gpt-4-0125-preview | 1 | 9.16250 | | gpt-4 | 1 | 8.95625 | | M-16 | 1 | 8.82500 | 第二轮 | 模型 | 轮次 | 分数 | | ------------------ | ---- | -------- | | gpt-4 | 2 | 9.02500 | | gpt-4-0125-preview | 2 | 8.93750 | | M-16 | 2 | 8.556962 | 平均分数 | 模型 | 分数 | | ------------------ | -------- | | gpt-4-0125-preview | 9.05000 | | gpt-4 | 8.990625 | | M-16 | 8.691824 | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/lK4Z8SELSgGoMvUcvCxpS.png)
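As an illustrative sketch (not part of the original card), the chatml format can be produced with transformers' apply_chat_template, assuming the repository's tokenizer ships a chat template:

```python
# Build a chatml prompt with the recommended system message.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CausalLM/34b-beta2")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Produces <|im_start|>...<|im_end|> formatting if the chat template is defined.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```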
{"language": ["en", "zh"], "license": "gpl-3.0"}
CausalLM/34b-beta2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T18:10:26+00:00
[]
[ "en", "zh" ]
TAGS #transformers #safetensors #llama #text-generation #conversational #en #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Announcement ------------ Due to a combination of factors, we have lost confidence in training open-source LLMs and sharing our results with the community. These factors include, but are not limited to: * Unreasonable community feedback: We have encountered negativity and unrealistic expectations from certain segments of the community, which has been discouraging. * Persistent technical gap with OpenAI: While we have achieved some progress in specific narrow domains, the overall technical gap with OpenAI's top models remains significant and may even be widening. We believe the community's perception of our progress is overly optimistic. * Challenges in GPU resource integration: We have encountered issues with resource allocation and scheduling conflicts with other projects, which has led to extended development cycles. Despite our cautious and somewhat pessimistic outlook on the field, we are still willing to release and share some of our model weights with the community and showcase our progress in specific narrow domains. Important Disclaimer: The scores presented here are for reference only. Any victories achieved in narrow domains are conditional and limited by factors such as scaling laws and data availability. We cannot yet compete with OpenAI's top models in terms of overall performance. Therefore, these scores do not represent the comprehensive capabilities of our models and should only be considered as analytical indicators. Please avoid over-interpreting benchmark results commonly used in the community. Our experiments have revealed their vulnerability and limited ability to comprehensively assess model capabilities. This further highlights the significant gap between our models and OpenAI's top models. Additional Information: * Our models were not trained on any test sets. * The training data includes some web-crawled data, rewritten and synthesized using GPT-4-32K and GPT-3.5-16K. * We observed that common web-crawled datasets do not actively avoid or filter out questions from popular benchmarks. We have only avoided verbatim repetition of these questions. We currently lack the capability for further filtering based on semantics. * Contamination detection results indicate that our models are safe. CausalLM-34b-β2 =============== This model is not based on CausalLM-34b-β. It was trained on a different dataset composition. Therefore, both versions are considered equal and were candidates for final release. We encourage experimentation with various model merging approaches. ### Training Date: March 8, 2024 Internal Codename: ------------------ M-16 (the 16th finetune of yi-34b) Theoretical Context Length: 200K (Please modify URL, 8K by default to prevent OOM) ### Chat Template: chatml (Note: techniques similar to OpenChat's C-RLFT were used, and training was not specifically targeted towards general task-oriented systems. Outputs may not be optimal without the "You are a helpful assistant." prompt or a blank system prompt.) ### Special Note: This model is sensitive to precision. Quantization may cause significant performance degradation. Avoid using wiki-text if using calibrated quantization. 
### Lm-evaluation-harness Reference (using open\_llm\_leaderboard parameters, no chat template): MT-Bench Reference: ------------------- First Turn Model: gpt-4-0125-preview, Turn: 1, Score: 9.16250 Model: gpt-4, Turn: 1, Score: 8.95625 Model: M-16, Turn: 1, Score: 8.82500 Second Turn Model: gpt-4, Turn: 2, Score: 9.02500 Model: gpt-4-0125-preview, Turn: 2, Score: 8.93750 Model: M-16, Turn: 2, Score: 8.556962 Average Score !image/png --- 公告 -- 由于多种因素的影响,我们对训练开源大型语言模型并与社区分享成果的信心有所下降。这些因素包括但不限于: * 不合理的社区反馈:我们遇到了一些社区成员的负面情绪和不切实际的期望,这令人沮丧。 * 与 OpenAI 的持续技术差距:尽管我们在某些特定领域取得了一些进展,但与 OpenAI 顶级模型的整体技术差距仍然很大,甚至可能还在扩大。我们认为社区对我们进展的看法过于乐观。 * GPU 资源整合的挑战:我们在资源分配和与其他项目的调度冲突方面遇到了问题,导致开发周期延长。 尽管我们对该领域持谨慎和略微悲观的态度,但我们仍然愿意发布和分享我们的一些模型权重,并展示我们在特定领域取得的进展。 重要免责声明: 此处提供的分数仅供参考。在特定领域取得的任何胜利都是有条件的,并受到诸如规模法则和数据可用性等因素的限制。就整体性能而言,我们还无法与 OpenAI 的顶级模型竞争。因此,这些分数并不代表我们模型的综合能力,而应仅被视为分析指标。请避免过度解读社区中常用的基准测试结果。我们的实验揭示了它们的脆弱性和全面评估模型能力的有限能力。这进一步突出了我们的模型与 OpenAI 顶级模型之间的显着差距。 补充信息: * 我们的模型没有在任何测试集上进行训练。 * 训练数据包括一些网络爬取数据,使用 GPT-4-32K 和 GPT-3.5-16K 重写和合成。 * 我们观察到常见的网络爬取数据集并不会主动避免或过滤掉来自流行基准测试的问题。我们只避免了逐字重复这些问题。我们目前缺乏基于语义进行进一步过滤的能力。 * 污染检测结果表明我们的模型是安全的。 CausalLM-34b-β2 =============== 此模型并非基于 CausalLM-34b-β。它是在不同的数据集组合上训练的。因此,这两个版本被认为是相等的,并且都是最终发布的候选者。我们鼓励尝试各种模型合并方法。 ### 训练日期: 2024 年 3 月 8 日 内部代号: ----- M-16(yi-34b 的第 16 次微调) 理论上下文长度:200K(请修改 URL,默认为 8K 以防止 OOM) ### 聊天模板: chatml(注意:使用了类似于 OpenChat 的 C-RLFT 的技术,并且训练没有专门针对通用的面向任务的系统。如果没有“你是一个有用的助手。”提示或空白系统提示,输出可能不是最佳的。) ### 特别注意: 该模型对精度敏感。量化可能会导致性能显着下降。如果使用校准量化,请避免使用 wiki-text。 ### Lm-evaluation-harness 参考(使用 open\_llm\_leaderboard 参数,没有聊天模板): MT-Bench 参考: ------------ 第一轮 模型: gpt-4-0125-preview, 轮次: 1, 分数: 9.16250 模型: gpt-4, 轮次: 1, 分数: 8.95625 模型: M-16, 轮次: 1, 分数: 8.82500 第二轮 模型: gpt-4, 轮次: 2, 分数: 9.02500 模型: gpt-4-0125-preview, 轮次: 2, 分数: 8.93750 模型: M-16, 轮次: 2, 分数: 8.556962 平均分数 !image/png
[ "### Training Date:\n\n\nMarch 8, 2024\n\n\nInternal Codename:\n------------------\n\n\nM-16 (the 16th finetune of yi-34b)\n\n\nTheoretical Context Length: 200K (Please modify URL, 8K by default to prevent OOM)", "### Chat Template:\n\n\nchatml (Note: techniques similar to OpenChat's C-RLFT were used, and training was not specifically targeted towards general task-oriented systems. Outputs may not be optimal without the \"You are a helpful assistant.\" prompt or a blank system prompt.)", "### Special Note:\n\n\nThis model is sensitive to precision. Quantization may cause significant performance degradation. Avoid using wiki-text if using calibrated quantization.", "### Lm-evaluation-harness Reference (using open\\_llm\\_leaderboard parameters, no chat template):\n\n\n\nMT-Bench Reference:\n-------------------\n\n\nFirst Turn\n\n\nModel: gpt-4-0125-preview, Turn: 1, Score: 9.16250\nModel: gpt-4, Turn: 1, Score: 8.95625\nModel: M-16, Turn: 1, Score: 8.82500\n\n\nSecond Turn\n\n\nModel: gpt-4, Turn: 2, Score: 9.02500\nModel: gpt-4-0125-preview, Turn: 2, Score: 8.93750\nModel: M-16, Turn: 2, Score: 8.556962\n\n\nAverage Score\n\n\n\n!image/png\n\n\n\n\n---\n\n\n公告\n--\n\n\n由于多种因素的影响,我们对训练开源大型语言模型并与社区分享成果的信心有所下降。这些因素包括但不限于:\n\n\n* 不合理的社区反馈:我们遇到了一些社区成员的负面情绪和不切实际的期望,这令人沮丧。\n* 与 OpenAI 的持续技术差距:尽管我们在某些特定领域取得了一些进展,但与 OpenAI 顶级模型的整体技术差距仍然很大,甚至可能还在扩大。我们认为社区对我们进展的看法过于乐观。\n* GPU 资源整合的挑战:我们在资源分配和与其他项目的调度冲突方面遇到了问题,导致开发周期延长。\n\n\n尽管我们对该领域持谨慎和略微悲观的态度,但我们仍然愿意发布和分享我们的一些模型权重,并展示我们在特定领域取得的进展。\n\n\n重要免责声明:\n\n\n此处提供的分数仅供参考。在特定领域取得的任何胜利都是有条件的,并受到诸如规模法则和数据可用性等因素的限制。就整体性能而言,我们还无法与 OpenAI 的顶级模型竞争。因此,这些分数并不代表我们模型的综合能力,而应仅被视为分析指标。请避免过度解读社区中常用的基准测试结果。我们的实验揭示了它们的脆弱性和全面评估模型能力的有限能力。这进一步突出了我们的模型与 OpenAI 顶级模型之间的显着差距。\n\n\n补充信息:\n\n\n* 我们的模型没有在任何测试集上进行训练。\n* 训练数据包括一些网络爬取数据,使用 GPT-4-32K 和 GPT-3.5-16K 重写和合成。\n* 我们观察到常见的网络爬取数据集并不会主动避免或过滤掉来自流行基准测试的问题。我们只避免了逐字重复这些问题。我们目前缺乏基于语义进行进一步过滤的能力。\n* 污染检测结果表明我们的模型是安全的。\n\n\nCausalLM-34b-β2\n===============\n\n\n此模型并非基于 CausalLM-34b-β。它是在不同的数据集组合上训练的。因此,这两个版本被认为是相等的,并且都是最终发布的候选者。我们鼓励尝试各种模型合并方法。", "### 训练日期:\n\n\n2024 年 3 月 8 日\n\n\n内部代号:\n-----\n\n\nM-16(yi-34b 的第 16 次微调)\n\n\n理论上下文长度:200K(请修改 URL,默认为 8K 以防止 OOM)", "### 聊天模板:\n\n\nchatml(注意:使用了类似于 OpenChat 的 C-RLFT 的技术,并且训练没有专门针对通用的面向任务的系统。如果没有“你是一个有用的助手。”提示或空白系统提示,输出可能不是最佳的。)", "### 特别注意:\n\n\n该模型对精度敏感。量化可能会导致性能显着下降。如果使用校准量化,请避免使用 wiki-text。", "### Lm-evaluation-harness 参考(使用 open\\_llm\\_leaderboard 参数,没有聊天模板):\n\n\n\nMT-Bench 参考:\n------------\n\n\n第一轮\n\n\n模型: gpt-4-0125-preview, 轮次: 1, 分数: 9.16250\n模型: gpt-4, 轮次: 1, 分数: 8.95625\n模型: M-16, 轮次: 1, 分数: 8.82500\n\n\n第二轮\n\n\n模型: gpt-4, 轮次: 2, 分数: 9.02500\n模型: gpt-4-0125-preview, 轮次: 2, 分数: 8.93750\n模型: M-16, 轮次: 2, 分数: 8.556962\n\n\n平均分数\n\n\n\n!image/png" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training Date:\n\n\nMarch 8, 2024\n\n\nInternal Codename:\n------------------\n\n\nM-16 (the 16th finetune of yi-34b)\n\n\nTheoretical Context Length: 200K (Please modify URL, 8K by default to prevent OOM)", "### Chat Template:\n\n\nchatml (Note: techniques similar to OpenChat's C-RLFT were used, and training was not specifically targeted towards general task-oriented systems. Outputs may not be optimal without the \"You are a helpful assistant.\" prompt or a blank system prompt.)", "### Special Note:\n\n\nThis model is sensitive to precision. Quantization may cause significant performance degradation. Avoid using wiki-text if using calibrated quantization.", "### Lm-evaluation-harness Reference (using open\\_llm\\_leaderboard parameters, no chat template):\n\n\n\nMT-Bench Reference:\n-------------------\n\n\nFirst Turn\n\n\nModel: gpt-4-0125-preview, Turn: 1, Score: 9.16250\nModel: gpt-4, Turn: 1, Score: 8.95625\nModel: M-16, Turn: 1, Score: 8.82500\n\n\nSecond Turn\n\n\nModel: gpt-4, Turn: 2, Score: 9.02500\nModel: gpt-4-0125-preview, Turn: 2, Score: 8.93750\nModel: M-16, Turn: 2, Score: 8.556962\n\n\nAverage Score\n\n\n\n!image/png\n\n\n\n\n---\n\n\n公告\n--\n\n\n由于多种因素的影响,我们对训练开源大型语言模型并与社区分享成果的信心有所下降。这些因素包括但不限于:\n\n\n* 不合理的社区反馈:我们遇到了一些社区成员的负面情绪和不切实际的期望,这令人沮丧。\n* 与 OpenAI 的持续技术差距:尽管我们在某些特定领域取得了一些进展,但与 OpenAI 顶级模型的整体技术差距仍然很大,甚至可能还在扩大。我们认为社区对我们进展的看法过于乐观。\n* GPU 资源整合的挑战:我们在资源分配和与其他项目的调度冲突方面遇到了问题,导致开发周期延长。\n\n\n尽管我们对该领域持谨慎和略微悲观的态度,但我们仍然愿意发布和分享我们的一些模型权重,并展示我们在特定领域取得的进展。\n\n\n重要免责声明:\n\n\n此处提供的分数仅供参考。在特定领域取得的任何胜利都是有条件的,并受到诸如规模法则和数据可用性等因素的限制。就整体性能而言,我们还无法与 OpenAI 的顶级模型竞争。因此,这些分数并不代表我们模型的综合能力,而应仅被视为分析指标。请避免过度解读社区中常用的基准测试结果。我们的实验揭示了它们的脆弱性和全面评估模型能力的有限能力。这进一步突出了我们的模型与 OpenAI 顶级模型之间的显着差距。\n\n\n补充信息:\n\n\n* 我们的模型没有在任何测试集上进行训练。\n* 训练数据包括一些网络爬取数据,使用 GPT-4-32K 和 GPT-3.5-16K 重写和合成。\n* 我们观察到常见的网络爬取数据集并不会主动避免或过滤掉来自流行基准测试的问题。我们只避免了逐字重复这些问题。我们目前缺乏基于语义进行进一步过滤的能力。\n* 污染检测结果表明我们的模型是安全的。\n\n\nCausalLM-34b-β2\n===============\n\n\n此模型并非基于 CausalLM-34b-β。它是在不同的数据集组合上训练的。因此,这两个版本被认为是相等的,并且都是最终发布的候选者。我们鼓励尝试各种模型合并方法。", "### 训练日期:\n\n\n2024 年 3 月 8 日\n\n\n内部代号:\n-----\n\n\nM-16(yi-34b 的第 16 次微调)\n\n\n理论上下文长度:200K(请修改 URL,默认为 8K 以防止 OOM)", "### 聊天模板:\n\n\nchatml(注意:使用了类似于 OpenChat 的 C-RLFT 的技术,并且训练没有专门针对通用的面向任务的系统。如果没有“你是一个有用的助手。”提示或空白系统提示,输出可能不是最佳的。)", "### 特别注意:\n\n\n该模型对精度敏感。量化可能会导致性能显着下降。如果使用校准量化,请避免使用 wiki-text。", "### Lm-evaluation-harness 参考(使用 open\\_llm\\_leaderboard 参数,没有聊天模板):\n\n\n\nMT-Bench 参考:\n------------\n\n\n第一轮\n\n\n模型: gpt-4-0125-preview, 轮次: 1, 分数: 9.16250\n模型: gpt-4, 轮次: 1, 分数: 8.95625\n模型: M-16, 轮次: 1, 分数: 8.82500\n\n\n第二轮\n\n\n模型: gpt-4, 轮次: 2, 分数: 9.02500\n模型: gpt-4-0125-preview, 轮次: 2, 分数: 8.93750\n模型: M-16, 轮次: 2, 分数: 8.556962\n\n\n平均分数\n\n\n\n!image/png" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OCI-DS-6.7B-schema_1 This model is a fine-tuned version of [m-a-p/OpenCodeInterpreter-DS-6.7B](https://huggingface.co/m-a-p/OpenCodeInterpreter-DS-6.7B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3291 | 0.19 | 50 | 0.0000 | | 0.8713 | 0.38 | 100 | 0.0000 | | 0.1541 | 0.57 | 150 | 0.0000 | | 0.0 | 0.76 | 200 | 0.0000 | | 0.232 | 0.95 | 250 | 0.0000 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
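A hypothetical loading sketch for this adapter (not part of the original card); whether to merge the LoRA weights into the base model depends on your deployment:

```python
# Attach the LoRA adapter to its base model; optionally bake it in with merge_and_unload.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("m-a-p/OpenCodeInterpreter-DS-6.7B")
model = PeftModel.from_pretrained(base, "jdeklerk10/OCI-DS-6.7B-schema_1")
merged = model.merge_and_unload()  # optional: returns a plain transformers model
```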
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "m-a-p/OpenCodeInterpreter-DS-6.7B", "model-index": [{"name": "OCI-DS-6.7B-schema_1", "results": []}]}
jdeklerk10/OCI-DS-6.7B-schema_1
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:m-a-p/OpenCodeInterpreter-DS-6.7B", "license:apache-2.0", "region:us" ]
null
2024-04-13T18:11:31+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-m-a-p/OpenCodeInterpreter-DS-6.7B #license-apache-2.0 #region-us
OCI-DS-6.7B-schema\_1 ===================== This model is a fine-tuned version of m-a-p/OpenCodeInterpreter-DS-6.7B on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0000 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.01 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-m-a-p/OpenCodeInterpreter-DS-6.7B #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
BOCHENG/tweet
null
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:11:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
mlx
# mlx-community/mistral-22B-v0.2-8bit-mlx This model was converted to MLX format from [`Vezora/Mistral-22B-v0.2`]() using mlx-lm version **0.9.0**. Refer to the [original model card](https://huggingface.co/Vezora/Mistral-22B-v0.2) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/mistral-22B-v0.2-8bit-mlx") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"license": "apache-2.0", "tags": ["mlx"]}
mlx-community/mistral-22B-v0.2-8bit-mlx
null
[ "mlx", "safetensors", "mistral", "license:apache-2.0", "region:us" ]
null
2024-04-13T18:11:57+00:00
[]
[]
TAGS #mlx #safetensors #mistral #license-apache-2.0 #region-us
# mlx-community/mistral-22B-v0.2-8bit-mlx This model was converted to MLX format from ['Vezora/Mistral-22B-v0.2']() using mlx-lm version 0.9.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/mistral-22B-v0.2-8bit-mlx\nThis model was converted to MLX format from ['Vezora/Mistral-22B-v0.2']() using mlx-lm version 0.9.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #mistral #license-apache-2.0 #region-us \n", "# mlx-community/mistral-22B-v0.2-8bit-mlx\nThis model was converted to MLX format from ['Vezora/Mistral-22B-v0.2']() using mlx-lm version 0.9.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mit-b0-finetuned-human-parsing-dataset This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1612 - Mean Iou: 0.5450 - Mean Accuracy: 0.6607 - Overall Accuracy: 0.8160 - Accuracy Background: nan - Accuracy Hat: 0.5935 - Accuracy Hair: 0.8675 - Accuracy Sunglasses: 0.1278 - Accuracy Upper-clothes: 0.8806 - Accuracy Skirt: 0.7150 - Accuracy Pants: 0.8529 - Accuracy Dress: 0.8186 - Accuracy Belt: 0.0817 - Accuracy Left-shoe: 0.6562 - Accuracy Right-shoe: 0.6193 - Accuracy Face: 0.8987 - Accuracy Left-leg: 0.8838 - Accuracy Right-leg: 0.8541 - Accuracy Left-arm: 0.8193 - Accuracy Right-arm: 0.8202 - Accuracy Bag: 0.7409 - Accuracy Scarf: 0.0012 - Iou Background: 0.0 - Iou Hat: 0.5417 - Iou Hair: 0.7745 - Iou Sunglasses: 0.1273 - Iou Upper-clothes: 0.7733 - Iou Skirt: 0.6469 - Iou Pants: 0.7596 - Iou Dress: 0.6192 - Iou Belt: 0.0773 - Iou Left-shoe: 0.5307 - Iou Right-shoe: 0.5156 - Iou Face: 0.8002 - Iou Left-leg: 0.7577 - Iou Right-leg: 0.7632 - Iou Left-arm: 0.7325 - Iou Right-arm: 0.7315 - Iou Bag: 0.6578 - Iou Scarf: 0.0012 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Hat | Accuracy Hair | Accuracy Sunglasses | Accuracy Upper-clothes | Accuracy Skirt | Accuracy Pants | Accuracy Dress | Accuracy Belt | Accuracy Left-shoe | Accuracy Right-shoe | Accuracy Face | Accuracy Left-leg | Accuracy Right-leg | Accuracy Left-arm | Accuracy Right-arm | Accuracy Bag | Accuracy Scarf | Iou Background | Iou Hat | Iou Hair | Iou Sunglasses | Iou Upper-clothes | Iou Skirt | Iou Pants | Iou Dress | Iou Belt | Iou Left-shoe | Iou Right-shoe | Iou Face | Iou Left-leg | Iou Right-leg | Iou Left-arm | Iou Right-arm | Iou Bag | Iou Scarf | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:------------:|:-------------:|:-------------------:|:----------------------:|:--------------:|:--------------:|:--------------:|:-------------:|:------------------:|:-------------------:|:-------------:|:-----------------:|:------------------:|:-----------------:|:------------------:|:------------:|:--------------:|:--------------:|:-------:|:--------:|:--------------:|:-----------------:|:---------:|:---------:|:---------:|:--------:|:-------------:|:--------------:|:--------:|:------------:|:-------------:|:------------:|:-------------:|:-------:|:---------:| | 0.1738 | 1.0 | 200 | 0.2036 | 0.4669 | 0.5844 | 0.7641 | nan | 0.2353 | 0.8366 | 0.0 | 0.8116 | 0.6698 | 0.7972 | 0.8264 | 0.0 | 0.5317 | 0.4734 | 0.8657 | 0.8419 | 0.7916 | 0.7738 | 0.7692 | 0.7112 | 0.0 | 0.0 | 0.2283 | 0.7279 | 0.0 | 0.7127 | 0.5858 | 0.7043 | 0.5625 | 0.0 | 0.4160 | 0.3786 
| 0.7652 | 0.6935 | 0.6938 | 0.6787 | 0.6674 | 0.5891 | 0.0 | | 0.184 | 2.0 | 400 | 0.1841 | 0.4970 | 0.6199 | 0.7940 | nan | 0.4453 | 0.8607 | 0.0 | 0.8745 | 0.7569 | 0.8160 | 0.7201 | 0.0 | 0.5756 | 0.5385 | 0.9054 | 0.8440 | 0.8553 | 0.8249 | 0.8242 | 0.6971 | 0.0 | 0.0 | 0.4153 | 0.7599 | 0.0 | 0.7467 | 0.6348 | 0.7162 | 0.5777 | 0.0 | 0.4517 | 0.4229 | 0.7710 | 0.7025 | 0.7143 | 0.7096 | 0.7060 | 0.6183 | 0.0 | | 0.1793 | 3.0 | 600 | 0.1717 | 0.5121 | 0.6276 | 0.8018 | nan | 0.5648 | 0.8591 | 0.0 | 0.8920 | 0.7414 | 0.8757 | 0.7207 | 0.0 | 0.6178 | 0.5797 | 0.8696 | 0.8117 | 0.8442 | 0.7867 | 0.7884 | 0.7170 | 0.0 | 0.0 | 0.4805 | 0.7622 | 0.0 | 0.7497 | 0.6576 | 0.7450 | 0.5970 | 0.0 | 0.4793 | 0.4538 | 0.7821 | 0.7290 | 0.7402 | 0.7075 | 0.7070 | 0.6269 | 0.0 | | 0.3023 | 4.0 | 800 | 0.1753 | 0.5129 | 0.6313 | 0.7953 | nan | 0.5461 | 0.8778 | 0.0 | 0.8113 | 0.7911 | 0.8080 | 0.8468 | 0.0 | 0.6061 | 0.5468 | 0.8959 | 0.8538 | 0.8359 | 0.8053 | 0.8009 | 0.7055 | 0.0 | 0.0 | 0.4921 | 0.7589 | 0.0 | 0.7408 | 0.6533 | 0.7325 | 0.5989 | 0.0 | 0.4843 | 0.4550 | 0.7872 | 0.7399 | 0.7417 | 0.7114 | 0.7089 | 0.6265 | 0.0 | | 0.1041 | 5.0 | 1000 | 0.1655 | 0.5235 | 0.6388 | 0.8078 | nan | 0.6147 | 0.8667 | 0.0025 | 0.8768 | 0.7477 | 0.8536 | 0.7777 | 0.0022 | 0.5801 | 0.5814 | 0.8896 | 0.8580 | 0.8658 | 0.8238 | 0.8236 | 0.6964 | 0.0 | 0.0 | 0.5389 | 0.7662 | 0.0025 | 0.7582 | 0.6485 | 0.7581 | 0.6070 | 0.0022 | 0.4840 | 0.4767 | 0.7900 | 0.7534 | 0.7572 | 0.7267 | 0.7204 | 0.6336 | 0.0 | | 0.1179 | 6.0 | 1200 | 0.1628 | 0.5312 | 0.6475 | 0.8111 | nan | 0.5886 | 0.8725 | 0.0326 | 0.8560 | 0.7353 | 0.8538 | 0.8384 | 0.0221 | 0.6322 | 0.5871 | 0.9038 | 0.8580 | 0.8579 | 0.8263 | 0.8279 | 0.7142 | 0.0 | 0.0 | 0.5293 | 0.7663 | 0.0326 | 0.7629 | 0.6531 | 0.7624 | 0.6189 | 0.0217 | 0.5135 | 0.4931 | 0.7930 | 0.7599 | 0.7641 | 0.7293 | 0.7224 | 0.6386 | 0.0 | | 0.1323 | 7.0 | 1400 | 0.1619 | 0.5390 | 0.6531 | 0.8129 | nan | 0.6147 | 0.8846 | 0.0754 | 0.8677 | 0.7143 | 0.8672 | 0.8331 | 0.0484 | 0.6528 | 0.6319 | 0.8896 | 0.8392 | 0.8467 | 0.8096 | 0.8072 | 0.7201 | 0.0 | 0.0 | 0.5489 | 0.7726 | 0.0753 | 0.7693 | 0.6392 | 0.7660 | 0.6163 | 0.0468 | 0.5264 | 0.5172 | 0.7976 | 0.7612 | 0.7666 | 0.7314 | 0.7245 | 0.6434 | 0.0 | | 0.1235 | 8.0 | 1600 | 0.1612 | 0.5450 | 0.6607 | 0.8160 | nan | 0.5935 | 0.8675 | 0.1278 | 0.8806 | 0.7150 | 0.8529 | 0.8186 | 0.0817 | 0.6562 | 0.6193 | 0.8987 | 0.8838 | 0.8541 | 0.8193 | 0.8202 | 0.7409 | 0.0012 | 0.0 | 0.5417 | 0.7745 | 0.1273 | 0.7733 | 0.6469 | 0.7596 | 0.6192 | 0.0773 | 0.5307 | 0.5156 | 0.8002 | 0.7577 | 0.7632 | 0.7325 | 0.7315 | 0.6578 | 0.0012 | | 0.093 | 9.0 | 1800 | 0.1621 | 0.5489 | 0.6621 | 0.8165 | nan | 0.6190 | 0.8752 | 0.1554 | 0.8799 | 0.7153 | 0.8561 | 0.8333 | 0.0834 | 0.6491 | 0.6138 | 0.8950 | 0.8622 | 0.8528 | 0.8119 | 0.8053 | 0.7411 | 0.0068 | 0.0 | 0.5590 | 0.7745 | 0.1544 | 0.7730 | 0.6432 | 0.7665 | 0.6144 | 0.0788 | 0.5308 | 0.5146 | 0.8012 | 0.7651 | 0.7687 | 0.7353 | 0.7305 | 0.6629 | 0.0068 | | 0.1171 | 10.0 | 2000 | 0.1631 | 0.5504 | 0.6659 | 0.8147 | nan | 0.6300 | 0.8701 | 0.1681 | 0.8613 | 0.7188 | 0.8489 | 0.8576 | 0.0908 | 0.6585 | 0.6139 | 0.8979 | 0.8608 | 0.8580 | 0.8264 | 0.8141 | 0.7360 | 0.0096 | 0.0 | 0.5640 | 0.7730 | 0.1669 | 0.7699 | 0.6460 | 0.7657 | 0.6125 | 0.0854 | 0.5349 | 0.5162 | 0.8019 | 0.7652 | 0.7697 | 0.7366 | 0.7317 | 0.6573 | 0.0096 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
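As an inference sketch (not part of the original card) — the image path is a placeholder, and the class ids follow the human-parsing labels listed in the metrics above:

```python
# Run semantic segmentation with the fine-tuned SegFormer checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "raks87/mit-b0-finetuned-human-parsing-dataset"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("person.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]  # per-pixel class ids at reduced resolution
```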
{"license": "other", "tags": ["generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "mit-b0-finetuned-human-parsing-dataset", "results": []}]}
raks87/mit-b0-finetuned-human-parsing-dataset
null
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "base_model:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:14:23+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us
mit-b0-finetuned-human-parsing-dataset ====================================== This model is a fine-tuned version of nvidia/mit-b0 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1612 * Mean Iou: 0.5450 * Mean Accuracy: 0.6607 * Overall Accuracy: 0.8160 * Accuracy Background: nan * Accuracy Hat: 0.5935 * Accuracy Hair: 0.8675 * Accuracy Sunglasses: 0.1278 * Accuracy Upper-clothes: 0.8806 * Accuracy Skirt: 0.7150 * Accuracy Pants: 0.8529 * Accuracy Dress: 0.8186 * Accuracy Belt: 0.0817 * Accuracy Left-shoe: 0.6562 * Accuracy Right-shoe: 0.6193 * Accuracy Face: 0.8987 * Accuracy Left-leg: 0.8838 * Accuracy Right-leg: 0.8541 * Accuracy Left-arm: 0.8193 * Accuracy Right-arm: 0.8202 * Accuracy Bag: 0.7409 * Accuracy Scarf: 0.0012 * Iou Background: 0.0 * Iou Hat: 0.5417 * Iou Hair: 0.7745 * Iou Sunglasses: 0.1273 * Iou Upper-clothes: 0.7733 * Iou Skirt: 0.6469 * Iou Pants: 0.7596 * Iou Dress: 0.6192 * Iou Belt: 0.0773 * Iou Left-shoe: 0.5307 * Iou Right-shoe: 0.5156 * Iou Face: 0.8002 * Iou Left-leg: 0.7577 * Iou Right-leg: 0.7632 * Iou Left-arm: 0.7325 * Iou Right-arm: 0.7315 * Iou Bag: 0.6578 * Iou Scarf: 0.0012 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
GGUF [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized version of: - Original model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) - Model creator: [Mistral AI](https://huggingface.co/mistralai) - [License](https://mistral.ai/news/announcing-mistral-7b/) ## Recommended Prompt Format (Mistral) ``` text = "<s>[INST] What is your favourite condiment? [/INST]" "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> " "[INST] Do you have mayonnaise recipes? [/INST]" ```
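For a quick local test, a minimal llama-cpp-python sketch that applies the format above (the filename is illustrative — substitute whichever quant you downloaded from this repo):

```python
# Sketch: chat locally with one of the GGUF files via llama-cpp-python,
# using the [INST] prompt format shown above.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf", n_ctx=4096)

prompt = "<s>[INST] What is your favourite condiment? [/INST]"
out = llm(prompt, max_tokens=128, stop=["</s>"])
print(out["choices"][0]["text"].strip())
```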
{"license": "apache-2.0"}
AI-Engine/Mistral-7B-Instruct-v0.2-GGUF
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-04-13T18:16:38+00:00
[]
[]
TAGS #gguf #license-apache-2.0 #region-us
GGUF URL quantized version of: - Original model: Mistral-7B-Instruct-v0.2 - Model creator: Mistral AI - License ## Recommended Prompt Format (Mistral)
[ "## Recommended Prompt Format (Mistral)" ]
[ "TAGS\n#gguf #license-apache-2.0 #region-us \n", "## Recommended Prompt Format (Mistral)" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # haider1101/my_NLP_qa_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.8142 - Validation Loss: 2.1014 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.6388 | 2.6450 | 0 | | 2.1108 | 2.1014 | 1 | | 1.8142 | 2.1014 | 2 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
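A minimal usage sketch (not generated by the trainer): it assumes the repo ships the tokenizer alongside the TF weights, and the question/context pair below is made up for illustration.

```python
# Sketch: query the fine-tuned checkpoint through the QA pipeline.
# TF classes are used because the card reports Keras training.
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

repo = "haider1101/my_NLP_qa_model"  # id from this card
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForQuestionAnswering.from_pretrained(repo)
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], round(result["score"], 3))
```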
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "haider1101/my_NLP_qa_model", "results": []}]}
haider1101/my_NLP_qa_model
null
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:18:57+00:00
[]
[]
TAGS #transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
haider1101/my\_NLP\_qa\_model ============================= This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 1.8142 * Validation Loss: 2.1014 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 500, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.38.2 * TensorFlow 2.15.0 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
      <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).

# Downloading and running the models

You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):

| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |

## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Mistral-22B-v0.2-GGUF-smashed-smashed and below it, a specific filename to download, such as: Mistral-22B-v0.2.IQ3_M.gguf.
- **Step 2**: Then click Download.

- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Mistral-22B-v0.2-GGUF-smashed-smashed Mistral-22B-v0.2.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

Alternatively, you can also download multiple files at once with a pattern:

```shell
huggingface-cli download PrunaAI/Mistral-22B-v0.2-GGUF-smashed-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Mistral-22B-v0.2-GGUF-smashed-smashed Mistral-22B-v0.2.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m Mistral-22B-v0.2.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

- **Option B** - Running in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

- **Option C** - Running from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, set the CMAKE_ARGS variable in PowerShell with this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./Mistral-22B-v0.2.IQ3_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "<s>[INST] {prompt} [/INST]", # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./Mistral-22B-v0.2.IQ3_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

- **Option D** - Running with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/Mistral-22B-v0.2-GGUF-smashed
null
[ "gguf", "pruna-ai", "region:us" ]
null
2024-04-13T18:19:02+00:00
[]
[]
TAGS #gguf #pruna-ai #region-us
!Twitter !GitHub !LinkedIn !Discord

Simply make AI models cheaper, smaller, faster, and greener!
============================================================

* Give a thumbs up if you like this model!
* Contact us and tell us which model to compress next here.
* Request access to easily compress your *own* AI models here.
* Read the documentation to learn more here
* Join Pruna AI community on Discord here to share feedback/suggestions or get help.

Frequently Asked Questions

* *How does the compression work?* The model is compressed with GGUF.
* *How does the model quality change?* The quality of the model output might vary compared to the base model.
* *What is the model format?* We use GGUF format.
* *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
* *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.

Downloading and running the models
==================================

You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out this chart and this guide:

How to download GGUF files?
---------------------------

Note for manual downloaders:
You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* URL
* Option A - Downloading in 'text-generation-webui':
	+ Step 1: Under Download Model, you can enter the model repo: PrunaAI/Mistral-22B-v0.2-GGUF-smashed-smashed and below it, a specific filename to download, such as: Mistral-22B-v0.2.IQ3\_M.gguf.
	+ Step 2: Then click Download.
* Option B - Downloading on the command line (including multiple files at once):
	+ Step 1: We recommend using the 'huggingface-hub' Python library:
	+ Step 2: Then you can download any individual model file to the current directory, at high speed, with a command like this:

More advanced huggingface-cli download usage (click to read)

Alternatively, you can also download multiple files at once with a pattern:

For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.

To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\_transfer':

And set environment variable 'HF\_HUB\_ENABLE\_HF\_TRANSFER' to '1':

Windows Command Line users: You can set the environment variable by running 'set HF\_HUB\_ENABLE\_HF\_TRANSFER=1' before the download command.

How to run model in GGUF format?
--------------------------------

* Option A - Introductory example with 'URL' command

Make sure you are using 'URL' from commit d0cee0d or later.

Change '-ngl 35' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins'

For other parameters and how to use them, please refer to the URL documentation

* Option B - Running in 'text-generation-webui'

Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.

* Option C - Running from Python code

You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: llama-cpp-python docs.

#### First install the package

Run one of the following commands, according to your system:

#### Simple llama-cpp-python example code

* Option D - Running with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

	+ LangChain + llama-cpp-python
	+ LangChain + ctransformers

Configurations
--------------

The configuration info is in 'smash\_config.json'.

Credits & License
-----------------

The license of the smashed model follows the license of the original model. Please check the license of the original model, which provided the base model, before using this one. The license of the 'pruna-engine' is here on PyPI.

Want to compress other models?
------------------------------

* Contact us and tell us which model to compress next here.
* Request access to easily compress your own AI models here.
[ "### How to load this model in Python code, using llama-cpp-python\n\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n\t+ LangChain + llama-cpp-python\n\t+ LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here." ]
[ "TAGS\n#gguf #pruna-ai #region-us \n", "### How to load this model in Python code, using llama-cpp-python\n\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n\t+ LangChain + llama-cpp-python\n\t+ LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mistral-7b-text-to-sql

- This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the b-mc2/sql-create-context dataset.
- These are the adapter weights, and the code to use these for generation is given below.
- A full model will be uploaded at a later date.
- Primary reference: https://www.philschmid.de/fine-tune-llms-in-2024-with-trl

## Model description

- Model type: Language model
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: Mistral-7B-v0.1

## How to get started with the model

```python
import torch
from transformers import AutoTokenizer, pipeline
from datasets import load_dataset
from peft import AutoPeftModelForCausalLM
from random import randint

peft_model_id = "delayedkarma/mistral-7b-text-to-sql"

# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
  peft_model_id,
  device_map="auto",
  torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
# load into pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Load dataset and Convert dataset to OAI messages
system_message = """You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.
SCHEMA:
{schema}"""

def create_conversation(sample):
  return {
    "messages": [
      {"role": "system", "content": system_message.format(schema=sample["context"])},
      {"role": "user", "content": sample["question"]},
      {"role": "assistant", "content": sample["answer"]}
    ]
  }

# Load dataset from the hub
dataset = load_dataset("b-mc2/sql-create-context", split="train")
dataset = dataset.shuffle().select(range(100))

# Convert dataset to OAI messages
dataset = dataset.map(create_conversation, remove_columns=dataset.features, batched=False)
dataset = dataset.train_test_split(test_size=20/100)

# Evaluate
eval_dataset = dataset['test']
rand_idx = randint(0, len(eval_dataset) - 1)  # -1: randint is inclusive on both ends

# Test on sample
prompt = pipe.tokenizer.apply_chat_template(eval_dataset[rand_idx]["messages"][:2], tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)

print(f"Query:\n{eval_dataset[rand_idx]['messages'][1]['content']}")
print(f"Original Answer:\n{eval_dataset[rand_idx]['messages'][2]['content']}")
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.2.2
- Datasets 2.16.1
- Tokenizers 0.15.2
{"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer", "peft"], "datasets": ["b-mc2/sql-create-context"], "base_model": "mistralai/Mistral-7B-v0.1", "reference": ["https://www.philschmid.de/fine-tune-llms-in-2024-with-trl"], "model-index": [{"name": "mistral-7b-text-to-sql", "results": []}]}
delayedkarma/mistral-7b-text-to-sql
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "en", "dataset:b-mc2/sql-create-context", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-13T18:19:12+00:00
[]
[ "en" ]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #en #dataset-b-mc2/sql-create-context #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# mistral-7b-text-to-sql - This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the b-mc2/sql-create-context dataset. - These are the adapter weights, and the code to use these for generation is given below. - A full model will be uploaded at a later date. - Primary reference: URL ## Model description - Model type: Language model - Language(s) (NLP): English - License: Apache 2.0 - Finetuned from model : Mistral-7B-v0.1 ## How to get started with the model ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.2.2 - Datasets 2.16.1 - Tokenizers 0.15.2
[ "# mistral-7b-text-to-sql\n\n- This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the b-mc2/sql-create-context dataset.\n- These are the adapter weights, and the code to use these for generation is given below.\n- A full model will be uploaded at a later date.\n- Primary reference: URL", "## Model description\n\n- Model type: Language model\n- Language(s) (NLP): English\n- License: Apache 2.0\n- Finetuned from model : Mistral-7B-v0.1", "## How to get started with the model", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.2.2\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #en #dataset-b-mc2/sql-create-context #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# mistral-7b-text-to-sql\n\n- This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the b-mc2/sql-create-context dataset.\n- These are the adapter weights, and the code to use these for generation is given below.\n- A full model will be uploaded at a later date.\n- Primary reference: URL", "## Model description\n\n- Model type: Language model\n- Language(s) (NLP): English\n- License: Apache 2.0\n- Finetuned from model : Mistral-7B-v0.1", "## How to get started with the model", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.2.2\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiq This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.5477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.2342 | 0.04 | 100 | 6.1857 | | 5.7599 | 0.07 | 200 | 5.7751 | | 5.7433 | 0.11 | 300 | 5.7142 | | 5.6021 | 0.15 | 400 | 5.6776 | | 5.5084 | 0.18 | 500 | 5.6349 | | 5.3825 | 0.22 | 600 | 5.6201 | | 5.6698 | 0.26 | 700 | 5.5831 | | 5.4089 | 0.29 | 800 | 5.5687 | | 5.601 | 0.33 | 900 | 5.5574 | | 5.4708 | 0.37 | 1000 | 5.5555 | | 5.5956 | 0.4 | 1100 | 5.5520 | | 5.4704 | 0.44 | 1200 | 5.5494 | | 5.4824 | 0.47 | 1300 | 5.5502 | | 5.589 | 0.51 | 1400 | 5.5478 | | 5.5612 | 0.55 | 1500 | 5.5456 | | 5.4741 | 0.58 | 1600 | 5.5430 | | 5.463 | 0.62 | 1700 | 5.5426 | | 5.5071 | 0.66 | 1800 | 5.5424 | | 5.5469 | 0.69 | 1900 | 5.5419 | | 5.4266 | 0.73 | 2000 | 5.5428 | | 5.4848 | 0.77 | 2100 | 5.5438 | | 5.5069 | 0.8 | 2200 | 5.5446 | | 5.5885 | 0.84 | 2300 | 5.5469 | | 5.4484 | 0.88 | 2400 | 5.5462 | | 5.3859 | 0.91 | 2500 | 5.5475 | | 5.465 | 0.95 | 2600 | 5.5476 | | 5.4355 | 0.99 | 2700 | 5.5477 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
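A minimal generation sketch (not part of the auto-generated card): the prompt below is arbitrary, since the card does not document the fine-tuning data or an expected input format.

```python
# Sketch: sample from the fine-tuned gpt2-large checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="smiled0g/tiq")  # id from this card
out = generator("Once upon a time", max_new_tokens=40, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```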
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2-large", "model-index": [{"name": "tiq", "results": []}]}
smiled0g/tiq
null
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T18:19:14+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
tiq === This model is a fine-tuned version of gpt2-large on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 5.5477 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 200 * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.0+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-image
null
# LoRA model of Historia Menou/メノウ・ヒストリア (Maou Gakuin no Futekigousha)

## What Is This?

This is the LoRA model of waifu Historia Menou/メノウ・ヒストリア (Maou Gakuin no Futekigousha).

## How Is It Trained?

* This model is trained with [kohya-ss/sd-scripts](https://github.com/kohya-ss/sd-scripts), and the test images are generated with [a1111's webui](AUTOMATIC1111/stable-diffusion-webui) and [API sdk](https://github.com/mix1009/sdwebuiapi).
* The [auto-training framework](https://github.com/deepghs/cyberharem) is maintained by [DeepGHS Team](https://huggingface.co/deepghs). The architecture of the base model is `SD1.5`.
* The dataset used for training is `stage3-p480-1200` in [CyberHarem/historia_menou_maougakuinnofutekigousha](https://huggingface.co/datasets/CyberHarem/historia_menou_maougakuinnofutekigousha), which contains 170 images.
* The images in the dataset are auto-cropped from anime videos; more images of other waifus from the same anime can be found in [BangumiBase/maougakuinnofutekigousha](https://huggingface.co/datasets/BangumiBase/maougakuinnofutekigousha)
* **Trigger word is `historia_menou_maougakuinnofutekigousha`.**
* **Trigger word of anime style is `anime_style`.**
* Pruned core tags for this waifu are `short hair, pointy ears, earrings, blue eyes, hair between eyes, grey hair, black hair, breasts`. You can add them to the prompt when some features of the waifu (e.g. hair color) are not stable.
* For more details on training, you can take a look at the [training configuration file](https://huggingface.co/CyberHarem/historia_menou_maougakuinnofutekigousha/resolve/main/train.toml).
* For more details on the LoRA, you can download it and read the metadata with a1111's webui.

## How to Use It?

After downloading the safetensors file for the specified step, you need to use it like a common LoRA (a diffusers sketch is included at the end of this card).

* Recommended LoRA weight is 0.5-0.85.
* Recommended trigger word weight is 0.7-1.1.

For example, if you want to use the model from step 1674, you need to download [`1674/historia_menou_maougakuinnofutekigousha.safetensors`](https://huggingface.co/CyberHarem/historia_menou_maougakuinnofutekigousha/resolve/main/1674/historia_menou_maougakuinnofutekigousha.safetensors) as LoRA. By using this model, you can generate images of the desired character.

## Which Step Should I Use?

We selected 5 good steps for you to choose from. The best one is step 1674.

780 images (722.01 MiB) were generated for auto-testing.
![Metrics Plot](metrics_plot.png) Here are the preview of the recommended steps: | Step | Epoch | CCIP | AI Corrupt | Bikini Plus | Score | Download | pattern_0 | pattern_1_0 | pattern_1_1 | pattern_2 | portrait_0 | portrait_1 | portrait_2 | full_body_0 | full_body_1 | profile_0 | profile_1 | free_0 | free_1 | shorts | maid_0 | maid_1 | miko | yukata | suit | china | bikini_0 | bikini_1 | bikini_2 | sit | squat | kneel | jump | crossed_arms | angry | smile | cry | grin | n_lie_0 | n_lie_1 | n_stand_0 | n_stand_1 | n_stand_2 | n_sex_0 | n_sex_1 | |-------:|--------:|:----------|:-------------|:--------------|:----------|:----------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------|:----------------------------------------------|:----------------------------------------------|:------------------------------------------|:--------------------------------------------|:--------------------------------------------|:--------------------------------------------|:----------------------------------------------|:----------------------------------------------|:------------------------------------------|:------------------------------------------|:------------------------------------|:------------------------------------|:------------------------------------|:------------------------------------|:------------------------------------|:--------------------------------|:------------------------------------|:--------------------------------|:----------------------------------|:----------------------------------------|:----------------------------------------|:----------------------------------------|:------------------------------|:----------------------------------|:----------------------------------|:--------------------------------|:------------------------------------------------|:----------------------------------|:----------------------------------|:------------------------------|:--------------------------------|:--------------------------------------|:--------------------------------------|:------------------------------------------|:------------------------------------------|:------------------------------------------|:--------------------------------------|:--------------------------------------| | 1674 | 54 | **0.954** | 0.988 | 0.818 | **0.689** | [Download](https://huggingface.co/CyberHarem/historia_menou_maougakuinnofutekigousha/resolve/main/1674/historia_menou_maougakuinnofutekigousha.zip) | ![pattern_0](1674/previews/pattern_0.png) | ![pattern_1_0](1674/previews/pattern_1_0.png) | ![pattern_1_1](1674/previews/pattern_1_1.png) | ![pattern_2](1674/previews/pattern_2.png) | ![portrait_0](1674/previews/portrait_0.png) | ![portrait_1](1674/previews/portrait_1.png) | ![portrait_2](1674/previews/portrait_2.png) | ![full_body_0](1674/previews/full_body_0.png) | ![full_body_1](1674/previews/full_body_1.png) | ![profile_0](1674/previews/profile_0.png) | ![profile_1](1674/previews/profile_1.png) | ![free_0](1674/previews/free_0.png) | ![free_1](1674/previews/free_1.png) | ![shorts](1674/previews/shorts.png) | ![maid_0](1674/previews/maid_0.png) | ![maid_1](1674/previews/maid_1.png) | ![miko](1674/previews/miko.png) | ![yukata](1674/previews/yukata.png) | ![suit](1674/previews/suit.png) | ![china](1674/previews/china.png) | ![bikini_0](1674/previews/bikini_0.png) | ![bikini_1](1674/previews/bikini_1.png) | ![bikini_2](1674/previews/bikini_2.png) | 
![sit](1674/previews/sit.png) | ![squat](1674/previews/squat.png) | ![kneel](1674/previews/kneel.png) | ![jump](1674/previews/jump.png) | ![crossed_arms](1674/previews/crossed_arms.png) | ![angry](1674/previews/angry.png) | ![smile](1674/previews/smile.png) | ![cry](1674/previews/cry.png) | ![grin](1674/previews/grin.png) | ![n_lie_0](1674/previews/n_lie_0.png) | ![n_lie_1](1674/previews/n_lie_1.png) | ![n_stand_0](1674/previews/n_stand_0.png) | ![n_stand_1](1674/previews/n_stand_1.png) | ![n_stand_2](1674/previews/n_stand_2.png) | ![n_sex_0](1674/previews/n_sex_0.png) | ![n_sex_1](1674/previews/n_sex_1.png) | | 930 | 30 | 0.940 | 0.984 | 0.824 | 0.687 | [Download](https://huggingface.co/CyberHarem/historia_menou_maougakuinnofutekigousha/resolve/main/930/historia_menou_maougakuinnofutekigousha.zip) | ![pattern_0](930/previews/pattern_0.png) | ![pattern_1_0](930/previews/pattern_1_0.png) | ![pattern_1_1](930/previews/pattern_1_1.png) | ![pattern_2](930/previews/pattern_2.png) | ![portrait_0](930/previews/portrait_0.png) | ![portrait_1](930/previews/portrait_1.png) | ![portrait_2](930/previews/portrait_2.png) | ![full_body_0](930/previews/full_body_0.png) | ![full_body_1](930/previews/full_body_1.png) | ![profile_0](930/previews/profile_0.png) | ![profile_1](930/previews/profile_1.png) | ![free_0](930/previews/free_0.png) | ![free_1](930/previews/free_1.png) | ![shorts](930/previews/shorts.png) | ![maid_0](930/previews/maid_0.png) | ![maid_1](930/previews/maid_1.png) | ![miko](930/previews/miko.png) | ![yukata](930/previews/yukata.png) | ![suit](930/previews/suit.png) | ![china](930/previews/china.png) | ![bikini_0](930/previews/bikini_0.png) | ![bikini_1](930/previews/bikini_1.png) | ![bikini_2](930/previews/bikini_2.png) | ![sit](930/previews/sit.png) | ![squat](930/previews/squat.png) | ![kneel](930/previews/kneel.png) | ![jump](930/previews/jump.png) | ![crossed_arms](930/previews/crossed_arms.png) | ![angry](930/previews/angry.png) | ![smile](930/previews/smile.png) | ![cry](930/previews/cry.png) | ![grin](930/previews/grin.png) | ![n_lie_0](930/previews/n_lie_0.png) | ![n_lie_1](930/previews/n_lie_1.png) | ![n_stand_0](930/previews/n_stand_0.png) | ![n_stand_1](930/previews/n_stand_1.png) | ![n_stand_2](930/previews/n_stand_2.png) | ![n_sex_0](930/previews/n_sex_0.png) | ![n_sex_1](930/previews/n_sex_1.png) | | 1581 | 51 | 0.943 | 0.989 | 0.822 | 0.685 | [Download](https://huggingface.co/CyberHarem/historia_menou_maougakuinnofutekigousha/resolve/main/1581/historia_menou_maougakuinnofutekigousha.zip) | ![pattern_0](1581/previews/pattern_0.png) | ![pattern_1_0](1581/previews/pattern_1_0.png) | ![pattern_1_1](1581/previews/pattern_1_1.png) | ![pattern_2](1581/previews/pattern_2.png) | ![portrait_0](1581/previews/portrait_0.png) | ![portrait_1](1581/previews/portrait_1.png) | ![portrait_2](1581/previews/portrait_2.png) | ![full_body_0](1581/previews/full_body_0.png) | ![full_body_1](1581/previews/full_body_1.png) | ![profile_0](1581/previews/profile_0.png) | ![profile_1](1581/previews/profile_1.png) | ![free_0](1581/previews/free_0.png) | ![free_1](1581/previews/free_1.png) | ![shorts](1581/previews/shorts.png) | ![maid_0](1581/previews/maid_0.png) | ![maid_1](1581/previews/maid_1.png) | ![miko](1581/previews/miko.png) | ![yukata](1581/previews/yukata.png) | ![suit](1581/previews/suit.png) | ![china](1581/previews/china.png) | ![bikini_0](1581/previews/bikini_0.png) | ![bikini_1](1581/previews/bikini_1.png) | ![bikini_2](1581/previews/bikini_2.png) | ![sit](1581/previews/sit.png) | 
![squat](1581/previews/squat.png) | ![kneel](1581/previews/kneel.png) | ![jump](1581/previews/jump.png) | ![crossed_arms](1581/previews/crossed_arms.png) | ![angry](1581/previews/angry.png) | ![smile](1581/previews/smile.png) | ![cry](1581/previews/cry.png) | ![grin](1581/previews/grin.png) | ![n_lie_0](1581/previews/n_lie_0.png) | ![n_lie_1](1581/previews/n_lie_1.png) | ![n_stand_0](1581/previews/n_stand_0.png) | ![n_stand_1](1581/previews/n_stand_1.png) | ![n_stand_2](1581/previews/n_stand_2.png) | ![n_sex_0](1581/previews/n_sex_0.png) | ![n_sex_1](1581/previews/n_sex_1.png) | | 1023 | 33 | 0.932 | 0.992 | **0.830** | 0.684 | [Download](https://huggingface.co/CyberHarem/historia_menou_maougakuinnofutekigousha/resolve/main/1023/historia_menou_maougakuinnofutekigousha.zip) | ![pattern_0](1023/previews/pattern_0.png) | ![pattern_1_0](1023/previews/pattern_1_0.png) | ![pattern_1_1](1023/previews/pattern_1_1.png) | ![pattern_2](1023/previews/pattern_2.png) | ![portrait_0](1023/previews/portrait_0.png) | ![portrait_1](1023/previews/portrait_1.png) | ![portrait_2](1023/previews/portrait_2.png) | ![full_body_0](1023/previews/full_body_0.png) | ![full_body_1](1023/previews/full_body_1.png) | ![profile_0](1023/previews/profile_0.png) | ![profile_1](1023/previews/profile_1.png) | ![free_0](1023/previews/free_0.png) | ![free_1](1023/previews/free_1.png) | ![shorts](1023/previews/shorts.png) | ![maid_0](1023/previews/maid_0.png) | ![maid_1](1023/previews/maid_1.png) | ![miko](1023/previews/miko.png) | ![yukata](1023/previews/yukata.png) | ![suit](1023/previews/suit.png) | ![china](1023/previews/china.png) | ![bikini_0](1023/previews/bikini_0.png) | ![bikini_1](1023/previews/bikini_1.png) | ![bikini_2](1023/previews/bikini_2.png) | ![sit](1023/previews/sit.png) | ![squat](1023/previews/squat.png) | ![kneel](1023/previews/kneel.png) | ![jump](1023/previews/jump.png) | ![crossed_arms](1023/previews/crossed_arms.png) | ![angry](1023/previews/angry.png) | ![smile](1023/previews/smile.png) | ![cry](1023/previews/cry.png) | ![grin](1023/previews/grin.png) | ![n_lie_0](1023/previews/n_lie_0.png) | ![n_lie_1](1023/previews/n_lie_1.png) | ![n_stand_0](1023/previews/n_stand_0.png) | ![n_stand_1](1023/previews/n_stand_1.png) | ![n_stand_2](1023/previews/n_stand_2.png) | ![n_sex_0](1023/previews/n_sex_0.png) | ![n_sex_1](1023/previews/n_sex_1.png) | | 1116 | 36 | 0.943 | **0.994** | 0.821 | 0.683 | [Download](https://huggingface.co/CyberHarem/historia_menou_maougakuinnofutekigousha/resolve/main/1116/historia_menou_maougakuinnofutekigousha.zip) | ![pattern_0](1116/previews/pattern_0.png) | ![pattern_1_0](1116/previews/pattern_1_0.png) | ![pattern_1_1](1116/previews/pattern_1_1.png) | ![pattern_2](1116/previews/pattern_2.png) | ![portrait_0](1116/previews/portrait_0.png) | ![portrait_1](1116/previews/portrait_1.png) | ![portrait_2](1116/previews/portrait_2.png) | ![full_body_0](1116/previews/full_body_0.png) | ![full_body_1](1116/previews/full_body_1.png) | ![profile_0](1116/previews/profile_0.png) | ![profile_1](1116/previews/profile_1.png) | ![free_0](1116/previews/free_0.png) | ![free_1](1116/previews/free_1.png) | ![shorts](1116/previews/shorts.png) | ![maid_0](1116/previews/maid_0.png) | ![maid_1](1116/previews/maid_1.png) | ![miko](1116/previews/miko.png) | ![yukata](1116/previews/yukata.png) | ![suit](1116/previews/suit.png) | ![china](1116/previews/china.png) | ![bikini_0](1116/previews/bikini_0.png) | ![bikini_1](1116/previews/bikini_1.png) | ![bikini_2](1116/previews/bikini_2.png) | 
![sit](1116/previews/sit.png) | ![squat](1116/previews/squat.png) | ![kneel](1116/previews/kneel.png) | ![jump](1116/previews/jump.png) | ![crossed_arms](1116/previews/crossed_arms.png) | ![angry](1116/previews/angry.png) | ![smile](1116/previews/smile.png) | ![cry](1116/previews/cry.png) | ![grin](1116/previews/grin.png) | ![n_lie_0](1116/previews/n_lie_0.png) | ![n_lie_1](1116/previews/n_lie_1.png) | ![n_stand_0](1116/previews/n_stand_0.png) | ![n_stand_1](1116/previews/n_stand_1.png) | ![n_stand_2](1116/previews/n_stand_2.png) | ![n_sex_0](1116/previews/n_sex_0.png) | ![n_sex_1](1116/previews/n_sex_1.png) |

## Anything Else?

Because the automation of LoRA training always annoys some people, it is not recommended for the following groups to use this model, and we express regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.

## All Steps

We uploaded the files for all steps. You can check the images and metrics, and download them, via the following links:
* [Steps From 1023 to 1860](all/0.md)
* [Steps From 93 to 930](all/1.md)
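As referenced in "How to Use It?", here is a minimal diffusers sketch. Assumptions not stated in the card: `runwayml/stable-diffusion-v1-5` stands in for "any SD1.5 checkpoint", and diffusers' kohya-LoRA conversion handles this file; prompt-attention weights like `(word:1.1)` need an a1111-style UI or compel, so the trigger words are used unweighted here.

```python
# Sketch: apply the step-1674 LoRA on an SD1.5 base with diffusers.
import os
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

lora_path = hf_hub_download(
    "CyberHarem/historia_menou_maougakuinnofutekigousha",
    "1674/historia_menou_maougakuinnofutekigousha.safetensors",
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumption: any SD1.5 checkpoint should work
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights(os.path.dirname(lora_path), weight_name=os.path.basename(lora_path))

image = pipe(
    "historia_menou_maougakuinnofutekigousha, anime_style, portrait, smile",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.7},  # LoRA weight in the recommended 0.5-0.85 range
).images[0]
image.save("historia_preview.png")
```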
{"license": "mit", "tags": ["art", "not-for-all-audiences"], "datasets": ["CyberHarem/historia_menou_maougakuinnofutekigousha", "BangumiBase/maougakuinnofutekigousha"], "pipeline_tag": "text-to-image"}
CyberHarem/historia_menou_maougakuinnofutekigousha
null
[ "art", "not-for-all-audiences", "text-to-image", "dataset:CyberHarem/historia_menou_maougakuinnofutekigousha", "dataset:BangumiBase/maougakuinnofutekigousha", "license:mit", "region:us" ]
null
2024-04-13T18:23:34+00:00
[]
[]
TAGS #art #not-for-all-audiences #text-to-image #dataset-CyberHarem/historia_menou_maougakuinnofutekigousha #dataset-BangumiBase/maougakuinnofutekigousha #license-mit #region-us
LoRA model of Historia Menou/メノウ・ヒストリア (Maou Gakuin no Futekigousha)
====================================================================

What Is This?
-------------

This is the LoRA model of waifu Historia Menou/メノウ・ヒストリア (Maou Gakuin no Futekigousha).

How Is It Trained?
------------------

* This model is trained with kohya-ss/sd-scripts, and the test images are generated with a1111's webui and API sdk.
* The auto-training framework is maintained by DeepGHS Team. The architecture of the base model is 'SD1.5'.
* The dataset used for training is 'stage3-p480-1200' in CyberHarem/historia\_menou\_maougakuinnofutekigousha, which contains 170 images.
* The images in the dataset are auto-cropped from anime videos; more images of other waifus from the same anime can be found in BangumiBase/maougakuinnofutekigousha
* Trigger word is 'historia\_menou\_maougakuinnofutekigousha'.
* Trigger word of anime style is 'anime\_style'.
* Pruned core tags for this waifu are 'short hair, pointy ears, earrings, blue eyes, hair between eyes, grey hair, black hair, breasts'. You can add them to the prompt when some features of the waifu (e.g. hair color) are not stable.
* For more details on training, you can take a look at training configuration file.
* For more details on the LoRA, you can download it and read the metadata with a1111's webui.

How to Use It?
--------------

After downloading the safetensors file for the specified step, you need to use it like a common LoRA.

* Recommended LoRA weight is 0.5-0.85.
* Recommended trigger word weight is 0.7-1.1.

For example, if you want to use the model from step 1674, you need to download '1674/historia\_menou\_maougakuinnofutekigousha.safetensors' as LoRA. By using this model, you can generate images of the desired character.

Which Step Should I Use?
------------------------

We selected 5 good steps for you to choose from. The best one is step 1674.

780 images (722.01 MiB) were generated for auto-testing.

!Metrics Plot

Here are the previews of the recommended steps:

Anything Else?
--------------

Because the automation of LoRA training always annoys some people, it is not recommended for the following groups to use this model, and we express regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.

All Steps
---------

We uploaded the files for all steps. You can check the images and metrics, and download them, via the following links:

* Steps From 1023 to 1860
* Steps From 93 to 930
[]
[ "TAGS\n#art #not-for-all-audiences #text-to-image #dataset-CyberHarem/historia_menou_maougakuinnofutekigousha #dataset-BangumiBase/maougakuinnofutekigousha #license-mit #region-us \n" ]
text-generation
transformers
# Inex12M7-7B Inex12M7-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b) ## 🧩 Configuration ```yaml models: - model: MSL7/INEX12-7b # No parameters necessary for base model - model: liminerity/M7-7b parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: MSL7/INEX12-7b parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Inex12M7-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["liminerity/M7-7b"]}
automerger/Inex12M7-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:liminerity/M7-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T18:24:09+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-liminerity/M7-7b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Inex12M7-7B Inex12M7-7B is an automated merge created by Maxime Labonne using the following configuration. * liminerity/M7-7b ## Configuration ## Usage
[ "# Inex12M7-7B\n\nInex12M7-7B is an automated merge created by Maxime Labonne using the following configuration.\n* liminerity/M7-7b", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-liminerity/M7-7b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Inex12M7-7B\n\nInex12M7-7B is an automated merge created by Maxime Labonne using the following configuration.\n* liminerity/M7-7b", "## Configuration", "## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Sand-Red/Llama_OpenI
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:24:16+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base) * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Equall/Saul-Base layer_range: [0, 32] - model: HuggingFaceH4/zephyr-7b-beta layer_range: [0, 32] merge_method: slerp base_model: HuggingFaceH4/zephyr-7b-beta parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
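For intuition about the SLERP method used above, the sketch below spherically interpolates one pair of weight tensors; the per-layer `t` lists in the config then vary this interpolation factor across the self-attention and MLP blocks. This is a simplified sketch under those assumptions, not mergekit's exact implementation.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Simplified spherical interpolation between two weight tensors."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    a_n = a_f / (a_f.norm() + eps)
    b_n = b_f / (b_f.norm() + eps)
    # Angle between the two flattened weight vectors.
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    # Interpolate along the arc between the tensors instead of the chord.
    out = (torch.sin((1.0 - t) * omega) / so) * a_f + (torch.sin(t * omega) / so) * b_f
    return out.reshape(a.shape).to(a.dtype)
```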
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Equall/Saul-Base", "HuggingFaceH4/zephyr-7b-beta"]}
mergekit-community/mergekit-slerp-fmitxcg
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:Equall/Saul-Base", "base_model:HuggingFaceH4/zephyr-7b-beta", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T18:25:27+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-Equall/Saul-Base #base_model-HuggingFaceH4/zephyr-7b-beta #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * Equall/Saul-Base * HuggingFaceH4/zephyr-7b-beta ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Equall/Saul-Base\n* HuggingFaceH4/zephyr-7b-beta", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-Equall/Saul-Base #base_model-HuggingFaceH4/zephyr-7b-beta #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Equall/Saul-Base\n* HuggingFaceH4/zephyr-7b-beta", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
transformers
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| 
[GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF/resolve/main/Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
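Since the i1-Q6_K quant above ships in two parts, the sketch below reassembles it by plain byte concatenation (the convention these multi-part GGUF files follow, per the README linked above) and then loads it with `llama-cpp-python`; the context size and prompt format are illustrative assumptions.

```python
import shutil
from pathlib import Path

from llama_cpp import Llama

# Concatenate the split GGUF parts in order; plain byte concatenation suffices.
parts = sorted(Path(".").glob("Nous-Hermes-Llama2-70b.i1-Q6_K.gguf.part*"))
with open("Nous-Hermes-Llama2-70b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, out)  # stream to avoid loading ~28 GB parts into RAM

# Load the reassembled file; n_ctx and the Alpaca-style prompt are assumptions.
llm = Llama(model_path="Nous-Hermes-Llama2-70b.i1-Q6_K.gguf", n_ctx=4096)
out = llm("### Instruction:\nSay hello.\n\n### Response:\n", max_tokens=64)
print(out["choices"][0]["text"])
```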
{"language": ["en"], "license": ["mit"], "library_name": "transformers", "tags": ["llama-2", "self-instruct", "distillation", "synthetic instruction"], "base_model": "NousResearch/Nous-Hermes-Llama2-70b", "quantized_by": "mradermacher"}
mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF
null
[ "transformers", "gguf", "llama-2", "self-instruct", "distillation", "synthetic instruction", "en", "base_model:NousResearch/Nous-Hermes-Llama2-70b", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-13T18:27:11+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama-2 #self-instruct #distillation #synthetic instruction #en #base_model-NousResearch/Nous-Hermes-Llama2-70b #license-mit #endpoints_compatible #region-us
About
-----

weighted/imatrix quants of URL

static quants are available at URL

Usage
-----

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.

Provided Quants
---------------

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

!URL

And here are Artefact2's thoughts on the matter:
URL

FAQ / Model Request
-------------------

See URL for some answers to questions you might have and/or if you want some other model quantized.

Thanks
------

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #llama-2 #self-instruct #distillation #synthetic instruction #en #base_model-NousResearch/Nous-Hermes-Llama2-70b #license-mit #endpoints_compatible #region-us \n" ]