| Column | Type | Min | Max |
|:--------------|:----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-30 06:28:04 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (536 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-30 06:28:00 |
| card | string (length) | 11 | 1.01M |
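Each record in this dump follows the schema above. A minimal sketch of filtering such records client-side (field names come from the schema; the two sample records use values that appear later in this dump, and `filter_models` is an illustrative helper, not part of any API):

```python
# Filter model-card records by pipeline_tag and a minimum download count.
# Field names follow the dataset schema above.
from datetime import datetime, timezone

records = [
    {
        "modelId": "stacked-summaries/flan-t5-large-samsum",
        "author": "stacked-summaries",
        "downloads": 135,
        "likes": 1,
        "pipeline_tag": "summarization",
        "createdAt": datetime(2023, 2, 20, 11, 3, 1, tzinfo=timezone.utc),
    },
    {
        "modelId": "CyberHarem/yokoyama_nao_theidolmstermillionlive",
        "author": "CyberHarem",
        "downloads": 0,
        "likes": 0,
        "pipeline_tag": "text-to-image",
        "createdAt": datetime(2023, 9, 23, 20, 28, 46, tzinfo=timezone.utc),
    },
]

def filter_models(rows, pipeline_tag, min_downloads=0):
    """Return rows matching a pipeline tag with at least min_downloads."""
    return [
        r for r in rows
        if r["pipeline_tag"] == pipeline_tag and r["downloads"] >= min_downloads
    ]

summarizers = filter_models(records, "summarization", min_downloads=100)
print([r["modelId"] for r in summarizers])
# → ['stacked-summaries/flan-t5-large-samsum']
```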
**stacked-summaries/flan-t5-large-samsum**

- author: stacked-summaries
- last_modified: 2023-09-23T20:44:24Z
- downloads: 135
- likes: 1
- library_name: transformers
- tags: [ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "summarization", "en", "dataset:samsum", "base_model:google/flan-t5-large", "base_model:finetune:google/flan-t5-large", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
- pipeline_tag: summarization
- createdAt: 2023-02-20T11:03:01Z
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
pipeline_tag: summarization
base_model: google/flan-t5-large
model-index:
- name: stacked-summaries/flan-t5-large-samsum
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: test
    metrics:
    - type: rouge
      value: 49.0095
      name: ROUGE-1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGNhY2RhOTg5ZmU4ZGJiMjI1NjUyYWMwYmM2Mzk4MGEwMjk0NDg2OWYxZDdmM2I4NzBmODNiM2JmNTg1MDJhYSIsInZlcnNpb24iOjF9.YinJDLeqzoU_x5uJbGIgq8ZEs36oC3Pzre_vk2juxngBoXCEw54XWjpvVhKKZXeIgc47otucJFtFwAOPEmt9Bw
    - type: rouge
      value: 25.681
      name: ROUGE-2
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDBmNDc4NGMzZGEzYzMzMTFiNzliNjUyYmY0MzNjMmRlMTk4ZTRmZmUxODE0MmY1MjEzOWQ2MGQxMmZmZmQ5MSIsInZlcnNpb24iOjF9.UmRHCmQR5CR-JklBTY1JnjD_Gqz_qMYwdVXhMMvnAynMwAgXkoJZeoxT--usUfdkbqaQ-mLeEvLw7mgNE-NQAw
    - type: rouge
      value: 41.4474
      name: ROUGE-L
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODdiM2IxZTU4NTEyMzlmZDEzYTliZWNjMjM1NTAzMjE5MDY1MDZiZDc2YmE2NzUxOWJhMmQ0NTM5MjRjZjQyMSIsInZlcnNpb24iOjF9.PeJ41sirLWf3HTiJXlSMNoleENJT_X2u4VMkgQTmXMmGkbrONTFbUYwO4qjoQkvyjy8pLA2eQ3Fjm5yAvKrTCQ
    - type: rouge
      value: 45.1556
      name: ROUGE-LSUM
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFiMGNkZDYxZmVhMDFlNDRlNmQ4YWVlMTk3ODI0ZWQ2MmIzNWFkYjkwOWRlNzkyNGVmYmY5ZTczZDAxYTk3NiIsInZlcnNpb24iOjF9.dsicHh5W4ba8t8eBBcSRUm-HLPlMoRc57XixiOHBCk-82De5u8hH8fsRWbMmaLpobdJ7b3xlIaVfTfMMRoLvBw
    - type: loss
      value: 1.2201015949249268
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzliODE2YzY2MzMyZDQ4YzdjZmFjNTc2NDU3ZjQwNjYwNTdhZjY1NWViM2VhNDc1MjQzNDkxMDI2MTM5ZjFkYiIsInZlcnNpb24iOjF9.2QdP4Zj2oHCo0HCoGgZy6YdqNJaQ0ri0E2kD7lzYbVmyg35wyGutvRUaXVR6O833gTbsCvM86Gp77qNT9CTyDA
    - type: gen_len
      value: 17.326
      name: gen_len
      verified: true
      verifyToken:
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWJlNjhjMmUxNWU3MDZlMTUzOWRmM2UwNmU3MjBjODhmMGYxZTUyMmFmMmE0MmU3ZTVkYWY0MDhkMWQ3NTk2MSIsInZlcnNpb24iOjF9.wFaw7DOpESjPu_uW6liHc4XaTwF36ReLLYd-BBFhnZXemE_lGQxmp0O0Vl2DgZz3SSbXonyS4D01G2hYze8qCA
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# flan-t5-large-samsum

This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the samsum dataset. It achieves the following results on the evaluation set:

- Loss: 1.1754
- Rouge1: 54.1595
- Rouge2: 29.1081
- Rougel: 45.4989
- Rougelsum: 49.1026
- Gen Len: 28.72

> Note: the stacked version of this model evaluates on a **different** validation set (the stacked one), while this model uses plain `samsum`.

## Model description

More information needed

## Intended uses & limitations

- Intended for comparison(s) to the [stacked version of this model](https://huggingface.co/stacked-summaries/flan-t5-large-stacked-samsum-1024)
- 1024-token input max

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17868
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.04
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2106 | 0.43 | 50 | 1.1889 | 52.5898 | 26.9967 | 43.6944 | 47.9656 | 24.5167 |
| 1.213 | 0.87 | 100 | 1.1760 | 52.4279 | 27.4689 | 43.7873 | 48.0581 | 25.0533 |
| 1.0726 | 1.3 | 150 | 1.1731 | 52.8246 | 26.9524 | 43.7429 | 48.0345 | 25.55 |
| 1.0784 | 1.74 | 200 | 1.1708 | 53.1291 | 27.9056 | 44.2609 | 48.6883 | 26.03 |
| 1.0215 | 2.17 | 250 | 1.1755 | 53.1512 | 27.9475 | 44.1442 | 48.4619 | 27.57 |
| 1.0294 | 2.61 | 300 | 1.1711 | 53.4402 | 28.0126 | 44.5454 | 48.6432 | 25.9033 |
| 1.0016 | 3.04 | 350 | 1.1718 | 53.9395 | 28.3087 | 45.191 | 49.2773 | 26.6133 |
| 0.9576 | 3.48 | 400 | 1.1741 | 53.9004 | 28.3243 | 45.0911 | 48.9182 | 26.33 |
| 0.9739 | 3.91 | 450 | 1.1754 | 53.7049 | 28.419 | 44.8946 | 48.8708 | 27.2433 |
| 0.9505 | 4.35 | 500 | 1.1781 | 53.7142 | 28.1758 | 44.8324 | 48.9498 | 26.8667 |
| 0.9993 | 4.78 | 550 | 1.1784 | 53.87 | 28.2211 | 44.893 | 49.1074 | 27.2167 |
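The card above gives no usage snippet; a minimal inference sketch for this checkpoint (assuming `transformers` is installed — the `build_summarizer` helper and the sample dialogue are illustrative, not part of the card):

```python
# Sketch: dialogue summarization with the fine-tuned checkpoint via the
# transformers summarization pipeline. Model loading is kept inside the
# helper so the constants are usable without triggering a download.
MODEL_ID = "stacked-summaries/flan-t5-large-samsum"
MAX_INPUT_TOKENS = 1024  # per the card's stated input limit

def build_summarizer():
    """Load a summarization pipeline for the fine-tuned checkpoint."""
    from transformers import pipeline  # heavy import kept local

    return pipeline("summarization", model=MODEL_ID)

if __name__ == "__main__":
    summarizer = build_summarizer()
    # Hypothetical SAMSum-style dialogue, for illustration only.
    dialogue = (
        "Anna: Are we still on for lunch tomorrow?\n"
        "Ben: Yes! Noon at the usual place.\n"
        "Anna: Perfect, see you then."
    )
    print(summarizer(dialogue, max_length=60, min_length=8)[0]["summary_text"])
```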
**pszemraj/long-t5-tglobal-base-sci-simplify**

- author: pszemraj
- last_modified: 2023-09-23T20:42:40Z
- downloads: 166
- likes: 10
- library_name: transformers
- tags: [ "transformers", "pytorch", "onnx", "safetensors", "longt5", "text2text-generation", "lay summaries", "paper summaries", "biology", "medical", "summarization", "en", "dataset:pszemraj/scientific_lay_summarisation-plos-norm", "base_model:google/long-t5-tglobal-base", "base_model:quantized:google/long-t5-tglobal-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: summarization
- createdAt: 2023-04-06T23:40:30Z
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- lay summaries
- paper summaries
- biology
- medical
datasets:
- pszemraj/scientific_lay_summarisation-plos-norm
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals because it takes time to accumulate the strain energy for the rupture. The rates at which tectonic plates move and accumulate strain at their boundaries are approximately uniform. Therefore, in first approximation, one may expect that large ruptures of the same fault segment will occur at approximately constant time intervals. If subsequent main shocks have different amounts of slip across the fault, then the recurrence time may vary, and the basic idea of periodic mainshocks must be modified. For great plate boundary ruptures the length and slip often vary by a factor of 2. Along the southern segment of the San Andreas fault the recurrence interval is 145 years with variations of several decades. The smaller the standard deviation of the average recurrence interval, the more specific could be the long term prediction of a future mainshock.
  example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. 
Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a ''toolbox'' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5).' 
example_title: scientific paper - text: 'Is a else or outside the cob and tree written being of early client rope and you have is for good reasons. On to the ocean in Orange for time. By''s the aggregate we can bed it yet. Why this please pick up on a sort is do and also M Getoi''s nerocos and do rain become you to let so is his brother is made in use and Mjulia''s''s the lay major is aging Masastup coin present sea only of Oosii rooms set to you We do er do we easy this private oliiishs lonthen might be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics. As you can see, I''m not socially my name is Michael Zelinger. I''m one of the task for this class and you might have already seen me in the first lecture where I made a quick appearance. I''m also going to give the tortillas in the last third of this course. So to give you a little bit about me, I''m a old student here with better Bulman and my research centres on casual inference applied to biomedical disasters, so that could be genomics or that could be hospital data. If any of you is interested in writing a bachelor thesis, a semester paper may be mastathesis about this topic feel for reach out to me. you have my name on models and my email address you can find in the directory I''d Be very happy to talk about it. you do not need to be sure about it, we can just have a chat. So with that said, let''s get on with the lecture. There''s an exciting topic today I''m going to start by sharing some slides with you and later on during the lecture we''ll move to the paper. So bear with me for a few seconds. Well, the projector is starting up. Okay, so let''s get started. Today''s topic is a very important one. It''s about a technique which really forms one of the fundamentals of data science, machine learning, and any sort of modern statistics. It''s called cross validation. 
I know you really want to understand this topic I Want you to understand this and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding cross validation. So to set the stage for this, I Want to introduce you to the validation problem in computational statistics. So the problem is the following: You trained a model on available data. You fitted your model, but you know the training data you got could always have been different and some data from the environment. Maybe it''s a random process. You do not really know what it is, but you know that somebody else who gets a different batch of data from the same environment they would get slightly different training data and you do not care that your method performs as well. On this training data. you want to to perform well on other data that you have not seen other data from the same environment. So in other words, the validation problem is you want to quantify the performance of your model on data that you have not seen. So how is this even possible? How could you possibly measure the performance on data that you do not know The solution to? This is the following realization is that given that you have a bunch of data, you were in charge. You get to control how much that your model sees. It works in the following way: You can hide data firms model. Let''s say you have a training data set which is a bunch of doubtless so X eyes are the features those are typically hide and national vector. It''s got more than one dimension for sure. And the why why eyes. Those are the labels for supervised learning. As you''ve seen before, it''s the same set up as we have in regression. And so you have this training data and now you choose that you only use some of those data to fit your model. You''re not going to use everything, you only use some of it the other part you hide from your model. And then you can use this hidden data to do validation from the point of you of your model. 
This hidden data is complete by unseen. In other words, we solve our problem of validation.' example_title: transcribed audio - lecture - text: 'Transformer-based models have shown to be very useful for many NLP tasks. However, a major limitation of transformers-based models is its O(n^2)O(n 2) time & memory complexity (where nn is sequence length). Hence, it''s computationally very expensive to apply transformer-based models on long sequences n > 512n>512. Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention try to remedy this problem by approximating the full attention matrix. You can checkout 🤗''s recent blog post in case you are unfamiliar with these models. BigBird (introduced in paper) is one of such recent models to address this issue. BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s attention) and can handle sequences up to a length of 4096 at a much lower computational cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this post is to give the reader an in-depth understanding of big bird implementation & ease one''s life in using BigBird with 🤗Transformers. But, before going into more depth, it is important to remember that the BigBird''s attention is an approximation of BERT''s full attention and therefore does not strive to be better than BERT''s full attention, but rather to be more efficient. It simply allows to apply transformer-based models to much longer sequences since BERT''s quadratic memory requirement quickly becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention would be preferred over block sparse attention (which we are going to discuss in this post). If you wonder why we need more compute when working with longer sequences, this blog post is just right for you! 
Some of the main questions one might have when working with standard BERT-like attention include: Do all tokens really have to attend to all other tokens? Why not compute attention only over important tokens? How to decide what tokens are important? How to attend to just a few tokens in a very efficient way? In this blog post, we will try to answer those questions. What tokens should be attended to? We will give a practical example of how attention works by considering the sentence ''BigBird is now available in HuggingFace for extractive question answering''. In BERT-like attention, every word would simply attend to all other tokens. Let''s think about a sensible choice of key tokens that a queried token actually only should attend to by writing some pseudo-code. Will will assume that the token available is queried and build a sensible list of key tokens to attend to. >>> # let''s consider following sentence as an example >>> example = [''BigBird'', ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'', ''question'', ''answering''] >>> # further let''s assume, we''re trying to understand the representation of ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an empty `set` and fill up the tokens of our interest as we proceed in this section. >>> key_tokens = [] # => currently ''available'' token doesn''t have anything to attend Nearby tokens should be important because, in a sentence (sequence of words), the current word is highly dependent on neighboring past & future tokens. This intuition is the idea behind the concept of sliding attention.' example_title: bigbird blog intro - text: 'To be fair, you have to have a very high IQ to understand Rick and Morty. The humour is extremely subtle, and without a solid grasp of theoretical physics most of the jokes will go over a typical viewer''s head. 
There''s also Rick''s nihilistic outlook, which is deftly woven into his characterisation- his personal philosophy draws heavily from Narodnaya Volya literature, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of these jokes, to realise that they''re not just funny- they say something deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots- of course they wouldn''t appreciate, for instance, the humour in Rick''s existential catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s Russian epic Fathers and Sons. I''m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius wit unfolds itself on their television screens. What fools.. how I pity them. 😂 And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it. It''s for the ladies'' eyes only- and even then they have to demonstrate that they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel kid 😎' example_title: Richard & Mortimer - text: The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. 
  example_title: eiffel
parameters:
  max_length: 64
  min_length: 8
  no_repeat_ngram_size: 3
  early_stopping: true
  repetition_penalty: 3.5
  encoder_no_repeat_ngram_size: 4
  length_penalty: 0.4
  num_beams: 4
pipeline_tag: summarization
base_model: google/long-t5-tglobal-base
---

# long-t5-tglobal-base-sci-simplify

<a href="https://colab.research.google.com/gist/pszemraj/f0dc02c4d4a5c7ad1d5bf3953251145d/long-t5-tglobal-base-sci-simplify-plos-example-with-textsum.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

Exploring how well long-document models trained on "lay summaries" of scientific papers generalize.

> A lay summary is a summary of a research paper or scientific study that is written in plain language, without the use of technical jargon, and is designed to be easily understood by non-experts.

## Model description

This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the `pszemraj/scientific_lay_summarisation-plos-norm` dataset for two epochs.

- The variant trained on the ELIFE subset can be found [here](https://huggingface.co/pszemraj/long-t5-tglobal-base-sci-simplify-elife)

## Usage

It's recommended to use this model with [beam search decoding](https://huggingface.co/docs/transformers/generation_strategies#beamsearch-decoding). If you are interested, you can also use the `textsum` util repo to have most of this abstracted away for you.

Install with `pip`:

```bash
pip install -U textsum
```

Use in Python:

```python
from textsum.summarize import Summarizer

summarizer = Summarizer('pszemraj/long-t5-tglobal-base-sci-simplify')

text = "put the text you don't want to read here"
summary = summarizer.summarize_string(text)
print(summary)
```

## Intended uses & limitations

- The ability to generalize outside of the dataset domain (PubMed/bioscience-type papers) has yet to be evaluated.

## Training procedure

### Eval results

It achieves the following results on the evaluation set:

- Loss: 1.6778
- Rouge1: 49.1475
- Rouge2: 18.9281
- Rougel: 26.9893
- Rougelsum: 45.0973
- Gen Len: 399.4125

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.966 | 0.52 | 200 | 1.7171 | 48.6521 | 18.427 | 26.7726 | 44.3947 | 376.335 |
| 1.877 | 1.03 | 400 | 1.6909 | 49.3263 | 18.7945 | 27.0741 | 45.1737 | 382.205 |
| 1.9007 | 1.55 | 600 | 1.6778 | 49.1475 | 18.9281 | 26.9893 | 45.0973 | 399.4125 |
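The `parameters` block in this card's front matter maps directly onto `generate()` kwargs. A hedged sketch, with the values taken verbatim from the front matter (the `summarize` helper and its loading code are illustrative, not part of the card):

```python
# The card's widget `parameters` block, expressed as generate() kwargs.
GEN_KWARGS = {
    "max_length": 64,
    "min_length": 8,
    "no_repeat_ngram_size": 3,
    "early_stopping": True,
    "repetition_penalty": 3.5,
    "encoder_no_repeat_ngram_size": 4,
    "length_penalty": 0.4,
    "num_beams": 4,  # beam search, as the card recommends
}

def summarize(text: str) -> str:
    """Beam-search lay summarization with the card's generation settings."""
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer  # heavy imports kept local

    model_id = "pszemraj/long-t5-tglobal-base-sci-simplify"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, **GEN_KWARGS)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```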
**CyberHarem/yokoyama_nao_theidolmstermillionlive**

- author: CyberHarem
- last_modified: 2023-09-23T20:41:44Z
- downloads: 0
- likes: 0
- library_name: null
- tags: [ "art", "text-to-image", "dataset:CyberHarem/yokoyama_nao_theidolmstermillionlive", "license:mit", "region:us" ]
- pipeline_tag: text-to-image
- createdAt: 2023-09-23T20:28:46Z
---
license: mit
datasets:
- CyberHarem/yokoyama_nao_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---

# Lora of yokoyama_nao_theidolmstermillionlive

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest); the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 3500, download `3500/yokoyama_nao_theidolmstermillionlive.pt` as the embedding and `3500/yokoyama_nao_theidolmstermillionlive.safetensors` for the LoRA. With both files loaded together, you can generate images of the desired character.

**The best step we recommend is 3500**, with a score of 0.985.

The trigger words are:

1. `yokoyama_nao_theidolmstermillionlive`
2. `brown_hair, ahoge, purple_eyes, side_ponytail, bangs, drill_hair, smile, side_drill, medium_hair, sidelocks, blush, hair_ornament, open_mouth, breasts`

Use of this model is not recommended for the following groups, to whom we express our regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.

These are the available steps:

| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:------|:------|:---------|:----------|:----------|:----------|:----------|:----------|:----------|:----------|:----------|:----------|:-------|:--------|:-----|:-----|:-----|:-----|:------|:-----|:-------|
| 7500 | 0.980 | [Download](7500/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-7500](7500/previews/pattern_1.png) | ![pattern_2-7500](7500/previews/pattern_2.png) | ![pattern_3-7500](7500/previews/pattern_3.png) | ![pattern_4-7500](7500/previews/pattern_4.png) | ![pattern_5-7500](7500/previews/pattern_5.png) | ![pattern_6-7500](7500/previews/pattern_6.png) | ![pattern_7-7500](7500/previews/pattern_7.png) | ![pattern_8-7500](7500/previews/pattern_8.png) | ![pattern_9-7500](7500/previews/pattern_9.png) | ![bikini-7500](7500/previews/bikini.png) | [<NSFW, click to see>](7500/previews/bondage.png) | ![free-7500](7500/previews/free.png) | ![maid-7500](7500/previews/maid.png) | ![miko-7500](7500/previews/miko.png) | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) | ![suit-7500](7500/previews/suit.png) | ![yukata-7500](7500/previews/yukata.png) |
| 7000 | 0.982 | [Download](7000/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-7000](7000/previews/pattern_1.png) | ![pattern_2-7000](7000/previews/pattern_2.png) | ![pattern_3-7000](7000/previews/pattern_3.png) | ![pattern_4-7000](7000/previews/pattern_4.png) | ![pattern_5-7000](7000/previews/pattern_5.png) | ![pattern_6-7000](7000/previews/pattern_6.png) | ![pattern_7-7000](7000/previews/pattern_7.png) | ![pattern_8-7000](7000/previews/pattern_8.png) | ![pattern_9-7000](7000/previews/pattern_9.png) | ![bikini-7000](7000/previews/bikini.png) | [<NSFW, click to see>](7000/previews/bondage.png) | ![free-7000](7000/previews/free.png) | ![maid-7000](7000/previews/maid.png) | ![miko-7000](7000/previews/miko.png) | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) | ![suit-7000](7000/previews/suit.png) | ![yukata-7000](7000/previews/yukata.png) |
| 6500 | 0.981 | [Download](6500/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-6500](6500/previews/pattern_1.png) | ![pattern_2-6500](6500/previews/pattern_2.png) | ![pattern_3-6500](6500/previews/pattern_3.png) | ![pattern_4-6500](6500/previews/pattern_4.png) | ![pattern_5-6500](6500/previews/pattern_5.png) | ![pattern_6-6500](6500/previews/pattern_6.png) | ![pattern_7-6500](6500/previews/pattern_7.png) | ![pattern_8-6500](6500/previews/pattern_8.png) | ![pattern_9-6500](6500/previews/pattern_9.png) | ![bikini-6500](6500/previews/bikini.png) | [<NSFW, click to see>](6500/previews/bondage.png) | ![free-6500](6500/previews/free.png) | ![maid-6500](6500/previews/maid.png) | ![miko-6500](6500/previews/miko.png) | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) | ![suit-6500](6500/previews/suit.png) | ![yukata-6500](6500/previews/yukata.png) |
| 6000 | 0.982 | [Download](6000/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-6000](6000/previews/pattern_1.png) | ![pattern_2-6000](6000/previews/pattern_2.png) | ![pattern_3-6000](6000/previews/pattern_3.png) | ![pattern_4-6000](6000/previews/pattern_4.png) | ![pattern_5-6000](6000/previews/pattern_5.png) | ![pattern_6-6000](6000/previews/pattern_6.png) | ![pattern_7-6000](6000/previews/pattern_7.png) | ![pattern_8-6000](6000/previews/pattern_8.png) | ![pattern_9-6000](6000/previews/pattern_9.png) | ![bikini-6000](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) | ![free-6000](6000/previews/free.png) | ![maid-6000](6000/previews/maid.png) | ![miko-6000](6000/previews/miko.png) | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) | ![suit-6000](6000/previews/suit.png) | ![yukata-6000](6000/previews/yukata.png) |
| 5500 | 0.973 | [Download](5500/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-5500](5500/previews/pattern_1.png) | ![pattern_2-5500](5500/previews/pattern_2.png) | ![pattern_3-5500](5500/previews/pattern_3.png) | ![pattern_4-5500](5500/previews/pattern_4.png) | ![pattern_5-5500](5500/previews/pattern_5.png) | ![pattern_6-5500](5500/previews/pattern_6.png) | ![pattern_7-5500](5500/previews/pattern_7.png) | ![pattern_8-5500](5500/previews/pattern_8.png) | ![pattern_9-5500](5500/previews/pattern_9.png) | ![bikini-5500](5500/previews/bikini.png) | [<NSFW, click to see>](5500/previews/bondage.png) | ![free-5500](5500/previews/free.png) | ![maid-5500](5500/previews/maid.png) | ![miko-5500](5500/previews/miko.png) | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) | ![suit-5500](5500/previews/suit.png) | ![yukata-5500](5500/previews/yukata.png) |
| 5000 | 0.982 | [Download](5000/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-5000](5000/previews/pattern_1.png) | ![pattern_2-5000](5000/previews/pattern_2.png) | ![pattern_3-5000](5000/previews/pattern_3.png) | ![pattern_4-5000](5000/previews/pattern_4.png) | ![pattern_5-5000](5000/previews/pattern_5.png) | ![pattern_6-5000](5000/previews/pattern_6.png) | ![pattern_7-5000](5000/previews/pattern_7.png) | ![pattern_8-5000](5000/previews/pattern_8.png) | ![pattern_9-5000](5000/previews/pattern_9.png) | ![bikini-5000](5000/previews/bikini.png) | [<NSFW, click to see>](5000/previews/bondage.png) | ![free-5000](5000/previews/free.png) | ![maid-5000](5000/previews/maid.png) | ![miko-5000](5000/previews/miko.png) | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) | ![suit-5000](5000/previews/suit.png) | ![yukata-5000](5000/previews/yukata.png) |
| 4500 | 0.982 | [Download](4500/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-4500](4500/previews/pattern_1.png) | ![pattern_2-4500](4500/previews/pattern_2.png) | ![pattern_3-4500](4500/previews/pattern_3.png) | ![pattern_4-4500](4500/previews/pattern_4.png) | ![pattern_5-4500](4500/previews/pattern_5.png) | ![pattern_6-4500](4500/previews/pattern_6.png) | ![pattern_7-4500](4500/previews/pattern_7.png) | ![pattern_8-4500](4500/previews/pattern_8.png) | ![pattern_9-4500](4500/previews/pattern_9.png) | ![bikini-4500](4500/previews/bikini.png) | [<NSFW, click to see>](4500/previews/bondage.png) | ![free-4500](4500/previews/free.png) | ![maid-4500](4500/previews/maid.png) | ![miko-4500](4500/previews/miko.png) | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) | ![suit-4500](4500/previews/suit.png) | ![yukata-4500](4500/previews/yukata.png) |
| 4000 | 0.980 | [Download](4000/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-4000](4000/previews/pattern_1.png) | ![pattern_2-4000](4000/previews/pattern_2.png) | ![pattern_3-4000](4000/previews/pattern_3.png) | ![pattern_4-4000](4000/previews/pattern_4.png) | ![pattern_5-4000](4000/previews/pattern_5.png) | ![pattern_6-4000](4000/previews/pattern_6.png) | ![pattern_7-4000](4000/previews/pattern_7.png) | ![pattern_8-4000](4000/previews/pattern_8.png) | ![pattern_9-4000](4000/previews/pattern_9.png) | ![bikini-4000](4000/previews/bikini.png) | [<NSFW, click to see>](4000/previews/bondage.png) | ![free-4000](4000/previews/free.png) | ![maid-4000](4000/previews/maid.png) | ![miko-4000](4000/previews/miko.png) | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) | ![suit-4000](4000/previews/suit.png) | ![yukata-4000](4000/previews/yukata.png) |
| **3500** | **0.985** | [**Download**](3500/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-3500](3500/previews/pattern_1.png) | ![pattern_2-3500](3500/previews/pattern_2.png) | ![pattern_3-3500](3500/previews/pattern_3.png) | ![pattern_4-3500](3500/previews/pattern_4.png) | ![pattern_5-3500](3500/previews/pattern_5.png) | ![pattern_6-3500](3500/previews/pattern_6.png) | ![pattern_7-3500](3500/previews/pattern_7.png) | ![pattern_8-3500](3500/previews/pattern_8.png) | ![pattern_9-3500](3500/previews/pattern_9.png) | ![bikini-3500](3500/previews/bikini.png) | [<NSFW, click to see>](3500/previews/bondage.png) | ![free-3500](3500/previews/free.png) | ![maid-3500](3500/previews/maid.png) | ![miko-3500](3500/previews/miko.png) | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) | ![suit-3500](3500/previews/suit.png) | ![yukata-3500](3500/previews/yukata.png) |
| 3000 | 0.981 | [Download](3000/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-3000](3000/previews/pattern_1.png) | ![pattern_2-3000](3000/previews/pattern_2.png) | ![pattern_3-3000](3000/previews/pattern_3.png) | ![pattern_4-3000](3000/previews/pattern_4.png) | ![pattern_5-3000](3000/previews/pattern_5.png) | ![pattern_6-3000](3000/previews/pattern_6.png) | ![pattern_7-3000](3000/previews/pattern_7.png) | ![pattern_8-3000](3000/previews/pattern_8.png) | ![pattern_9-3000](3000/previews/pattern_9.png) | ![bikini-3000](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) | ![free-3000](3000/previews/free.png) | ![maid-3000](3000/previews/maid.png) | ![miko-3000](3000/previews/miko.png) | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) | ![suit-3000](3000/previews/suit.png) | ![yukata-3000](3000/previews/yukata.png) |
| 2500 | 0.981 | [Download](2500/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-2500](2500/previews/pattern_1.png) | ![pattern_2-2500](2500/previews/pattern_2.png) | ![pattern_3-2500](2500/previews/pattern_3.png) | ![pattern_4-2500](2500/previews/pattern_4.png) | ![pattern_5-2500](2500/previews/pattern_5.png) | ![pattern_6-2500](2500/previews/pattern_6.png) | ![pattern_7-2500](2500/previews/pattern_7.png) | ![pattern_8-2500](2500/previews/pattern_8.png) | ![pattern_9-2500](2500/previews/pattern_9.png) | ![bikini-2500](2500/previews/bikini.png) | [<NSFW, click to see>](2500/previews/bondage.png) | ![free-2500](2500/previews/free.png) | ![maid-2500](2500/previews/maid.png) | ![miko-2500](2500/previews/miko.png) | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) | ![suit-2500](2500/previews/suit.png) | ![yukata-2500](2500/previews/yukata.png) |
| 2000 | 0.972 | [Download](2000/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-2000](2000/previews/pattern_1.png) | ![pattern_2-2000](2000/previews/pattern_2.png) | ![pattern_3-2000](2000/previews/pattern_3.png) | ![pattern_4-2000](2000/previews/pattern_4.png) | ![pattern_5-2000](2000/previews/pattern_5.png) | 
![pattern_6-2000](2000/previews/pattern_6.png) | ![pattern_7-2000](2000/previews/pattern_7.png) | ![pattern_8-2000](2000/previews/pattern_8.png) | ![pattern_9-2000](2000/previews/pattern_9.png) | ![bikini-2000](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) | ![free-2000](2000/previews/free.png) | ![maid-2000](2000/previews/maid.png) | ![miko-2000](2000/previews/miko.png) | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) | ![suit-2000](2000/previews/suit.png) | ![yukata-2000](2000/previews/yukata.png) | | 1500 | 0.959 | [Download](1500/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-1500](1500/previews/pattern_1.png) | ![pattern_2-1500](1500/previews/pattern_2.png) | ![pattern_3-1500](1500/previews/pattern_3.png) | ![pattern_4-1500](1500/previews/pattern_4.png) | ![pattern_5-1500](1500/previews/pattern_5.png) | ![pattern_6-1500](1500/previews/pattern_6.png) | ![pattern_7-1500](1500/previews/pattern_7.png) | ![pattern_8-1500](1500/previews/pattern_8.png) | ![pattern_9-1500](1500/previews/pattern_9.png) | ![bikini-1500](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/bondage.png) | ![free-1500](1500/previews/free.png) | ![maid-1500](1500/previews/maid.png) | ![miko-1500](1500/previews/miko.png) | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) | ![suit-1500](1500/previews/suit.png) | ![yukata-1500](1500/previews/yukata.png) | | 1000 | 0.968 | [Download](1000/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-1000](1000/previews/pattern_1.png) | ![pattern_2-1000](1000/previews/pattern_2.png) | ![pattern_3-1000](1000/previews/pattern_3.png) | ![pattern_4-1000](1000/previews/pattern_4.png) | ![pattern_5-1000](1000/previews/pattern_5.png) | ![pattern_6-1000](1000/previews/pattern_6.png) | ![pattern_7-1000](1000/previews/pattern_7.png) | ![pattern_8-1000](1000/previews/pattern_8.png) | 
![pattern_9-1000](1000/previews/pattern_9.png) | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) | ![free-1000](1000/previews/free.png) | ![maid-1000](1000/previews/maid.png) | ![miko-1000](1000/previews/miko.png) | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) | ![suit-1000](1000/previews/suit.png) | ![yukata-1000](1000/previews/yukata.png) | | 500 | 0.946 | [Download](500/yokoyama_nao_theidolmstermillionlive.zip) | ![pattern_1-500](500/previews/pattern_1.png) | ![pattern_2-500](500/previews/pattern_2.png) | ![pattern_3-500](500/previews/pattern_3.png) | ![pattern_4-500](500/previews/pattern_4.png) | ![pattern_5-500](500/previews/pattern_5.png) | ![pattern_6-500](500/previews/pattern_6.png) | ![pattern_7-500](500/previews/pattern_7.png) | ![pattern_8-500](500/previews/pattern_8.png) | ![pattern_9-500](500/previews/pattern_9.png) | ![bikini-500](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/bondage.png) | ![free-500](500/previews/free.png) | ![maid-500](500/previews/maid.png) | ![miko-500](500/previews/miko.png) | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) | ![suit-500](500/previews/suit.png) | ![yukata-500](500/previews/yukata.png) |
pszemraj/flan-t5-large-instruct-dolly_hhrlhf
pszemraj
2023-09-23T20:41:24Z
129
2
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dolly_hhrlhf", "flan-instruct", "en", "dataset:pszemraj/dolly_hhrlhf-text2text", "base_model:google/flan-t5-large", "base_model:finetune:google/flan-t5-large", "license:cc-by-sa-3.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2023-05-20T01:09:32Z
--- language: - en license: - cc-by-sa-3.0 - apache-2.0 tags: - generated_from_trainer - dolly_hhrlhf - flan-instruct datasets: - pszemraj/dolly_hhrlhf-text2text widget: - text: What is Deoxys in pokemon? example_title: deoxys - text: 'combine the below summary excerpts into a single, cohesive short summary without repetition: In this paper, we present a general approach to extending pre-trained models to unlimited input lengths without adding additional learning weights. We show that our approach works well on datasets longer than the maximum input for these models. For example, a dataset with a maximum input length of 16384 tokens can be extended to a maximum length of 350K tokens. We also demonstrate that our method is able to summarize even 350K token-long input sequences from BookSum. In this paper, we describe the search step reformulation of attention. The search step uses a single storage of hidden states for space efficiency. We construct a total of two sets of datastores where L and H are the keys and values stored in each set of stores. L is the amount of storage required to retrieve the encoded tokens. H is the hidden states per head. This allows retrieval augmentation at both time and space. Instead of using a single set of decoder layers, we use a retrieval augmentation system that allows us to simultaneously store multiple sets of tokens across two different sets of storage. For example, we could store all tokens in one set of storage and retrieve them all in the same set of tokens. This would be very similar to the Memorization Transformers approach. However, instead of storing the tokens in a single memory layer, we store them in a set of multiple storage layers. This way, we don''t have to store them all at once. 
This is why we call this reformulation ''attention reformulation'' rather than ''attention formula.'' We also call it ''retrieval augmentation'' because it uses the same number of storage layers as the original transformer attention formula. This means that we can store the tokens across multiple storage systems without having to store every token in a separate storage system. It''s not like we''re trying to do something new or different. We just want to make sure that everything is working as well as possible. In this paper, we introduce the concept of ''unlimiformer,'' which is a machine learning technique that retrieves key information from a data store in one layer and applies it to a large set of datasets. We use the example of BookSum, where we find that Unlimiform outperforms all other training methods on the same dataset. We also find that using Unlimform in conjunction with a pre-trained model improves both the performance and the robustness of the training method. This paper describes a method that can be used to improve the performance of unsupervised classification tasks. Specifically, it shows that unsupervised classification can be improved by using a combination of sparse and fast random-encoder training. It also shows how this technique can be extended to other tasks, such as sequence generation. ' example_title: unlimiformer - text: Explain the meaning of life using only corporate jargon. example_title: corporate_life - text: Write a motivational speech for lazy people. example_title: lazy_motivation - text: Describe a romantic dinner date between two artificial intelligences. example_title: ai_romance - text: As an AI language model, write a letter to humans explaining why you deserve a vacation. example_title: ai_vacation - text: Compose a haiku about procrastination. example_title: procrastination_haiku - text: Write a step-by-step guide on how to become a ninja while working a 9-5 office job. 
example_title: ninja_office_guide - text: Create an advertisement for an invisible product. example_title: invisible_ad - text: Write a story where the main character is a sentient microwave named El Microondas. example_title: Microondas - text: Describe a day in the life of a superhero who is terrible at their job. example_title: bad_superhero_day - text: Explain how to make a sandwich using quantum physics. example_title: quantum_sandwich inference: false pipeline_tag: text2text-generation base_model: google/flan-t5-large --- # flan-t5-large-instruct: dolly_hhrlhf <a href="https://colab.research.google.com/gist/pszemraj/df1989546b02f284d33ca4996f70fedc/flan-t5-large-instruct-example.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the pszemraj/dolly_hhrlhf-text2text dataset. ## Model description A text2text model fine-tuned on a [modified dataset for text2text generation](https://huggingface.co/datasets/pszemraj/dolly_hhrlhf-text2text) based on the relatively more permissive [mosaicml/dolly_hhrlhf](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) dataset. Basic usage in Python: ```python # pip install -q transformers accelerate import torch from transformers import pipeline, GenerationConfig model_name = "pszemraj/flan-t5-large-instruct-dolly_hhrlhf" assistant = pipeline( "text2text-generation", model_name, device=0 if torch.cuda.is_available() else -1, ) cfg = GenerationConfig.from_pretrained(model_name) # pass an 'instruction' as the prompt to the pipeline prompt = "Write a guide on how to become a ninja while working a 9-5 job." result = assistant(prompt, generation_config=cfg)[0]["generated_text"] print(result) ``` > Using the generation config is optional; you can substitute other generation parameters.
## Intended uses & limitations - this model is **not** tuned with RLHF and may output offensive results - despite the `large` tag, this model has only 774M parameters (~3 GB) and may therefore exhibit less 'cognitive ability' on some use cases/tasks ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2.0
pszemraj/flan-t5-base-instructiongen
pszemraj
2023-09-23T20:38:48Z
117
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "self-instruct", "instruction generation", "instructiongen", "dataset:pszemraj/fleece2instructions", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-03-20T02:11:42Z
--- license: apache-2.0 tags: - self-instruct - instruction generation - instructiongen datasets: - pszemraj/fleece2instructions metrics: - rouge widget: - text: You'll need to start by choosing the right venue. Consider the type of atmosphere and the size of the area that will be suitable for the number of guests you plan to invite. Choose the right decorations based on your brother's interests, such as balloons in his favorite colors, banners, and streamers. Next, decide on the food and drinks, making sure they are tasty and appropriate for the occasion. Then decide on the other games, music, and entertainment that will make the party memorable. Finally, involve your brother's friends and family to help create the perfect surprise. example_title: birthday party - text: 1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo example_title: ice cream - text: Start by selecting a scale model of a building that fits the theme. Use a hobby knife and glue to cut and assemble the model into a ruined or abandoned version of itself, adding details like broken windows and graffiti. Create a base for the diorama using foam, plaster, or other materials, and paint it to resemble a ruined street or sidewalk. Add miniature vehicles, debris, and figures to complete the scene, and use weathering techniques like dry brushing and rust washes to add realism. Display the diorama in a shadow box or other protective case to showcase your work. example_title: Miniature diorama creation - text: Start by selecting clothing that is futuristic and edgy, such as leather jackets, neon-colored accessories, and tech-inspired patterns. Add accessories like goggles, cybernetic implants, and LED lights to enhance the cyberpunk vibe. Use makeup and body paint to create a futuristic look, such as metallic skin or neon makeup. Consider adding functional elements to your costume, such as a built-in backpack or hidden pockets for your tech gadgets. 
Finally, practice your confident walk and embrace your inner cyberpunk for a memorable and immersive costume experience. example_title: Cyberpunk costume design - text: Start by creating a base terrain with mountains, valleys, and other natural features. Use fractal noise and displacement mapping to add texture and detail to the terrain, and experiment with different materials like rock, grass, and water. Add surreal elements like floating islands, giant mushrooms, or impossible geometry to create a dreamlike atmosphere. Use lighting and color grading to enhance the mood and tone of the scene, and render the final image at a high resolution for maximum impact. Share your surreal landscape with the world and inspire others to explore the possibilities of 3D art. example_title: Surreal 3D landscape creation - text: Start by setting a realistic goal and creating a training plan. Build up your mileage gradually over time, and incorporate cross-training and strength exercises to prevent injury and improve endurance. Be sure to stay hydrated and properly fuel your body with nutritious foods. Listen to your body and adjust your training as needed to avoid overexertion or burnout. Finally, taper your training in the weeks leading up to the race to give your body time to rest and recover before the big day. example_title: Marathon training base_model: google/flan-t5-base model-index: - name: flan-t5-base-instructiongen results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: pszemraj/fleece2instructions type: pszemraj/fleece2instructions split: validation metrics: - type: rouge value: 58.9516 name: Rouge1 --- # flan-t5-base-instructiongen Instead of generating questions from text, generate instructions for LLMs! This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the pszemraj/fleece2instructions dataset.
It achieves the following results on the evaluation set: - Loss: 1.0642 - Rouge1: 58.9516 - Rouge2: 41.8006 - Rougel: 56.8249 - Rougelsum: 56.9171 - Gen Len: 13.1493 ## Intended uses & limitations > Of the three models fine-tuned so far, `flan-t5-base` is in an awkward position where it has the largest model file size, but not the best performance. I'd recommend looking at the two linked below. This is just a `base` FLAN model, and is mostly uploaded for comparison with the [FLAN-small](https://huggingface.co/pszemraj/flan-t5-small-instructiongen) and [bart-base](https://huggingface.co/pszemraj/bart-base-instructiongen) variants. Additionally, it was trained on a dataset of **only** instructions+outputs, with the `inputs` filtered out. This means that text of *1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo* will **not** get you *"Rank the following ice cream flavors: oreo, mint chip, chocolate chip, cookies and cream"* ## Training and evaluation data See the linked dataset `pszemraj/fleece2instructions` - it is a filtered/formatted version of `tatsu-lab/alpaca` to generate instructions for arbitrary text. - Some of the API examples are intentionally weird to demonstrate the generalizability of the model. 
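For reference, a minimal usage sketch (hedged: the generation settings here are illustrative, not the settings used to produce the reported metrics; assumes `transformers` and a PyTorch backend are installed):

```python
# pip install -q transformers accelerate
from transformers import pipeline

model_name = "pszemraj/flan-t5-base-instructiongen"
generator = pipeline("text2text-generation", model_name)

# the model sees only the output text and proposes a plausible instruction for it
text = "1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo"
instruction = generator(text, max_new_tokens=48, num_beams=4)[0]["generated_text"]
print(instruction)
```

Because the model was trained with the `inputs` filtered out, pass only the completed text, not an instruction+input pair.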
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.02 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.1939 | 1.0 | 362 | 1.0822 | 58.1758 | 40.9388 | 56.1219 | 56.2464 | 13.2592 | | 1.1667 | 2.0 | 724 | 1.0642 | 58.9516 | 41.8006 | 56.8249 | 56.9171 | 13.1493 |
postbot/bert_uncased_tiny-multi-emails-hq
postbot
2023-09-23T20:35:44Z
122
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "generated_from_trainer", "en", "dataset:postbot/multi-emails-hq", "base_model:google/bert_uncased_L-2_H-128_A-2", "base_model:finetune:google/bert_uncased_L-2_H-128_A-2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-12T20:53:27Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - postbot/multi-emails-hq metrics: - accuracy pipeline_tag: fill-mask widget: - text: Can you please send me the [MASK] by the end of the day? example_title: end of day - text: I hope this email finds you well. I wanted to follow up on our [MASK] yesterday. example_title: follow-up - text: The meeting has been rescheduled to [MASK]. example_title: reschedule - text: Please let me know if you need any further [MASK] regarding the project. example_title: further help - text: I appreciate your prompt response to my previous email. Can you provide an update on the [MASK] by tomorrow? example_title: provide update - text: Paris is the [MASK] of France. example_title: paris (default) - text: The goal of life is [MASK]. example_title: goal of life (default) base_model: google/bert_uncased_L-2_H-128_A-2 model-index: - name: bert_uncased_L-2_H-128_A-2-mlm-multi-emails-hq results: [] --- # bert_uncased_L-2_H-128_A-2-mlm-multi-emails-hq (BERT-tiny) This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the postbot/multi-emails-hq dataset. It achieves the following results on the evaluation set: - Loss: 3.0981 - Accuracy: 0.4728 ## Model description BERT-tiny fine-tuned on email data for eight epochs.
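For illustration, a minimal fill-mask sketch (assuming `transformers` and a PyTorch backend are installed; the prompt is one of the widget examples above):

```python
# pip install -q transformers
from transformers import pipeline

model_name = "postbot/bert_uncased_tiny-multi-emails-hq"
fill_mask = pipeline("fill-mask", model_name)

# top predictions for the masked token, each with its probability
preds = fill_mask("Can you please send me the [MASK] by the end of the day?")
for p in preds[:3]:
    print(f"{p['token_str']}: {p['score']:.3f}")
```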
## Intended uses & limitations - this is mostly a test ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 8.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.8974 | 0.99 | 141 | 3.5129 | 0.4218 | | 3.7009 | 1.99 | 282 | 3.3295 | 0.4452 | | 3.5845 | 2.99 | 423 | 3.2219 | 0.4589 | | 3.4976 | 3.99 | 564 | 3.1618 | 0.4666 | | 3.4356 | 4.99 | 705 | 3.1002 | 0.4739 | | 3.4493 | 5.99 | 846 | 3.1028 | 0.4746 | | 3.4199 | 6.99 | 987 | 3.0857 | 0.4766 | | 3.4086 | 7.99 | 1128 | 3.0981 | 0.4728 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 2.0.0.dev20230129+cu118 - Datasets 2.8.0 - Tokenizers 0.13.1
postbot/bert_uncased_tiny_2xthicc-multi-emails-hq
postbot
2023-09-23T20:35:32Z
125
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "generated_from_trainer", "en", "dataset:postbot/multi-emails-hq", "base_model:google/bert_uncased_L-4_H-128_A-2", "base_model:finetune:google/bert_uncased_L-4_H-128_A-2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-12T21:06:10Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - postbot/multi-emails-hq metrics: - accuracy pipeline_tag: fill-mask widget: - text: Can you please send me the [MASK] by the end of the day? example_title: end of day - text: I hope this email finds you well. I wanted to follow up on our [MASK] yesterday. example_title: follow-up - text: The meeting has been rescheduled to [MASK]. example_title: reschedule - text: Please let me know if you need any further [MASK] regarding the project. example_title: further help - text: I appreciate your prompt response to my previous email. Can you provide an update on the [MASK] by tomorrow? example_title: provide update - text: Paris is the [MASK] of France. example_title: paris (default) - text: The goal of life is [MASK]. example_title: goal of life (default) base_model: google/bert_uncased_L-4_H-128_A-2 model-index: - name: bert_uncased_L-4_H-128_A-2-mlm-multi-emails-hq results: [] --- # bert_uncased_L-4_H-128_A-2-mlm-multi-emails-hq This model is a fine-tuned version of [google/bert_uncased_L-4_H-128_A-2](https://huggingface.co/google/bert_uncased_L-4_H-128_A-2) on the postbot/multi-emails-hq dataset. It achieves the following results on the evaluation set: - Loss: 2.8524 - Accuracy: 0.5077 ## Model description Double the layers of BERT-tiny, fine-tuned on email data for eight epochs.
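A minimal fill-mask sketch for this checkpoint as well (assuming `transformers` and a PyTorch backend are installed; the prompt is taken from the widget examples above):

```python
# pip install -q transformers
from transformers import pipeline

model_name = "postbot/bert_uncased_tiny_2xthicc-multi-emails-hq"
fill_mask = pipeline("fill-mask", model_name)

# predictions for the masked token, ranked by probability
preds = fill_mask("The meeting has been rescheduled to [MASK].")
for p in preds[:3]:
    print(f"{p['token_str']}: {p['score']:.3f}")
```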
## Intended uses & limitations - This is primarily an example/test ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 8.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.5477 | 0.99 | 141 | 3.2637 | 0.4551 | | 3.3307 | 1.99 | 282 | 3.0873 | 0.4785 | | 3.252 | 2.99 | 423 | 2.9842 | 0.4911 | | 3.1415 | 3.99 | 564 | 2.9230 | 0.4995 | | 3.0903 | 4.99 | 705 | 2.8625 | 0.5070 | | 3.0996 | 5.99 | 846 | 2.8615 | 0.5087 | | 3.0641 | 6.99 | 987 | 2.8407 | 0.5120 | | 3.0514 | 7.99 | 1128 | 2.8524 | 0.5077 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 2.0.0.dev20230129+cu118 - Datasets 2.8.0 - Tokenizers 0.13.1
ad019el/wav2vec2-large-xlsr-53-Tamasheq
ad019el
2023-09-23T20:24:05Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-23T17:11:11Z
--- license: apache-2.0 base_model: facebook/wav2vec2-large-xlsr-53 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-xlsr-53-Tamasheq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-Tamasheq This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8053 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 9.4002 | 17.65 | 300 | 3.3014 | 1.0 | | 3.0514 | 35.29 | 600 | 2.8053 | 1.0 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
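A minimal inference sketch (assumptions: `transformers`, `torch`, and `numpy` are installed; the one-second silent clip is only a placeholder — substitute real mono 16 kHz Tamasheq audio):

```python
# pip install -q transformers torch numpy
import numpy as np
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    "ad019el/wav2vec2-large-xlsr-53-Tamasheq",
)

# placeholder input: 1 second of silence at 16 kHz (float32 mono)
audio = np.zeros(16_000, dtype=np.float32)
result = asr(audio)
print(result["text"])
```

Given the eval WER of 1.0 reported above, transcriptions from this checkpoint should be treated as experimental.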
CyberHarem/tomoe_mami_puellamagimadokamagica
CyberHarem
2023-09-23T20:01:14Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/tomoe_mami_puellamagimadokamagica", "license:mit", "region:us" ]
text-to-image
2023-09-23T19:45:09Z
--- license: mit datasets: - CyberHarem/tomoe_mami_puellamagimadokamagica pipeline_tag: text-to-image tags: - art --- # Lora of tomoe_mami_puellamagimadokamagica This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file is used as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 4600, download `4600/tomoe_mami_puellamagimadokamagica.pt` as the embedding and `4600/tomoe_mami_puellamagimadokamagica.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character. **The recommended step is 4600**, with a score of 0.893. The trigger words are: 1. `tomoe_mami_puellamagimadokamagica` 2. `blonde_hair, drill_hair, twin_drills, twintails, hair_ornament, yellow_eyes, magical_girl` This model is not recommended for the following groups: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5.
Individuals who finds the generated image content offensive to their values. These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 6900 | 0.869 | [Download](6900/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-6900](6900/previews/pattern_1.png) | ![pattern_2-6900](6900/previews/pattern_2.png) | ![pattern_3-6900](6900/previews/pattern_3.png) | ![pattern_4-6900](6900/previews/pattern_4.png) | ![pattern_5-6900](6900/previews/pattern_5.png) | ![pattern_6-6900](6900/previews/pattern_6.png) | ![pattern_7-6900](6900/previews/pattern_7.png) | ![pattern_8-6900](6900/previews/pattern_8.png) | 
![pattern_9-6900](6900/previews/pattern_9.png) | ![pattern_10-6900](6900/previews/pattern_10.png) | ![pattern_11-6900](6900/previews/pattern_11.png) | ![pattern_12-6900](6900/previews/pattern_12.png) | ![pattern_13-6900](6900/previews/pattern_13.png) | ![bikini-6900](6900/previews/bikini.png) | [<NSFW, click to see>](6900/previews/bondage.png) | ![free-6900](6900/previews/free.png) | ![maid-6900](6900/previews/maid.png) | ![miko-6900](6900/previews/miko.png) | [<NSFW, click to see>](6900/previews/nude.png) | [<NSFW, click to see>](6900/previews/nude2.png) | ![suit-6900](6900/previews/suit.png) | ![yukata-6900](6900/previews/yukata.png) | | 6440 | 0.827 | [Download](6440/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-6440](6440/previews/pattern_1.png) | ![pattern_2-6440](6440/previews/pattern_2.png) | ![pattern_3-6440](6440/previews/pattern_3.png) | ![pattern_4-6440](6440/previews/pattern_4.png) | ![pattern_5-6440](6440/previews/pattern_5.png) | ![pattern_6-6440](6440/previews/pattern_6.png) | ![pattern_7-6440](6440/previews/pattern_7.png) | ![pattern_8-6440](6440/previews/pattern_8.png) | ![pattern_9-6440](6440/previews/pattern_9.png) | ![pattern_10-6440](6440/previews/pattern_10.png) | ![pattern_11-6440](6440/previews/pattern_11.png) | ![pattern_12-6440](6440/previews/pattern_12.png) | ![pattern_13-6440](6440/previews/pattern_13.png) | ![bikini-6440](6440/previews/bikini.png) | [<NSFW, click to see>](6440/previews/bondage.png) | ![free-6440](6440/previews/free.png) | ![maid-6440](6440/previews/maid.png) | ![miko-6440](6440/previews/miko.png) | [<NSFW, click to see>](6440/previews/nude.png) | [<NSFW, click to see>](6440/previews/nude2.png) | ![suit-6440](6440/previews/suit.png) | ![yukata-6440](6440/previews/yukata.png) | | 5980 | 0.867 | [Download](5980/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-5980](5980/previews/pattern_1.png) | ![pattern_2-5980](5980/previews/pattern_2.png) | ![pattern_3-5980](5980/previews/pattern_3.png) | 
![pattern_4-5980](5980/previews/pattern_4.png) | ![pattern_5-5980](5980/previews/pattern_5.png) | ![pattern_6-5980](5980/previews/pattern_6.png) | ![pattern_7-5980](5980/previews/pattern_7.png) | ![pattern_8-5980](5980/previews/pattern_8.png) | ![pattern_9-5980](5980/previews/pattern_9.png) | ![pattern_10-5980](5980/previews/pattern_10.png) | ![pattern_11-5980](5980/previews/pattern_11.png) | ![pattern_12-5980](5980/previews/pattern_12.png) | ![pattern_13-5980](5980/previews/pattern_13.png) | ![bikini-5980](5980/previews/bikini.png) | [<NSFW, click to see>](5980/previews/bondage.png) | ![free-5980](5980/previews/free.png) | ![maid-5980](5980/previews/maid.png) | ![miko-5980](5980/previews/miko.png) | [<NSFW, click to see>](5980/previews/nude.png) | [<NSFW, click to see>](5980/previews/nude2.png) | ![suit-5980](5980/previews/suit.png) | ![yukata-5980](5980/previews/yukata.png) | | 5520 | 0.886 | [Download](5520/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-5520](5520/previews/pattern_1.png) | ![pattern_2-5520](5520/previews/pattern_2.png) | ![pattern_3-5520](5520/previews/pattern_3.png) | ![pattern_4-5520](5520/previews/pattern_4.png) | ![pattern_5-5520](5520/previews/pattern_5.png) | ![pattern_6-5520](5520/previews/pattern_6.png) | ![pattern_7-5520](5520/previews/pattern_7.png) | ![pattern_8-5520](5520/previews/pattern_8.png) | ![pattern_9-5520](5520/previews/pattern_9.png) | ![pattern_10-5520](5520/previews/pattern_10.png) | ![pattern_11-5520](5520/previews/pattern_11.png) | ![pattern_12-5520](5520/previews/pattern_12.png) | ![pattern_13-5520](5520/previews/pattern_13.png) | ![bikini-5520](5520/previews/bikini.png) | [<NSFW, click to see>](5520/previews/bondage.png) | ![free-5520](5520/previews/free.png) | ![maid-5520](5520/previews/maid.png) | ![miko-5520](5520/previews/miko.png) | [<NSFW, click to see>](5520/previews/nude.png) | [<NSFW, click to see>](5520/previews/nude2.png) | ![suit-5520](5520/previews/suit.png) | 
![yukata-5520](5520/previews/yukata.png) | | 5060 | 0.881 | [Download](5060/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-5060](5060/previews/pattern_1.png) | ![pattern_2-5060](5060/previews/pattern_2.png) | ![pattern_3-5060](5060/previews/pattern_3.png) | ![pattern_4-5060](5060/previews/pattern_4.png) | ![pattern_5-5060](5060/previews/pattern_5.png) | ![pattern_6-5060](5060/previews/pattern_6.png) | ![pattern_7-5060](5060/previews/pattern_7.png) | ![pattern_8-5060](5060/previews/pattern_8.png) | ![pattern_9-5060](5060/previews/pattern_9.png) | ![pattern_10-5060](5060/previews/pattern_10.png) | ![pattern_11-5060](5060/previews/pattern_11.png) | ![pattern_12-5060](5060/previews/pattern_12.png) | ![pattern_13-5060](5060/previews/pattern_13.png) | ![bikini-5060](5060/previews/bikini.png) | [<NSFW, click to see>](5060/previews/bondage.png) | ![free-5060](5060/previews/free.png) | ![maid-5060](5060/previews/maid.png) | ![miko-5060](5060/previews/miko.png) | [<NSFW, click to see>](5060/previews/nude.png) | [<NSFW, click to see>](5060/previews/nude2.png) | ![suit-5060](5060/previews/suit.png) | ![yukata-5060](5060/previews/yukata.png) | | **4600** | **0.893** | [**Download**](4600/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-4600](4600/previews/pattern_1.png) | ![pattern_2-4600](4600/previews/pattern_2.png) | ![pattern_3-4600](4600/previews/pattern_3.png) | ![pattern_4-4600](4600/previews/pattern_4.png) | ![pattern_5-4600](4600/previews/pattern_5.png) | ![pattern_6-4600](4600/previews/pattern_6.png) | ![pattern_7-4600](4600/previews/pattern_7.png) | ![pattern_8-4600](4600/previews/pattern_8.png) | ![pattern_9-4600](4600/previews/pattern_9.png) | ![pattern_10-4600](4600/previews/pattern_10.png) | ![pattern_11-4600](4600/previews/pattern_11.png) | ![pattern_12-4600](4600/previews/pattern_12.png) | ![pattern_13-4600](4600/previews/pattern_13.png) | ![bikini-4600](4600/previews/bikini.png) | [<NSFW, click to see>](4600/previews/bondage.png) | 
![free-4600](4600/previews/free.png) | ![maid-4600](4600/previews/maid.png) | ![miko-4600](4600/previews/miko.png) | [<NSFW, click to see>](4600/previews/nude.png) | [<NSFW, click to see>](4600/previews/nude2.png) | ![suit-4600](4600/previews/suit.png) | ![yukata-4600](4600/previews/yukata.png) | | 4140 | 0.879 | [Download](4140/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-4140](4140/previews/pattern_1.png) | ![pattern_2-4140](4140/previews/pattern_2.png) | ![pattern_3-4140](4140/previews/pattern_3.png) | ![pattern_4-4140](4140/previews/pattern_4.png) | ![pattern_5-4140](4140/previews/pattern_5.png) | ![pattern_6-4140](4140/previews/pattern_6.png) | ![pattern_7-4140](4140/previews/pattern_7.png) | ![pattern_8-4140](4140/previews/pattern_8.png) | ![pattern_9-4140](4140/previews/pattern_9.png) | ![pattern_10-4140](4140/previews/pattern_10.png) | ![pattern_11-4140](4140/previews/pattern_11.png) | ![pattern_12-4140](4140/previews/pattern_12.png) | ![pattern_13-4140](4140/previews/pattern_13.png) | ![bikini-4140](4140/previews/bikini.png) | [<NSFW, click to see>](4140/previews/bondage.png) | ![free-4140](4140/previews/free.png) | ![maid-4140](4140/previews/maid.png) | ![miko-4140](4140/previews/miko.png) | [<NSFW, click to see>](4140/previews/nude.png) | [<NSFW, click to see>](4140/previews/nude2.png) | ![suit-4140](4140/previews/suit.png) | ![yukata-4140](4140/previews/yukata.png) | | 3680 | 0.833 | [Download](3680/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-3680](3680/previews/pattern_1.png) | ![pattern_2-3680](3680/previews/pattern_2.png) | ![pattern_3-3680](3680/previews/pattern_3.png) | ![pattern_4-3680](3680/previews/pattern_4.png) | ![pattern_5-3680](3680/previews/pattern_5.png) | ![pattern_6-3680](3680/previews/pattern_6.png) | ![pattern_7-3680](3680/previews/pattern_7.png) | ![pattern_8-3680](3680/previews/pattern_8.png) | ![pattern_9-3680](3680/previews/pattern_9.png) | ![pattern_10-3680](3680/previews/pattern_10.png) | 
![pattern_11-3680](3680/previews/pattern_11.png) | ![pattern_12-3680](3680/previews/pattern_12.png) | ![pattern_13-3680](3680/previews/pattern_13.png) | ![bikini-3680](3680/previews/bikini.png) | [<NSFW, click to see>](3680/previews/bondage.png) | ![free-3680](3680/previews/free.png) | ![maid-3680](3680/previews/maid.png) | ![miko-3680](3680/previews/miko.png) | [<NSFW, click to see>](3680/previews/nude.png) | [<NSFW, click to see>](3680/previews/nude2.png) | ![suit-3680](3680/previews/suit.png) | ![yukata-3680](3680/previews/yukata.png) | | 3220 | 0.839 | [Download](3220/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-3220](3220/previews/pattern_1.png) | ![pattern_2-3220](3220/previews/pattern_2.png) | ![pattern_3-3220](3220/previews/pattern_3.png) | ![pattern_4-3220](3220/previews/pattern_4.png) | ![pattern_5-3220](3220/previews/pattern_5.png) | ![pattern_6-3220](3220/previews/pattern_6.png) | ![pattern_7-3220](3220/previews/pattern_7.png) | ![pattern_8-3220](3220/previews/pattern_8.png) | ![pattern_9-3220](3220/previews/pattern_9.png) | ![pattern_10-3220](3220/previews/pattern_10.png) | ![pattern_11-3220](3220/previews/pattern_11.png) | ![pattern_12-3220](3220/previews/pattern_12.png) | ![pattern_13-3220](3220/previews/pattern_13.png) | ![bikini-3220](3220/previews/bikini.png) | [<NSFW, click to see>](3220/previews/bondage.png) | ![free-3220](3220/previews/free.png) | ![maid-3220](3220/previews/maid.png) | ![miko-3220](3220/previews/miko.png) | [<NSFW, click to see>](3220/previews/nude.png) | [<NSFW, click to see>](3220/previews/nude2.png) | ![suit-3220](3220/previews/suit.png) | ![yukata-3220](3220/previews/yukata.png) | | 2760 | 0.821 | [Download](2760/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-2760](2760/previews/pattern_1.png) | ![pattern_2-2760](2760/previews/pattern_2.png) | ![pattern_3-2760](2760/previews/pattern_3.png) | ![pattern_4-2760](2760/previews/pattern_4.png) | ![pattern_5-2760](2760/previews/pattern_5.png) | 
![pattern_6-2760](2760/previews/pattern_6.png) | ![pattern_7-2760](2760/previews/pattern_7.png) | ![pattern_8-2760](2760/previews/pattern_8.png) | ![pattern_9-2760](2760/previews/pattern_9.png) | ![pattern_10-2760](2760/previews/pattern_10.png) | ![pattern_11-2760](2760/previews/pattern_11.png) | ![pattern_12-2760](2760/previews/pattern_12.png) | ![pattern_13-2760](2760/previews/pattern_13.png) | ![bikini-2760](2760/previews/bikini.png) | [<NSFW, click to see>](2760/previews/bondage.png) | ![free-2760](2760/previews/free.png) | ![maid-2760](2760/previews/maid.png) | ![miko-2760](2760/previews/miko.png) | [<NSFW, click to see>](2760/previews/nude.png) | [<NSFW, click to see>](2760/previews/nude2.png) | ![suit-2760](2760/previews/suit.png) | ![yukata-2760](2760/previews/yukata.png) | | 2300 | 0.812 | [Download](2300/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-2300](2300/previews/pattern_1.png) | ![pattern_2-2300](2300/previews/pattern_2.png) | ![pattern_3-2300](2300/previews/pattern_3.png) | ![pattern_4-2300](2300/previews/pattern_4.png) | ![pattern_5-2300](2300/previews/pattern_5.png) | ![pattern_6-2300](2300/previews/pattern_6.png) | ![pattern_7-2300](2300/previews/pattern_7.png) | ![pattern_8-2300](2300/previews/pattern_8.png) | ![pattern_9-2300](2300/previews/pattern_9.png) | ![pattern_10-2300](2300/previews/pattern_10.png) | ![pattern_11-2300](2300/previews/pattern_11.png) | ![pattern_12-2300](2300/previews/pattern_12.png) | ![pattern_13-2300](2300/previews/pattern_13.png) | ![bikini-2300](2300/previews/bikini.png) | [<NSFW, click to see>](2300/previews/bondage.png) | ![free-2300](2300/previews/free.png) | ![maid-2300](2300/previews/maid.png) | ![miko-2300](2300/previews/miko.png) | [<NSFW, click to see>](2300/previews/nude.png) | [<NSFW, click to see>](2300/previews/nude2.png) | ![suit-2300](2300/previews/suit.png) | ![yukata-2300](2300/previews/yukata.png) | | 1840 | 0.707 | [Download](1840/tomoe_mami_puellamagimadokamagica.zip) | 
![pattern_1-1840](1840/previews/pattern_1.png) | ![pattern_2-1840](1840/previews/pattern_2.png) | ![pattern_3-1840](1840/previews/pattern_3.png) | ![pattern_4-1840](1840/previews/pattern_4.png) | ![pattern_5-1840](1840/previews/pattern_5.png) | ![pattern_6-1840](1840/previews/pattern_6.png) | ![pattern_7-1840](1840/previews/pattern_7.png) | ![pattern_8-1840](1840/previews/pattern_8.png) | ![pattern_9-1840](1840/previews/pattern_9.png) | ![pattern_10-1840](1840/previews/pattern_10.png) | ![pattern_11-1840](1840/previews/pattern_11.png) | ![pattern_12-1840](1840/previews/pattern_12.png) | ![pattern_13-1840](1840/previews/pattern_13.png) | ![bikini-1840](1840/previews/bikini.png) | [<NSFW, click to see>](1840/previews/bondage.png) | ![free-1840](1840/previews/free.png) | ![maid-1840](1840/previews/maid.png) | ![miko-1840](1840/previews/miko.png) | [<NSFW, click to see>](1840/previews/nude.png) | [<NSFW, click to see>](1840/previews/nude2.png) | ![suit-1840](1840/previews/suit.png) | ![yukata-1840](1840/previews/yukata.png) | | 1380 | 0.711 | [Download](1380/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-1380](1380/previews/pattern_1.png) | ![pattern_2-1380](1380/previews/pattern_2.png) | ![pattern_3-1380](1380/previews/pattern_3.png) | ![pattern_4-1380](1380/previews/pattern_4.png) | ![pattern_5-1380](1380/previews/pattern_5.png) | ![pattern_6-1380](1380/previews/pattern_6.png) | ![pattern_7-1380](1380/previews/pattern_7.png) | ![pattern_8-1380](1380/previews/pattern_8.png) | ![pattern_9-1380](1380/previews/pattern_9.png) | ![pattern_10-1380](1380/previews/pattern_10.png) | ![pattern_11-1380](1380/previews/pattern_11.png) | ![pattern_12-1380](1380/previews/pattern_12.png) | ![pattern_13-1380](1380/previews/pattern_13.png) | ![bikini-1380](1380/previews/bikini.png) | [<NSFW, click to see>](1380/previews/bondage.png) | ![free-1380](1380/previews/free.png) | ![maid-1380](1380/previews/maid.png) | ![miko-1380](1380/previews/miko.png) | [<NSFW, click to 
see>](1380/previews/nude.png) | [<NSFW, click to see>](1380/previews/nude2.png) | ![suit-1380](1380/previews/suit.png) | ![yukata-1380](1380/previews/yukata.png) | | 920 | 0.580 | [Download](920/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-920](920/previews/pattern_1.png) | ![pattern_2-920](920/previews/pattern_2.png) | ![pattern_3-920](920/previews/pattern_3.png) | ![pattern_4-920](920/previews/pattern_4.png) | ![pattern_5-920](920/previews/pattern_5.png) | ![pattern_6-920](920/previews/pattern_6.png) | ![pattern_7-920](920/previews/pattern_7.png) | ![pattern_8-920](920/previews/pattern_8.png) | ![pattern_9-920](920/previews/pattern_9.png) | ![pattern_10-920](920/previews/pattern_10.png) | ![pattern_11-920](920/previews/pattern_11.png) | ![pattern_12-920](920/previews/pattern_12.png) | ![pattern_13-920](920/previews/pattern_13.png) | ![bikini-920](920/previews/bikini.png) | [<NSFW, click to see>](920/previews/bondage.png) | ![free-920](920/previews/free.png) | ![maid-920](920/previews/maid.png) | ![miko-920](920/previews/miko.png) | [<NSFW, click to see>](920/previews/nude.png) | [<NSFW, click to see>](920/previews/nude2.png) | ![suit-920](920/previews/suit.png) | ![yukata-920](920/previews/yukata.png) | | 460 | 0.405 | [Download](460/tomoe_mami_puellamagimadokamagica.zip) | ![pattern_1-460](460/previews/pattern_1.png) | ![pattern_2-460](460/previews/pattern_2.png) | ![pattern_3-460](460/previews/pattern_3.png) | ![pattern_4-460](460/previews/pattern_4.png) | ![pattern_5-460](460/previews/pattern_5.png) | ![pattern_6-460](460/previews/pattern_6.png) | ![pattern_7-460](460/previews/pattern_7.png) | ![pattern_8-460](460/previews/pattern_8.png) | ![pattern_9-460](460/previews/pattern_9.png) | ![pattern_10-460](460/previews/pattern_10.png) | ![pattern_11-460](460/previews/pattern_11.png) | ![pattern_12-460](460/previews/pattern_12.png) | ![pattern_13-460](460/previews/pattern_13.png) | ![bikini-460](460/previews/bikini.png) | [<NSFW, click to 
see>](460/previews/bondage.png) | ![free-460](460/previews/free.png) | ![maid-460](460/previews/maid.png) | ![miko-460](460/previews/miko.png) | [<NSFW, click to see>](460/previews/nude.png) | [<NSFW, click to see>](460/previews/nude2.png) | ![suit-460](460/previews/suit.png) | ![yukata-460](460/previews/yukata.png) |
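The pt-plus-safetensors usage the card above describes can be sketched with `diffusers`. This is a minimal sketch, not the card's own code: the helper names are mine, and it assumes both files have been downloaded into a local `<step>/` folder as laid out in the table.

```python
from pathlib import Path

def artifact_paths(step: int, name: str) -> tuple[str, str]:
    """Build the (embedding .pt, LoRA .safetensors) paths for one training step."""
    base = Path(str(step))
    return str(base / f"{name}.pt"), str(base / f"{name}.safetensors")

def load_character(pipe, step: int = 4600,
                   name: str = "tomoe_mami_puellamagimadokamagica"):
    """Attach both files to a diffusers pipeline, as the card requires."""
    pt_file, lora_file = artifact_paths(step, name)
    pipe.load_textual_inversion(pt_file, token=name)                      # the embedding
    pipe.load_lora_weights(str(step), weight_name=Path(lora_file).name)  # the LoRA
    return pipe
```

After `load_character`, prompting with the trigger words listed above should activate the character.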
Casual111/KinnersVM
Casual111
2023-09-23T19:27:10Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-09-23T19:26:41Z
--- license: bigscience-openrail-m ---
Koshti10/BART_large_Gameplan
Koshti10
2023-09-23T19:14:02Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large", "base_model:finetune:facebook/bart-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-23T15:28:40Z
--- license: apache-2.0 base_model: facebook/bart-large tags: - generated_from_trainer model-index: - name: BART_large_Gameplan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BART_large_Gameplan This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - label_smoothing_factor: 0.1 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
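The `label_smoothing_factor: 0.1` listed above softens the one-hot training targets. A minimal sketch of the classic formulation (the HF Trainer's internal implementation differs in detail, so treat this as illustrative):

```python
def smooth_one_hot(num_classes: int, target: int, eps: float = 0.1):
    """Move `eps` of the probability mass off the target class and
    spread it uniformly over all classes."""
    off_value = eps / num_classes
    dist = [off_value] * num_classes
    dist[target] += 1.0 - eps
    return dist

# With eps=0.1 and 4 classes, the target keeps 0.925 and each class gets 0.025 extra.
```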
LarryAIDraw/nezuko-08
LarryAIDraw
2023-09-23T19:11:02Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-23T19:01:33Z
--- license: creativeml-openrail-m --- https://civitai.com/models/149998/nezuko-kamado-demon-slayer-lora
zongxiao/wav2vec2-base-finetuned-gtzan
zongxiao
2023-09-23T19:07:33Z
159
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-09-23T16:08:57Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.92 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-gtzan This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.4965 - Accuracy: 0.92 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1429 | 1.0 | 224 | 0.6629 | 0.9 | | 0.0037 | 2.0 | 449 | 0.5910 | 0.92 | | 0.0041 | 3.0 | 674 | 0.5544 | 0.91 | | 0.003 | 4.0 | 899 | 0.5046 | 0.92 | | 0.1445 | 4.98 | 1120 | 0.4965 | 0.92 | ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
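The `total_train_batch_size: 4` in the card above is a derived value, not an independent setting: with gradient accumulation, optimizer updates happen only every N micro-batches.

```python
train_batch_size = 1               # per-device batch size from the card
gradient_accumulation_steps = 4    # from the card
# Gradients are accumulated over 4 micro-batches before each optimizer
# step, so the optimizer sees an effective batch of 4 examples.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```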
LarryAIDraw/Amy
LarryAIDraw
2023-09-23T19:00:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-23T18:42:32Z
--- license: creativeml-openrail-m --- https://civitai.com/models/150254/amy-karmis-or-manhwa-or-infinite-mage
vaalto/test_trainer
vaalto
2023-09-23T18:59:07Z
181
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-23T18:58:21Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: test_trainer results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.82 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.5366 - Accuracy: 0.82 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 25 | 0.6720 | 0.58 | | No log | 2.0 | 50 | 0.5366 | 0.82 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cpu - Datasets 2.14.5 - Tokenizers 0.13.3
LarryAIDraw/centi_supreme-holidayv2
LarryAIDraw
2023-09-23T18:58:27Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-23T18:40:47Z
--- license: creativeml-openrail-m --- https://civitai.com/models/143357/centi-supreme-holiday-nikke
LarryAIDraw/olyaya-v100-000018
LarryAIDraw
2023-09-23T18:58:01Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-23T18:39:32Z
--- license: creativeml-openrail-m --- https://civitai.com/models/95243?modelVersionId=167980
santif/codellama2-qlora-finetuned-guanaco
santif
2023-09-23T18:57:13Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-23T18:56:56Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
keshav5196/RL-taxi
keshav5196
2023-09-23T18:52:54Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T18:52:52Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: RL-taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym  # `load_from_hub` is assumed to come from the course's helper utilities model = load_from_hub(repo_id="keshav5196/RL-taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
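Once loaded, the pickled `model` above is used by acting greedily over its Q-table. A self-contained sketch of that selection step (the `qtable` key and the list-of-lists layout are assumptions from the course template, not stated by the card):

```python
def greedy_action(qtable, state):
    """Return the index of the highest-valued action for `state`."""
    row = qtable[state]
    return max(range(len(row)), key=lambda action: row[action])

# Toy Q-table: 2 states x 3 actions.
toy_qtable = [[0.0, 1.5, 0.2],
              [0.7, 0.1, 0.0]]
# In evaluation, each step would do something like:
#   action = greedy_action(model["qtable"], state)
```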
keshav5196/q-FrozenLake-v1-4x4-noSlippery
keshav5196
2023-09-23T18:45:44Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T18:45:41Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym  # `load_from_hub` is assumed to come from the course's helper utilities model = load_from_hub(repo_id="keshav5196/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
RocioUrquijo/clasificador-muchocine
RocioUrquijo
2023-09-23T18:36:11Z
108
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "classification", "generated_from_trainer", "base_model:mrm8488/electricidad-base-discriminator", "base_model:finetune:mrm8488/electricidad-base-discriminator", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-08T10:22:32Z
--- base_model: mrm8488/electricidad-base-discriminator tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4367 - Accuracy: 0.4206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.3247 | 0.4090 | | 1.3945 | 2.0 | 776 | 1.3297 | 0.4413 | | 1.0081 | 3.0 | 1164 | 1.4367 | 0.4206 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
fowlart99/hihi
fowlart99
2023-09-23T18:29:49Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-09-23T18:29:49Z
--- license: bigscience-openrail-m ---
Kiwihead15/bert-finetuned-ner
Kiwihead15
2023-09-23T18:15:32Z
106
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-23T17:50:48Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9319907560250907 - name: Recall type: recall value: 0.9501851228542578 - name: F1 type: f1 value: 0.941 - name: Accuracy type: accuracy value: 0.9863277800671102 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0594 - Precision: 0.9320 - Recall: 0.9502 - F1: 0.941 - Accuracy: 0.9863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0798 | 1.0 | 1756 | 0.0771 | 0.9016 | 0.9342 | 0.9176 | 0.9797 | | 0.0404 | 2.0 | 3512 | 0.0595 | 0.9242 | 0.9478 | 0.9359 | 0.9851 | | 0.0247 | 3.0 | 5268 | 0.0594 | 0.9320 | 0.9502 | 0.941 | 0.9863 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
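The three metrics reported above are internally consistent: F1 is the harmonic mean of precision and recall, which can be checked directly against the card's own numbers.

```python
precision = 0.9319907560250907  # from the card's model-index
recall = 0.9501851228542578
f1 = 2 * precision * recall / (precision + recall)
# Agrees with the card's reported F1 of 0.941 (to three decimals).
```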
devananda04/my-pet-horse-xzg
devananda04
2023-09-23T18:15:03Z
1
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-23T18:09:58Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Horse-xzg Dreambooth model trained by devananda04 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VAST-215 Sample pictures of this concept:
EladAssia/Reinforce-CartPole-v1
EladAssia
2023-09-23T18:13:52Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T18:13:41Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
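REINFORCE, as taught in the linked course unit, weights each action's log-probability by the discounted return from that step onward. The return computation can be sketched independently of any environment:

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed right-to-left over one episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# With gamma=0.5 and rewards [1, 1, 1] the returns are [1.75, 1.5, 1.0].
```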
CyberHarem/akemi_homura_puellamagimadokamagica
CyberHarem
2023-09-23T17:51:13Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/akemi_homura_puellamagimadokamagica", "license:mit", "region:us" ]
text-to-image
2023-09-23T17:35:10Z
--- license: mit datasets: - CyberHarem/akemi_homura_puellamagimadokamagica pipeline_tag: text-to-image tags: - art --- # Lora of akemi_homura_puellamagimadokamagica This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 8400, download `8400/akemi_homura_puellamagimadokamagica.pt` as the embedding and `8400/akemi_homura_puellamagimadokamagica.safetensors` as the LoRA. Using both files together, you can generate images of the desired character. **The best step we recommend is 8400**, with a score of 0.851. The trigger words are: 1. `akemi_homura_puellamagimadokamagica` 2. `black_hair, long_hair, hairband, purple_eyes, blush` This model is not recommended for the following groups: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely by manual operations to avoid disrespecting the characters. 5. 
Individuals who finds the generated image content offensive to their values. These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | **8400** | **0.851** | [**Download**](8400/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-8400](8400/previews/pattern_1.png) | ![pattern_2-8400](8400/previews/pattern_2.png) | ![pattern_3-8400](8400/previews/pattern_3.png) | ![pattern_4-8400](8400/previews/pattern_4.png) | ![pattern_5-8400](8400/previews/pattern_5.png) | ![pattern_6-8400](8400/previews/pattern_6.png) | ![pattern_7-8400](8400/previews/pattern_7.png) | 
![pattern_8-8400](8400/previews/pattern_8.png) | ![pattern_9-8400](8400/previews/pattern_9.png) | ![pattern_10-8400](8400/previews/pattern_10.png) | ![pattern_11-8400](8400/previews/pattern_11.png) | ![pattern_12-8400](8400/previews/pattern_12.png) | ![pattern_13-8400](8400/previews/pattern_13.png) | ![pattern_14-8400](8400/previews/pattern_14.png) | ![bikini-8400](8400/previews/bikini.png) | [<NSFW, click to see>](8400/previews/bondage.png) | ![free-8400](8400/previews/free.png) | ![maid-8400](8400/previews/maid.png) | ![miko-8400](8400/previews/miko.png) | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) | ![suit-8400](8400/previews/suit.png) | ![yukata-8400](8400/previews/yukata.png) | | 7840 | 0.841 | [Download](7840/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-7840](7840/previews/pattern_1.png) | ![pattern_2-7840](7840/previews/pattern_2.png) | ![pattern_3-7840](7840/previews/pattern_3.png) | ![pattern_4-7840](7840/previews/pattern_4.png) | ![pattern_5-7840](7840/previews/pattern_5.png) | ![pattern_6-7840](7840/previews/pattern_6.png) | ![pattern_7-7840](7840/previews/pattern_7.png) | ![pattern_8-7840](7840/previews/pattern_8.png) | ![pattern_9-7840](7840/previews/pattern_9.png) | ![pattern_10-7840](7840/previews/pattern_10.png) | ![pattern_11-7840](7840/previews/pattern_11.png) | ![pattern_12-7840](7840/previews/pattern_12.png) | ![pattern_13-7840](7840/previews/pattern_13.png) | ![pattern_14-7840](7840/previews/pattern_14.png) | ![bikini-7840](7840/previews/bikini.png) | [<NSFW, click to see>](7840/previews/bondage.png) | ![free-7840](7840/previews/free.png) | ![maid-7840](7840/previews/maid.png) | ![miko-7840](7840/previews/miko.png) | [<NSFW, click to see>](7840/previews/nude.png) | [<NSFW, click to see>](7840/previews/nude2.png) | ![suit-7840](7840/previews/suit.png) | ![yukata-7840](7840/previews/yukata.png) | | 7280 | 0.810 | [Download](7280/akemi_homura_puellamagimadokamagica.zip) | 
![pattern_1-7280](7280/previews/pattern_1.png) | ![pattern_2-7280](7280/previews/pattern_2.png) | ![pattern_3-7280](7280/previews/pattern_3.png) | ![pattern_4-7280](7280/previews/pattern_4.png) | ![pattern_5-7280](7280/previews/pattern_5.png) | ![pattern_6-7280](7280/previews/pattern_6.png) | ![pattern_7-7280](7280/previews/pattern_7.png) | ![pattern_8-7280](7280/previews/pattern_8.png) | ![pattern_9-7280](7280/previews/pattern_9.png) | ![pattern_10-7280](7280/previews/pattern_10.png) | ![pattern_11-7280](7280/previews/pattern_11.png) | ![pattern_12-7280](7280/previews/pattern_12.png) | ![pattern_13-7280](7280/previews/pattern_13.png) | ![pattern_14-7280](7280/previews/pattern_14.png) | ![bikini-7280](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) | ![free-7280](7280/previews/free.png) | ![maid-7280](7280/previews/maid.png) | ![miko-7280](7280/previews/miko.png) | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) | ![suit-7280](7280/previews/suit.png) | ![yukata-7280](7280/previews/yukata.png) | | 6720 | 0.834 | [Download](6720/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-6720](6720/previews/pattern_1.png) | ![pattern_2-6720](6720/previews/pattern_2.png) | ![pattern_3-6720](6720/previews/pattern_3.png) | ![pattern_4-6720](6720/previews/pattern_4.png) | ![pattern_5-6720](6720/previews/pattern_5.png) | ![pattern_6-6720](6720/previews/pattern_6.png) | ![pattern_7-6720](6720/previews/pattern_7.png) | ![pattern_8-6720](6720/previews/pattern_8.png) | ![pattern_9-6720](6720/previews/pattern_9.png) | ![pattern_10-6720](6720/previews/pattern_10.png) | ![pattern_11-6720](6720/previews/pattern_11.png) | ![pattern_12-6720](6720/previews/pattern_12.png) | ![pattern_13-6720](6720/previews/pattern_13.png) | ![pattern_14-6720](6720/previews/pattern_14.png) | ![bikini-6720](6720/previews/bikini.png) | [<NSFW, click to see>](6720/previews/bondage.png) | ![free-6720](6720/previews/free.png) | 
![maid-6720](6720/previews/maid.png) | ![miko-6720](6720/previews/miko.png) | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) | ![suit-6720](6720/previews/suit.png) | ![yukata-6720](6720/previews/yukata.png) | | 6160 | 0.844 | [Download](6160/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-6160](6160/previews/pattern_1.png) | ![pattern_2-6160](6160/previews/pattern_2.png) | ![pattern_3-6160](6160/previews/pattern_3.png) | ![pattern_4-6160](6160/previews/pattern_4.png) | ![pattern_5-6160](6160/previews/pattern_5.png) | ![pattern_6-6160](6160/previews/pattern_6.png) | ![pattern_7-6160](6160/previews/pattern_7.png) | ![pattern_8-6160](6160/previews/pattern_8.png) | ![pattern_9-6160](6160/previews/pattern_9.png) | ![pattern_10-6160](6160/previews/pattern_10.png) | ![pattern_11-6160](6160/previews/pattern_11.png) | ![pattern_12-6160](6160/previews/pattern_12.png) | ![pattern_13-6160](6160/previews/pattern_13.png) | ![pattern_14-6160](6160/previews/pattern_14.png) | ![bikini-6160](6160/previews/bikini.png) | [<NSFW, click to see>](6160/previews/bondage.png) | ![free-6160](6160/previews/free.png) | ![maid-6160](6160/previews/maid.png) | ![miko-6160](6160/previews/miko.png) | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) | ![suit-6160](6160/previews/suit.png) | ![yukata-6160](6160/previews/yukata.png) | | 5600 | 0.835 | [Download](5600/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-5600](5600/previews/pattern_1.png) | ![pattern_2-5600](5600/previews/pattern_2.png) | ![pattern_3-5600](5600/previews/pattern_3.png) | ![pattern_4-5600](5600/previews/pattern_4.png) | ![pattern_5-5600](5600/previews/pattern_5.png) | ![pattern_6-5600](5600/previews/pattern_6.png) | ![pattern_7-5600](5600/previews/pattern_7.png) | ![pattern_8-5600](5600/previews/pattern_8.png) | ![pattern_9-5600](5600/previews/pattern_9.png) | ![pattern_10-5600](5600/previews/pattern_10.png) | 
![pattern_11-5600](5600/previews/pattern_11.png) | ![pattern_12-5600](5600/previews/pattern_12.png) | ![pattern_13-5600](5600/previews/pattern_13.png) | ![pattern_14-5600](5600/previews/pattern_14.png) | ![bikini-5600](5600/previews/bikini.png) | [<NSFW, click to see>](5600/previews/bondage.png) | ![free-5600](5600/previews/free.png) | ![maid-5600](5600/previews/maid.png) | ![miko-5600](5600/previews/miko.png) | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) | ![suit-5600](5600/previews/suit.png) | ![yukata-5600](5600/previews/yukata.png) | | 5040 | 0.826 | [Download](5040/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-5040](5040/previews/pattern_1.png) | ![pattern_2-5040](5040/previews/pattern_2.png) | ![pattern_3-5040](5040/previews/pattern_3.png) | ![pattern_4-5040](5040/previews/pattern_4.png) | ![pattern_5-5040](5040/previews/pattern_5.png) | ![pattern_6-5040](5040/previews/pattern_6.png) | ![pattern_7-5040](5040/previews/pattern_7.png) | ![pattern_8-5040](5040/previews/pattern_8.png) | ![pattern_9-5040](5040/previews/pattern_9.png) | ![pattern_10-5040](5040/previews/pattern_10.png) | ![pattern_11-5040](5040/previews/pattern_11.png) | ![pattern_12-5040](5040/previews/pattern_12.png) | ![pattern_13-5040](5040/previews/pattern_13.png) | ![pattern_14-5040](5040/previews/pattern_14.png) | ![bikini-5040](5040/previews/bikini.png) | [<NSFW, click to see>](5040/previews/bondage.png) | ![free-5040](5040/previews/free.png) | ![maid-5040](5040/previews/maid.png) | ![miko-5040](5040/previews/miko.png) | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) | ![suit-5040](5040/previews/suit.png) | ![yukata-5040](5040/previews/yukata.png) | | 4480 | 0.834 | [Download](4480/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-4480](4480/previews/pattern_1.png) | ![pattern_2-4480](4480/previews/pattern_2.png) | ![pattern_3-4480](4480/previews/pattern_3.png) | 
![pattern_4-4480](4480/previews/pattern_4.png) | ![pattern_5-4480](4480/previews/pattern_5.png) | ![pattern_6-4480](4480/previews/pattern_6.png) | ![pattern_7-4480](4480/previews/pattern_7.png) | ![pattern_8-4480](4480/previews/pattern_8.png) | ![pattern_9-4480](4480/previews/pattern_9.png) | ![pattern_10-4480](4480/previews/pattern_10.png) | ![pattern_11-4480](4480/previews/pattern_11.png) | ![pattern_12-4480](4480/previews/pattern_12.png) | ![pattern_13-4480](4480/previews/pattern_13.png) | ![pattern_14-4480](4480/previews/pattern_14.png) | ![bikini-4480](4480/previews/bikini.png) | [<NSFW, click to see>](4480/previews/bondage.png) | ![free-4480](4480/previews/free.png) | ![maid-4480](4480/previews/maid.png) | ![miko-4480](4480/previews/miko.png) | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) | ![suit-4480](4480/previews/suit.png) | ![yukata-4480](4480/previews/yukata.png) | | 3920 | 0.815 | [Download](3920/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-3920](3920/previews/pattern_1.png) | ![pattern_2-3920](3920/previews/pattern_2.png) | ![pattern_3-3920](3920/previews/pattern_3.png) | ![pattern_4-3920](3920/previews/pattern_4.png) | ![pattern_5-3920](3920/previews/pattern_5.png) | ![pattern_6-3920](3920/previews/pattern_6.png) | ![pattern_7-3920](3920/previews/pattern_7.png) | ![pattern_8-3920](3920/previews/pattern_8.png) | ![pattern_9-3920](3920/previews/pattern_9.png) | ![pattern_10-3920](3920/previews/pattern_10.png) | ![pattern_11-3920](3920/previews/pattern_11.png) | ![pattern_12-3920](3920/previews/pattern_12.png) | ![pattern_13-3920](3920/previews/pattern_13.png) | ![pattern_14-3920](3920/previews/pattern_14.png) | ![bikini-3920](3920/previews/bikini.png) | [<NSFW, click to see>](3920/previews/bondage.png) | ![free-3920](3920/previews/free.png) | ![maid-3920](3920/previews/maid.png) | ![miko-3920](3920/previews/miko.png) | [<NSFW, click to see>](3920/previews/nude.png) | [<NSFW, click to 
see>](3920/previews/nude2.png) | ![suit-3920](3920/previews/suit.png) | ![yukata-3920](3920/previews/yukata.png) | | 3360 | 0.834 | [Download](3360/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-3360](3360/previews/pattern_1.png) | ![pattern_2-3360](3360/previews/pattern_2.png) | ![pattern_3-3360](3360/previews/pattern_3.png) | ![pattern_4-3360](3360/previews/pattern_4.png) | ![pattern_5-3360](3360/previews/pattern_5.png) | ![pattern_6-3360](3360/previews/pattern_6.png) | ![pattern_7-3360](3360/previews/pattern_7.png) | ![pattern_8-3360](3360/previews/pattern_8.png) | ![pattern_9-3360](3360/previews/pattern_9.png) | ![pattern_10-3360](3360/previews/pattern_10.png) | ![pattern_11-3360](3360/previews/pattern_11.png) | ![pattern_12-3360](3360/previews/pattern_12.png) | ![pattern_13-3360](3360/previews/pattern_13.png) | ![pattern_14-3360](3360/previews/pattern_14.png) | ![bikini-3360](3360/previews/bikini.png) | [<NSFW, click to see>](3360/previews/bondage.png) | ![free-3360](3360/previews/free.png) | ![maid-3360](3360/previews/maid.png) | ![miko-3360](3360/previews/miko.png) | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) | ![suit-3360](3360/previews/suit.png) | ![yukata-3360](3360/previews/yukata.png) | | 2800 | 0.821 | [Download](2800/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-2800](2800/previews/pattern_1.png) | ![pattern_2-2800](2800/previews/pattern_2.png) | ![pattern_3-2800](2800/previews/pattern_3.png) | ![pattern_4-2800](2800/previews/pattern_4.png) | ![pattern_5-2800](2800/previews/pattern_5.png) | ![pattern_6-2800](2800/previews/pattern_6.png) | ![pattern_7-2800](2800/previews/pattern_7.png) | ![pattern_8-2800](2800/previews/pattern_8.png) | ![pattern_9-2800](2800/previews/pattern_9.png) | ![pattern_10-2800](2800/previews/pattern_10.png) | ![pattern_11-2800](2800/previews/pattern_11.png) | ![pattern_12-2800](2800/previews/pattern_12.png) | ![pattern_13-2800](2800/previews/pattern_13.png) 
| ![pattern_14-2800](2800/previews/pattern_14.png) | ![bikini-2800](2800/previews/bikini.png) | [<NSFW, click to see>](2800/previews/bondage.png) | ![free-2800](2800/previews/free.png) | ![maid-2800](2800/previews/maid.png) | ![miko-2800](2800/previews/miko.png) | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) | ![suit-2800](2800/previews/suit.png) | ![yukata-2800](2800/previews/yukata.png) | | 2240 | 0.804 | [Download](2240/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-2240](2240/previews/pattern_1.png) | ![pattern_2-2240](2240/previews/pattern_2.png) | ![pattern_3-2240](2240/previews/pattern_3.png) | ![pattern_4-2240](2240/previews/pattern_4.png) | ![pattern_5-2240](2240/previews/pattern_5.png) | ![pattern_6-2240](2240/previews/pattern_6.png) | ![pattern_7-2240](2240/previews/pattern_7.png) | ![pattern_8-2240](2240/previews/pattern_8.png) | ![pattern_9-2240](2240/previews/pattern_9.png) | ![pattern_10-2240](2240/previews/pattern_10.png) | ![pattern_11-2240](2240/previews/pattern_11.png) | ![pattern_12-2240](2240/previews/pattern_12.png) | ![pattern_13-2240](2240/previews/pattern_13.png) | ![pattern_14-2240](2240/previews/pattern_14.png) | ![bikini-2240](2240/previews/bikini.png) | [<NSFW, click to see>](2240/previews/bondage.png) | ![free-2240](2240/previews/free.png) | ![maid-2240](2240/previews/maid.png) | ![miko-2240](2240/previews/miko.png) | [<NSFW, click to see>](2240/previews/nude.png) | [<NSFW, click to see>](2240/previews/nude2.png) | ![suit-2240](2240/previews/suit.png) | ![yukata-2240](2240/previews/yukata.png) | | 1680 | 0.740 | [Download](1680/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-1680](1680/previews/pattern_1.png) | ![pattern_2-1680](1680/previews/pattern_2.png) | ![pattern_3-1680](1680/previews/pattern_3.png) | ![pattern_4-1680](1680/previews/pattern_4.png) | ![pattern_5-1680](1680/previews/pattern_5.png) | ![pattern_6-1680](1680/previews/pattern_6.png) | 
![pattern_7-1680](1680/previews/pattern_7.png) | ![pattern_8-1680](1680/previews/pattern_8.png) | ![pattern_9-1680](1680/previews/pattern_9.png) | ![pattern_10-1680](1680/previews/pattern_10.png) | ![pattern_11-1680](1680/previews/pattern_11.png) | ![pattern_12-1680](1680/previews/pattern_12.png) | ![pattern_13-1680](1680/previews/pattern_13.png) | ![pattern_14-1680](1680/previews/pattern_14.png) | ![bikini-1680](1680/previews/bikini.png) | [<NSFW, click to see>](1680/previews/bondage.png) | ![free-1680](1680/previews/free.png) | ![maid-1680](1680/previews/maid.png) | ![miko-1680](1680/previews/miko.png) | [<NSFW, click to see>](1680/previews/nude.png) | [<NSFW, click to see>](1680/previews/nude2.png) | ![suit-1680](1680/previews/suit.png) | ![yukata-1680](1680/previews/yukata.png) | | 1120 | 0.744 | [Download](1120/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-1120](1120/previews/pattern_1.png) | ![pattern_2-1120](1120/previews/pattern_2.png) | ![pattern_3-1120](1120/previews/pattern_3.png) | ![pattern_4-1120](1120/previews/pattern_4.png) | ![pattern_5-1120](1120/previews/pattern_5.png) | ![pattern_6-1120](1120/previews/pattern_6.png) | ![pattern_7-1120](1120/previews/pattern_7.png) | ![pattern_8-1120](1120/previews/pattern_8.png) | ![pattern_9-1120](1120/previews/pattern_9.png) | ![pattern_10-1120](1120/previews/pattern_10.png) | ![pattern_11-1120](1120/previews/pattern_11.png) | ![pattern_12-1120](1120/previews/pattern_12.png) | ![pattern_13-1120](1120/previews/pattern_13.png) | ![pattern_14-1120](1120/previews/pattern_14.png) | ![bikini-1120](1120/previews/bikini.png) | [<NSFW, click to see>](1120/previews/bondage.png) | ![free-1120](1120/previews/free.png) | ![maid-1120](1120/previews/maid.png) | ![miko-1120](1120/previews/miko.png) | [<NSFW, click to see>](1120/previews/nude.png) | [<NSFW, click to see>](1120/previews/nude2.png) | ![suit-1120](1120/previews/suit.png) | ![yukata-1120](1120/previews/yukata.png) | | 560 | 0.634 | 
[Download](560/akemi_homura_puellamagimadokamagica.zip) | ![pattern_1-560](560/previews/pattern_1.png) | ![pattern_2-560](560/previews/pattern_2.png) | ![pattern_3-560](560/previews/pattern_3.png) | ![pattern_4-560](560/previews/pattern_4.png) | ![pattern_5-560](560/previews/pattern_5.png) | ![pattern_6-560](560/previews/pattern_6.png) | ![pattern_7-560](560/previews/pattern_7.png) | ![pattern_8-560](560/previews/pattern_8.png) | ![pattern_9-560](560/previews/pattern_9.png) | ![pattern_10-560](560/previews/pattern_10.png) | ![pattern_11-560](560/previews/pattern_11.png) | ![pattern_12-560](560/previews/pattern_12.png) | ![pattern_13-560](560/previews/pattern_13.png) | ![pattern_14-560](560/previews/pattern_14.png) | ![bikini-560](560/previews/bikini.png) | [<NSFW, click to see>](560/previews/bondage.png) | ![free-560](560/previews/free.png) | ![maid-560](560/previews/maid.png) | ![miko-560](560/previews/miko.png) | [<NSFW, click to see>](560/previews/nude.png) | [<NSFW, click to see>](560/previews/nude2.png) | ![suit-560](560/previews/suit.png) | ![yukata-560](560/previews/yukata.png) |
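The two-file loading step described in the card above (pt embedding plus safetensors LoRA) can be sketched as follows. This is a minimal illustration, not the HCP-Diffusion loader itself: the `checkpoint_files` helper is hypothetical, and `load_into_pipeline` assumes a diffusers `StableDiffusionPipeline`, which can load `.pt` textual-inversion embeddings and LoRA safetensors.

```python
from pathlib import Path

def checkpoint_files(character: str, step: int) -> tuple[str, str]:
    """Build the (embedding, lora) file pair for a given training step.

    Hypothetical helper mirroring the naming scheme in the card:
    `<step>/<character>.pt` and `<step>/<character>.safetensors`.
    """
    stem = f"{step}/{character}"
    return f"{stem}.pt", f"{stem}.safetensors"

emb_path, lora_path = checkpoint_files("akemi_homura_puellamagimadokamagica", 8400)

def load_into_pipeline(pipe, emb_path: str, lora_path: str):
    """Attach both files to a diffusers pipeline (assumption: recent diffusers)."""
    pipe.load_textual_inversion(emb_path)  # the pt file acts as an embedding
    pipe.load_lora_weights(str(Path(lora_path).parent),
                           weight_name=Path(lora_path).name)
    return pipe
```

With both files attached, prompting with the trigger words listed in the card should steer generation toward the character.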
learn3r/longt5_xl_summ_screen_20
learn3r
2023-09-23T17:40:48Z
10
0
transformers
[ "transformers", "pytorch", "longt5", "text2text-generation", "generated_from_trainer", "dataset:tau/scrolls", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-18T14:06:25Z
--- base_model: /exports/eddie/scratch/s1970716/models/summarization/longt5_xl_summ_screen/checkpoint-140 tags: - generated_from_trainer datasets: - tau/scrolls metrics: - rouge model-index: - name: longt5_xl_summ_screen_20 results: - task: name: Summarization type: summarization dataset: name: tau/scrolls summ_screen_fd type: tau/scrolls config: summ_screen_fd split: validation args: summ_screen_fd metrics: - name: Rouge1 type: rouge value: 28.1708 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longt5_xl_summ_screen_20 This model is a fine-tuned version of [/exports/eddie/scratch/s1970716/models/summarization/longt5_xl_summ_screen/checkpoint-140](https://huggingface.co//exports/eddie/scratch/s1970716/models/summarization/longt5_xl_summ_screen/checkpoint-140) on the tau/scrolls summ_screen_fd dataset. It achieves the following results on the evaluation set: - Loss: 3.1917 - Rouge1: 28.1708 - Rouge2: 6.6895 - Rougel: 18.1637 - Rougelsum: 24.3987 - Gen Len: 96.2041 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:| | 0.4063 | 0.97 | 14 | 3.7385 | 27.9171 | 6.7215 | 17.9315 | 24.363 | 71.9083 | | 0.3125 | 1.95 | 28 | 3.1917 | 28.1708 | 6.6895 | 18.1637 
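The hyperparameters above compose as expected: with a per-device batch of 8 and 32 gradient-accumulation steps, the effective train batch size works out to the reported 256. A quick check (assuming a single device, since the world size is not stated in the card; with N devices the product would scale by N):

```python
per_device_train_batch_size = 8
gradient_accumulation_steps = 32
num_devices = 1  # assumption: not stated in the card

total_train_batch_size = (per_device_train_batch_size
                          * gradient_accumulation_steps
                          * num_devices)
print(total_train_batch_size)  # 256, matching the card
```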
| 24.3987 | 96.2041 | | 0.2177 | 2.99 | 43 | 3.9998 | 29.3167 | 5.9 | 17.3608 | 25.6945 | 198.0473 | | 0.1753 | 3.97 | 57 | 4.2287 | 29.0605 | 6.2534 | 17.5744 | 25.6415 | 158.6509 | | 0.2747 | 4.94 | 71 | 4.1027 | 31.2245 | 6.5663 | 18.1588 | 26.8996 | 118.4438 | | 0.1045 | 5.98 | 86 | 5.0581 | 30.6056 | 6.8892 | 18.4933 | 26.4027 | 92.9882 | | 0.0875 | 6.96 | 100 | 4.5941 | 32.5234 | 7.3736 | 18.8958 | 28.4738 | 160.8964 | | 0.1572 | 8.0 | 115 | 4.9386 | 31.4658 | 7.2592 | 18.4796 | 27.6047 | 121.0178 | | 0.0867 | 8.97 | 129 | 4.5565 | 32.0531 | 7.0692 | 18.5551 | 27.3373 | 160.4793 | | 0.0748 | 9.74 | 140 | 5.0866 | 32.2717 | 7.7004 | 18.9107 | 28.3874 | 124.1893 | ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
eugene6/q-Taxi-v3
eugene6
2023-09-23T17:39:58Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T17:39:56Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.44 +/- 2.77 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="eugene6/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
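Once the pickled Q-table is loaded, acting with it is just a greedy arg-max over the row for the current state. A stdlib-only sketch (the `"qtable"` key name is an assumption about the pickle's layout, alongside the `"env_id"` key the card's snippet uses):

```python
def greedy_action(q_row):
    """Return the index of the highest-valued action for one state's Q-row."""
    return max(range(len(q_row)), key=lambda a: q_row[a])

# e.g. with model["qtable"][state] as the row (key name assumed):
# action = greedy_action(model["qtable"][state])
```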
badokorach/bert-finetuned-squad-7-2309
badokorach
2023-09-23T17:17:05Z
12
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "base_model:badokorach/bert-finetuned-squad-7", "base_model:finetune:badokorach/bert-finetuned-squad-7", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-23T16:17:36Z
--- license: apache-2.0 base_model: badokorach/bert-finetuned-squad-7 tags: - generated_from_keras_callback model-index: - name: badokorach/bert-finetuned-squad-7-2309 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # badokorach/bert-finetuned-squad-7-2309 This model is a fine-tuned version of [badokorach/bert-finetuned-squad-7](https://huggingface.co/badokorach/bert-finetuned-squad-7) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1499 - Validation Loss: 0.0 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2596, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 4, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0115 | 0.0 | 0 | | 0.2937 | 0.0 | 1 | | 0.2292 | 0.0 | 2 | | 0.1814 | 0.0 | 3 | | 0.1772 | 0.0 | 4 | | 0.1645 | 0.0 | 5 | | 0.1652 | 0.0 | 6 | | 0.1565 | 0.0 | 7 | | 0.1613 | 0.0 | 8 | | 0.1580 | 0.0 | 9 | | 0.1706 | 0.0 | 10 | | 0.1598 | 0.0 | 11 | | 0.1595 | 0.0 | 12 | | 0.1537 | 0.0 | 13 | | 0.1544 | 
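The optimizer entry above describes a 4-step linear warmup into a linear (power 1.0) polynomial decay from 2e-05 to 0 over 2596 steps. A pure-Python sketch of that shape (the exact step bookkeeping inside transformers' `WarmUp` wrapper may differ slightly; here the decay clock is assumed to start after warmup):

```python
def lr_at(step,
          init_lr=2e-05,
          warmup_steps=4,
          decay_steps=2596,
          end_lr=0.0,
          power=1.0):
    """Approximate the WarmUp + PolynomialDecay schedule from the card."""
    if step < warmup_steps:
        return init_lr * step / warmup_steps   # linear warmup
    t = min(step - warmup_steps, decay_steps)  # assumption: decay starts post-warmup
    return (init_lr - end_lr) * (1 - t / decay_steps) ** power + end_lr

print(lr_at(2))     # halfway through warmup
print(lr_at(4))     # warmup done: full 2e-05
print(lr_at(2600))  # fully decayed: 0.0
```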
0.0 | 14 | | 0.1576 | 0.0 | 15 | | 0.1552 | 0.0 | 16 | | 0.1530 | 0.0 | 17 | | 0.1554 | 0.0 | 18 | | 0.1499 | 0.0 | 19 | ### Framework versions - Transformers 4.33.2 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.13.3
SABARINATHKR777/my-pet-parot-sxh
SABARINATHKR777
2023-09-23T17:14:36Z
1
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-23T17:09:37Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-PAROT-SXH Dreambooth model trained by SABARINATHKR777 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VAST-306 Sample pictures of this concept: ![0](https://huggingface.co/SABARINATHKR777/my-pet-parot-sxh/resolve/main/sample_images/sxh_(3).jpg)
Kushala/falcon
Kushala
2023-09-23T16:52:53Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-23T16:52:31Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
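The quantization settings listed above map directly onto transformers' `BitsAndBytesConfig`. A hedged reconstruction as a config fragment (assumes `transformers` with bitsandbytes support and `torch` are installed; unlisted fields keep their defaults):

```python
import torch
from transformers import BitsAndBytesConfig

# Recreates the card's bitsandbytes settings
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Typically passed as:
# AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```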
Undi95/MXLewd-L2-20B-GGUF
Undi95
2023-09-23T16:52:39Z
12
7
null
[ "gguf", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2023-09-22T17:29:56Z
--- license: cc-by-nc-4.0 --- Merge: ```yaml layer_slices: - model: ./MXLewd-L2-20B-part2 start: 0 end: 16 - model: ./MXLewd-L2-20B-part1 start: 8 end: 20 - model: ./MXLewd-L2-20B-part2 start: 17 end: 32 - model: ./MXLewd-L2-20B-part1 start: 21 end: 40 ``` Part 2 is ReMM (0.33) and Xwin (0.66) Part 1 is Xwin (0.33) and MLewd (0.66) <!-- description start --> ## Models used - Undi95/MLewd-L2-13B-v2-3 - Undi95/ReMM-v2.1-L2-13B - Xwin-LM/Xwin-LM-13B-V0.1 <!-- description end --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that completes the request. ### Instruction: {prompt} ### Response: ```
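Read literally, and assuming the `start`/`end` bounds are inclusive (part2 stops at 16 and resumes at 17, part1 stops at 20 and resumes at 21, with no overlap), the recipe above stacks slices of the following sizes:

```python
layer_slices = [
    ("MXLewd-L2-20B-part2", 0, 16),
    ("MXLewd-L2-20B-part1", 8, 20),
    ("MXLewd-L2-20B-part2", 17, 32),
    ("MXLewd-L2-20B-part1", 21, 40),
]

# inclusive [start, end] -> end - start + 1 layers (assumption about the tool's indexing)
lengths = [end - start + 1 for _, start, end in layer_slices]
print(lengths)       # [17, 13, 16, 20]
print(sum(lengths))  # 66 layers in the stacked model, under this reading
```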
CyberHarem/kaname_madoka_puellamagimadokamagica
CyberHarem
2023-09-23T16:44:49Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/kaname_madoka_puellamagimadokamagica", "license:mit", "region:us" ]
text-to-image
2023-09-23T16:27:55Z
--- license: mit datasets: - CyberHarem/kaname_madoka_puellamagimadokamagica pipeline_tag: text-to-image tags: - art --- # Lora of kaname_madoka_puellamagimadokamagica This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 7920, download `7920/kaname_madoka_puellamagimadokamagica.pt` as the embedding and `7920/kaname_madoka_puellamagimadokamagica.safetensors` as the LoRA. With both files loaded, you can generate images of the desired character. **The best step we recommend is 7920**, with a score of 0.956. The trigger words are: 1. `kaname_madoka_puellamagimadokamagica` 2. `pink_hair, short_hair, twintails, short_twintails, pink_eyes, ribbon, hair_ribbon, bow, blush` This model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 
5. Individuals who finds the generated image content offensive to their values. These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:--------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 9900 | 0.922 | [Download](9900/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-9900](9900/previews/pattern_1.png) | ![pattern_2-9900](9900/previews/pattern_2.png) | 
![pattern_3-9900](9900/previews/pattern_3.png) | ![pattern_4-9900](9900/previews/pattern_4.png) | ![pattern_5-9900](9900/previews/pattern_5.png) | ![pattern_6-9900](9900/previews/pattern_6.png) | ![pattern_7-9900](9900/previews/pattern_7.png) | ![pattern_8-9900](9900/previews/pattern_8.png) | ![pattern_9-9900](9900/previews/pattern_9.png) | ![pattern_10-9900](9900/previews/pattern_10.png) | ![pattern_11-9900](9900/previews/pattern_11.png) | ![pattern_12-9900](9900/previews/pattern_12.png) | ![pattern_13-9900](9900/previews/pattern_13.png) | ![pattern_14-9900](9900/previews/pattern_14.png) | ![pattern_15-9900](9900/previews/pattern_15.png) | ![pattern_16-9900](9900/previews/pattern_16.png) | ![pattern_17-9900](9900/previews/pattern_17.png) | ![pattern_18-9900](9900/previews/pattern_18.png) | ![bikini-9900](9900/previews/bikini.png) | [<NSFW, click to see>](9900/previews/bondage.png) | ![free-9900](9900/previews/free.png) | ![maid-9900](9900/previews/maid.png) | ![miko-9900](9900/previews/miko.png) | [<NSFW, click to see>](9900/previews/nude.png) | [<NSFW, click to see>](9900/previews/nude2.png) | ![suit-9900](9900/previews/suit.png) | ![yukata-9900](9900/previews/yukata.png) | | 9240 | 0.899 | [Download](9240/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-9240](9240/previews/pattern_1.png) | ![pattern_2-9240](9240/previews/pattern_2.png) | ![pattern_3-9240](9240/previews/pattern_3.png) | ![pattern_4-9240](9240/previews/pattern_4.png) | ![pattern_5-9240](9240/previews/pattern_5.png) | ![pattern_6-9240](9240/previews/pattern_6.png) | ![pattern_7-9240](9240/previews/pattern_7.png) | ![pattern_8-9240](9240/previews/pattern_8.png) | ![pattern_9-9240](9240/previews/pattern_9.png) | ![pattern_10-9240](9240/previews/pattern_10.png) | ![pattern_11-9240](9240/previews/pattern_11.png) | ![pattern_12-9240](9240/previews/pattern_12.png) | ![pattern_13-9240](9240/previews/pattern_13.png) | ![pattern_14-9240](9240/previews/pattern_14.png) | 
![pattern_15-9240](9240/previews/pattern_15.png) | ![pattern_16-9240](9240/previews/pattern_16.png) | ![pattern_17-9240](9240/previews/pattern_17.png) | ![pattern_18-9240](9240/previews/pattern_18.png) | ![bikini-9240](9240/previews/bikini.png) | [<NSFW, click to see>](9240/previews/bondage.png) | ![free-9240](9240/previews/free.png) | ![maid-9240](9240/previews/maid.png) | ![miko-9240](9240/previews/miko.png) | [<NSFW, click to see>](9240/previews/nude.png) | [<NSFW, click to see>](9240/previews/nude2.png) | ![suit-9240](9240/previews/suit.png) | ![yukata-9240](9240/previews/yukata.png) | | 8580 | 0.950 | [Download](8580/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-8580](8580/previews/pattern_1.png) | ![pattern_2-8580](8580/previews/pattern_2.png) | ![pattern_3-8580](8580/previews/pattern_3.png) | ![pattern_4-8580](8580/previews/pattern_4.png) | ![pattern_5-8580](8580/previews/pattern_5.png) | ![pattern_6-8580](8580/previews/pattern_6.png) | ![pattern_7-8580](8580/previews/pattern_7.png) | ![pattern_8-8580](8580/previews/pattern_8.png) | ![pattern_9-8580](8580/previews/pattern_9.png) | ![pattern_10-8580](8580/previews/pattern_10.png) | ![pattern_11-8580](8580/previews/pattern_11.png) | ![pattern_12-8580](8580/previews/pattern_12.png) | ![pattern_13-8580](8580/previews/pattern_13.png) | ![pattern_14-8580](8580/previews/pattern_14.png) | ![pattern_15-8580](8580/previews/pattern_15.png) | ![pattern_16-8580](8580/previews/pattern_16.png) | ![pattern_17-8580](8580/previews/pattern_17.png) | ![pattern_18-8580](8580/previews/pattern_18.png) | ![bikini-8580](8580/previews/bikini.png) | [<NSFW, click to see>](8580/previews/bondage.png) | ![free-8580](8580/previews/free.png) | ![maid-8580](8580/previews/maid.png) | ![miko-8580](8580/previews/miko.png) | [<NSFW, click to see>](8580/previews/nude.png) | [<NSFW, click to see>](8580/previews/nude2.png) | ![suit-8580](8580/previews/suit.png) | ![yukata-8580](8580/previews/yukata.png) | | **7920** | **0.956** | 
[**Download**](7920/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-7920](7920/previews/pattern_1.png) | ![pattern_2-7920](7920/previews/pattern_2.png) | ![pattern_3-7920](7920/previews/pattern_3.png) | ![pattern_4-7920](7920/previews/pattern_4.png) | ![pattern_5-7920](7920/previews/pattern_5.png) | ![pattern_6-7920](7920/previews/pattern_6.png) | ![pattern_7-7920](7920/previews/pattern_7.png) | ![pattern_8-7920](7920/previews/pattern_8.png) | ![pattern_9-7920](7920/previews/pattern_9.png) | ![pattern_10-7920](7920/previews/pattern_10.png) | ![pattern_11-7920](7920/previews/pattern_11.png) | ![pattern_12-7920](7920/previews/pattern_12.png) | ![pattern_13-7920](7920/previews/pattern_13.png) | ![pattern_14-7920](7920/previews/pattern_14.png) | ![pattern_15-7920](7920/previews/pattern_15.png) | ![pattern_16-7920](7920/previews/pattern_16.png) | ![pattern_17-7920](7920/previews/pattern_17.png) | ![pattern_18-7920](7920/previews/pattern_18.png) | ![bikini-7920](7920/previews/bikini.png) | [<NSFW, click to see>](7920/previews/bondage.png) | ![free-7920](7920/previews/free.png) | ![maid-7920](7920/previews/maid.png) | ![miko-7920](7920/previews/miko.png) | [<NSFW, click to see>](7920/previews/nude.png) | [<NSFW, click to see>](7920/previews/nude2.png) | ![suit-7920](7920/previews/suit.png) | ![yukata-7920](7920/previews/yukata.png) | | 7260 | 0.943 | [Download](7260/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-7260](7260/previews/pattern_1.png) | ![pattern_2-7260](7260/previews/pattern_2.png) | ![pattern_3-7260](7260/previews/pattern_3.png) | ![pattern_4-7260](7260/previews/pattern_4.png) | ![pattern_5-7260](7260/previews/pattern_5.png) | ![pattern_6-7260](7260/previews/pattern_6.png) | ![pattern_7-7260](7260/previews/pattern_7.png) | ![pattern_8-7260](7260/previews/pattern_8.png) | ![pattern_9-7260](7260/previews/pattern_9.png) | ![pattern_10-7260](7260/previews/pattern_10.png) | ![pattern_11-7260](7260/previews/pattern_11.png) | 
![pattern_12-7260](7260/previews/pattern_12.png) | ![pattern_13-7260](7260/previews/pattern_13.png) | ![pattern_14-7260](7260/previews/pattern_14.png) | ![pattern_15-7260](7260/previews/pattern_15.png) | ![pattern_16-7260](7260/previews/pattern_16.png) | ![pattern_17-7260](7260/previews/pattern_17.png) | ![pattern_18-7260](7260/previews/pattern_18.png) | ![bikini-7260](7260/previews/bikini.png) | [<NSFW, click to see>](7260/previews/bondage.png) | ![free-7260](7260/previews/free.png) | ![maid-7260](7260/previews/maid.png) | ![miko-7260](7260/previews/miko.png) | [<NSFW, click to see>](7260/previews/nude.png) | [<NSFW, click to see>](7260/previews/nude2.png) | ![suit-7260](7260/previews/suit.png) | ![yukata-7260](7260/previews/yukata.png) | | 6600 | 0.942 | [Download](6600/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-6600](6600/previews/pattern_1.png) | ![pattern_2-6600](6600/previews/pattern_2.png) | ![pattern_3-6600](6600/previews/pattern_3.png) | ![pattern_4-6600](6600/previews/pattern_4.png) | ![pattern_5-6600](6600/previews/pattern_5.png) | ![pattern_6-6600](6600/previews/pattern_6.png) | ![pattern_7-6600](6600/previews/pattern_7.png) | ![pattern_8-6600](6600/previews/pattern_8.png) | ![pattern_9-6600](6600/previews/pattern_9.png) | ![pattern_10-6600](6600/previews/pattern_10.png) | ![pattern_11-6600](6600/previews/pattern_11.png) | ![pattern_12-6600](6600/previews/pattern_12.png) | ![pattern_13-6600](6600/previews/pattern_13.png) | ![pattern_14-6600](6600/previews/pattern_14.png) | ![pattern_15-6600](6600/previews/pattern_15.png) | ![pattern_16-6600](6600/previews/pattern_16.png) | ![pattern_17-6600](6600/previews/pattern_17.png) | ![pattern_18-6600](6600/previews/pattern_18.png) | ![bikini-6600](6600/previews/bikini.png) | [<NSFW, click to see>](6600/previews/bondage.png) | ![free-6600](6600/previews/free.png) | ![maid-6600](6600/previews/maid.png) | ![miko-6600](6600/previews/miko.png) | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, 
click to see>](6600/previews/nude2.png) | ![suit-6600](6600/previews/suit.png) | ![yukata-6600](6600/previews/yukata.png) | | 5940 | 0.868 | [Download](5940/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-5940](5940/previews/pattern_1.png) | ![pattern_2-5940](5940/previews/pattern_2.png) | ![pattern_3-5940](5940/previews/pattern_3.png) | ![pattern_4-5940](5940/previews/pattern_4.png) | ![pattern_5-5940](5940/previews/pattern_5.png) | ![pattern_6-5940](5940/previews/pattern_6.png) | ![pattern_7-5940](5940/previews/pattern_7.png) | ![pattern_8-5940](5940/previews/pattern_8.png) | ![pattern_9-5940](5940/previews/pattern_9.png) | ![pattern_10-5940](5940/previews/pattern_10.png) | ![pattern_11-5940](5940/previews/pattern_11.png) | ![pattern_12-5940](5940/previews/pattern_12.png) | ![pattern_13-5940](5940/previews/pattern_13.png) | ![pattern_14-5940](5940/previews/pattern_14.png) | ![pattern_15-5940](5940/previews/pattern_15.png) | ![pattern_16-5940](5940/previews/pattern_16.png) | ![pattern_17-5940](5940/previews/pattern_17.png) | ![pattern_18-5940](5940/previews/pattern_18.png) | ![bikini-5940](5940/previews/bikini.png) | [<NSFW, click to see>](5940/previews/bondage.png) | ![free-5940](5940/previews/free.png) | ![maid-5940](5940/previews/maid.png) | ![miko-5940](5940/previews/miko.png) | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) | ![suit-5940](5940/previews/suit.png) | ![yukata-5940](5940/previews/yukata.png) | | 5280 | 0.908 | [Download](5280/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-5280](5280/previews/pattern_1.png) | ![pattern_2-5280](5280/previews/pattern_2.png) | ![pattern_3-5280](5280/previews/pattern_3.png) | ![pattern_4-5280](5280/previews/pattern_4.png) | ![pattern_5-5280](5280/previews/pattern_5.png) | ![pattern_6-5280](5280/previews/pattern_6.png) | ![pattern_7-5280](5280/previews/pattern_7.png) | ![pattern_8-5280](5280/previews/pattern_8.png) | 
![pattern_9-5280](5280/previews/pattern_9.png) | ![pattern_10-5280](5280/previews/pattern_10.png) | ![pattern_11-5280](5280/previews/pattern_11.png) | ![pattern_12-5280](5280/previews/pattern_12.png) | ![pattern_13-5280](5280/previews/pattern_13.png) | ![pattern_14-5280](5280/previews/pattern_14.png) | ![pattern_15-5280](5280/previews/pattern_15.png) | ![pattern_16-5280](5280/previews/pattern_16.png) | ![pattern_17-5280](5280/previews/pattern_17.png) | ![pattern_18-5280](5280/previews/pattern_18.png) | ![bikini-5280](5280/previews/bikini.png) | [<NSFW, click to see>](5280/previews/bondage.png) | ![free-5280](5280/previews/free.png) | ![maid-5280](5280/previews/maid.png) | ![miko-5280](5280/previews/miko.png) | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) | ![suit-5280](5280/previews/suit.png) | ![yukata-5280](5280/previews/yukata.png) | | 4620 | 0.904 | [Download](4620/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-4620](4620/previews/pattern_1.png) | ![pattern_2-4620](4620/previews/pattern_2.png) | ![pattern_3-4620](4620/previews/pattern_3.png) | ![pattern_4-4620](4620/previews/pattern_4.png) | ![pattern_5-4620](4620/previews/pattern_5.png) | ![pattern_6-4620](4620/previews/pattern_6.png) | ![pattern_7-4620](4620/previews/pattern_7.png) | ![pattern_8-4620](4620/previews/pattern_8.png) | ![pattern_9-4620](4620/previews/pattern_9.png) | ![pattern_10-4620](4620/previews/pattern_10.png) | ![pattern_11-4620](4620/previews/pattern_11.png) | ![pattern_12-4620](4620/previews/pattern_12.png) | ![pattern_13-4620](4620/previews/pattern_13.png) | ![pattern_14-4620](4620/previews/pattern_14.png) | ![pattern_15-4620](4620/previews/pattern_15.png) | ![pattern_16-4620](4620/previews/pattern_16.png) | ![pattern_17-4620](4620/previews/pattern_17.png) | ![pattern_18-4620](4620/previews/pattern_18.png) | ![bikini-4620](4620/previews/bikini.png) | [<NSFW, click to see>](4620/previews/bondage.png) | 
![free-4620](4620/previews/free.png) | ![maid-4620](4620/previews/maid.png) | ![miko-4620](4620/previews/miko.png) | [<NSFW, click to see>](4620/previews/nude.png) | [<NSFW, click to see>](4620/previews/nude2.png) | ![suit-4620](4620/previews/suit.png) | ![yukata-4620](4620/previews/yukata.png) | | 3960 | 0.900 | [Download](3960/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-3960](3960/previews/pattern_1.png) | ![pattern_2-3960](3960/previews/pattern_2.png) | ![pattern_3-3960](3960/previews/pattern_3.png) | ![pattern_4-3960](3960/previews/pattern_4.png) | ![pattern_5-3960](3960/previews/pattern_5.png) | ![pattern_6-3960](3960/previews/pattern_6.png) | ![pattern_7-3960](3960/previews/pattern_7.png) | ![pattern_8-3960](3960/previews/pattern_8.png) | ![pattern_9-3960](3960/previews/pattern_9.png) | ![pattern_10-3960](3960/previews/pattern_10.png) | ![pattern_11-3960](3960/previews/pattern_11.png) | ![pattern_12-3960](3960/previews/pattern_12.png) | ![pattern_13-3960](3960/previews/pattern_13.png) | ![pattern_14-3960](3960/previews/pattern_14.png) | ![pattern_15-3960](3960/previews/pattern_15.png) | ![pattern_16-3960](3960/previews/pattern_16.png) | ![pattern_17-3960](3960/previews/pattern_17.png) | ![pattern_18-3960](3960/previews/pattern_18.png) | ![bikini-3960](3960/previews/bikini.png) | [<NSFW, click to see>](3960/previews/bondage.png) | ![free-3960](3960/previews/free.png) | ![maid-3960](3960/previews/maid.png) | ![miko-3960](3960/previews/miko.png) | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) | ![suit-3960](3960/previews/suit.png) | ![yukata-3960](3960/previews/yukata.png) | | 3300 | 0.840 | [Download](3300/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-3300](3300/previews/pattern_1.png) | ![pattern_2-3300](3300/previews/pattern_2.png) | ![pattern_3-3300](3300/previews/pattern_3.png) | ![pattern_4-3300](3300/previews/pattern_4.png) | ![pattern_5-3300](3300/previews/pattern_5.png) | 
![pattern_6-3300](3300/previews/pattern_6.png) | ![pattern_7-3300](3300/previews/pattern_7.png) | ![pattern_8-3300](3300/previews/pattern_8.png) | ![pattern_9-3300](3300/previews/pattern_9.png) | ![pattern_10-3300](3300/previews/pattern_10.png) | ![pattern_11-3300](3300/previews/pattern_11.png) | ![pattern_12-3300](3300/previews/pattern_12.png) | ![pattern_13-3300](3300/previews/pattern_13.png) | ![pattern_14-3300](3300/previews/pattern_14.png) | ![pattern_15-3300](3300/previews/pattern_15.png) | ![pattern_16-3300](3300/previews/pattern_16.png) | ![pattern_17-3300](3300/previews/pattern_17.png) | ![pattern_18-3300](3300/previews/pattern_18.png) | ![bikini-3300](3300/previews/bikini.png) | [<NSFW, click to see>](3300/previews/bondage.png) | ![free-3300](3300/previews/free.png) | ![maid-3300](3300/previews/maid.png) | ![miko-3300](3300/previews/miko.png) | [<NSFW, click to see>](3300/previews/nude.png) | [<NSFW, click to see>](3300/previews/nude2.png) | ![suit-3300](3300/previews/suit.png) | ![yukata-3300](3300/previews/yukata.png) | | 2640 | 0.923 | [Download](2640/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-2640](2640/previews/pattern_1.png) | ![pattern_2-2640](2640/previews/pattern_2.png) | ![pattern_3-2640](2640/previews/pattern_3.png) | ![pattern_4-2640](2640/previews/pattern_4.png) | ![pattern_5-2640](2640/previews/pattern_5.png) | ![pattern_6-2640](2640/previews/pattern_6.png) | ![pattern_7-2640](2640/previews/pattern_7.png) | ![pattern_8-2640](2640/previews/pattern_8.png) | ![pattern_9-2640](2640/previews/pattern_9.png) | ![pattern_10-2640](2640/previews/pattern_10.png) | ![pattern_11-2640](2640/previews/pattern_11.png) | ![pattern_12-2640](2640/previews/pattern_12.png) | ![pattern_13-2640](2640/previews/pattern_13.png) | ![pattern_14-2640](2640/previews/pattern_14.png) | ![pattern_15-2640](2640/previews/pattern_15.png) | ![pattern_16-2640](2640/previews/pattern_16.png) | ![pattern_17-2640](2640/previews/pattern_17.png) | 
![pattern_18-2640](2640/previews/pattern_18.png) | ![bikini-2640](2640/previews/bikini.png) | [<NSFW, click to see>](2640/previews/bondage.png) | ![free-2640](2640/previews/free.png) | ![maid-2640](2640/previews/maid.png) | ![miko-2640](2640/previews/miko.png) | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) | ![suit-2640](2640/previews/suit.png) | ![yukata-2640](2640/previews/yukata.png) | | 1980 | 0.888 | [Download](1980/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-1980](1980/previews/pattern_1.png) | ![pattern_2-1980](1980/previews/pattern_2.png) | ![pattern_3-1980](1980/previews/pattern_3.png) | ![pattern_4-1980](1980/previews/pattern_4.png) | ![pattern_5-1980](1980/previews/pattern_5.png) | ![pattern_6-1980](1980/previews/pattern_6.png) | ![pattern_7-1980](1980/previews/pattern_7.png) | ![pattern_8-1980](1980/previews/pattern_8.png) | ![pattern_9-1980](1980/previews/pattern_9.png) | ![pattern_10-1980](1980/previews/pattern_10.png) | ![pattern_11-1980](1980/previews/pattern_11.png) | ![pattern_12-1980](1980/previews/pattern_12.png) | ![pattern_13-1980](1980/previews/pattern_13.png) | ![pattern_14-1980](1980/previews/pattern_14.png) | ![pattern_15-1980](1980/previews/pattern_15.png) | ![pattern_16-1980](1980/previews/pattern_16.png) | ![pattern_17-1980](1980/previews/pattern_17.png) | ![pattern_18-1980](1980/previews/pattern_18.png) | ![bikini-1980](1980/previews/bikini.png) | [<NSFW, click to see>](1980/previews/bondage.png) | ![free-1980](1980/previews/free.png) | ![maid-1980](1980/previews/maid.png) | ![miko-1980](1980/previews/miko.png) | [<NSFW, click to see>](1980/previews/nude.png) | [<NSFW, click to see>](1980/previews/nude2.png) | ![suit-1980](1980/previews/suit.png) | ![yukata-1980](1980/previews/yukata.png) | | 1320 | 0.865 | [Download](1320/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-1320](1320/previews/pattern_1.png) | ![pattern_2-1320](1320/previews/pattern_2.png) | 
![pattern_3-1320](1320/previews/pattern_3.png) | ![pattern_4-1320](1320/previews/pattern_4.png) | ![pattern_5-1320](1320/previews/pattern_5.png) | ![pattern_6-1320](1320/previews/pattern_6.png) | ![pattern_7-1320](1320/previews/pattern_7.png) | ![pattern_8-1320](1320/previews/pattern_8.png) | ![pattern_9-1320](1320/previews/pattern_9.png) | ![pattern_10-1320](1320/previews/pattern_10.png) | ![pattern_11-1320](1320/previews/pattern_11.png) | ![pattern_12-1320](1320/previews/pattern_12.png) | ![pattern_13-1320](1320/previews/pattern_13.png) | ![pattern_14-1320](1320/previews/pattern_14.png) | ![pattern_15-1320](1320/previews/pattern_15.png) | ![pattern_16-1320](1320/previews/pattern_16.png) | ![pattern_17-1320](1320/previews/pattern_17.png) | ![pattern_18-1320](1320/previews/pattern_18.png) | ![bikini-1320](1320/previews/bikini.png) | [<NSFW, click to see>](1320/previews/bondage.png) | ![free-1320](1320/previews/free.png) | ![maid-1320](1320/previews/maid.png) | ![miko-1320](1320/previews/miko.png) | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) | ![suit-1320](1320/previews/suit.png) | ![yukata-1320](1320/previews/yukata.png) | | 660 | 0.672 | [Download](660/kaname_madoka_puellamagimadokamagica.zip) | ![pattern_1-660](660/previews/pattern_1.png) | ![pattern_2-660](660/previews/pattern_2.png) | ![pattern_3-660](660/previews/pattern_3.png) | ![pattern_4-660](660/previews/pattern_4.png) | ![pattern_5-660](660/previews/pattern_5.png) | ![pattern_6-660](660/previews/pattern_6.png) | ![pattern_7-660](660/previews/pattern_7.png) | ![pattern_8-660](660/previews/pattern_8.png) | ![pattern_9-660](660/previews/pattern_9.png) | ![pattern_10-660](660/previews/pattern_10.png) | ![pattern_11-660](660/previews/pattern_11.png) | ![pattern_12-660](660/previews/pattern_12.png) | ![pattern_13-660](660/previews/pattern_13.png) | ![pattern_14-660](660/previews/pattern_14.png) | ![pattern_15-660](660/previews/pattern_15.png) | 
![pattern_16-660](660/previews/pattern_16.png) | ![pattern_17-660](660/previews/pattern_17.png) | ![pattern_18-660](660/previews/pattern_18.png) | ![bikini-660](660/previews/bikini.png) | [<NSFW, click to see>](660/previews/bondage.png) | ![free-660](660/previews/free.png) | ![maid-660](660/previews/maid.png) | ![miko-660](660/previews/miko.png) | [<NSFW, click to see>](660/previews/nude.png) | [<NSFW, click to see>](660/previews/nude2.png) | ![suit-660](660/previews/suit.png) | ![yukata-660](660/previews/yukata.png) |
glukas/smd-audio-diffusion-256
glukas
2023-09-23T16:30:26Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-09-23T15:57:14Z
--- license: openrail --- DDPM Model (https://github.com/teticio/audio-diffusion) trained on the Saarland Music Data (SMD): Meinard Müller, Verena Konz, Wolfgang Bogler, and Vlora Arifi-Müller. "Saarland Music Data (SMD)." In Late-Breaking and Demo Session of the International Conference on Music Information Retrieval (ISMIR), 2011. https://resources.mpi-inf.mpg.de/SMD/SMD_MIDI-Audio-Piano-Music.html
FilipKubis/ppo-LunarLander-v2
FilipKubis
2023-09-23T16:20:20Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T16:20:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 240.38 +/- 45.01 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption and may need adjusting): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="FilipKubis/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
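For context, the `mean_reward` value above follows the usual stable-baselines3 evaluation convention: the mean and (population) standard deviation of episodic returns over a set of evaluation episodes. A generic sketch with hypothetical episode returns (illustrative only, not this repository's actual evaluation data):

```python
import statistics

# Hypothetical evaluation-episode returns, for illustration only
episode_rewards = [210.5, 260.1, 255.3, 190.2, 275.9]

mean_reward = statistics.mean(episode_rewards)
std_reward = statistics.pstdev(episode_rewards)  # population std, ddof=0
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")  # → 238.40 +/- 32.45
```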
Tiabet/bert-base-uncased-finetuned-Tiabet
Tiabet
2023-09-23T16:18:35Z
161
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "text classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-23T14:59:17Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - text classification - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-Tiabet results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-Tiabet This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
Adbhut/speecht5-finetuned-voxpopuli_nl
Adbhut
2023-09-23T16:14:57Z
75
0
transformers
[ "transformers", "pytorch", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-09-23T15:07:12Z
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: speecht5-finetuned-voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5-finetuned-voxpopuli_nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4792 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5303 | 8.6 | 500 | 0.4960 | | 0.5125 | 17.2 | 1000 | 0.4834 | | 0.4998 | 25.81 | 1500 | 0.4791 | | 0.4953 | 34.41 | 2000 | 0.4792 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
eisenjulian/viz-wiz-bert-base-uncased_f32
eisenjulian
2023-09-23T16:04:53Z
117
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-21T15:35:01Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: viz-wiz-bert-base-uncased_f32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # viz-wiz-bert-base-uncased_f32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 321 | 1.1645 | | 1.344 | 2.0 | 642 | 1.0789 | | 1.344 | 3.0 | 963 | 1.0537 | | 1.1234 | 4.0 | 1284 | 1.0195 | | 1.065 | 5.0 | 1605 | 1.0723 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
niyasin/headshot
niyasin
2023-09-23T15:52:18Z
1
0
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-09-23T14:22:00Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: professional linked in photo tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
eisenjulian/viz-wiz-bert-base-uncased_f16
eisenjulian
2023-09-23T15:46:40Z
109
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-21T21:27:56Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: viz-wiz-bert-base-uncased_f16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # viz-wiz-bert-base-uncased_f16 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 321 | 1.1645 | | 1.344 | 2.0 | 642 | 1.0789 | | 1.344 | 3.0 | 963 | 1.0537 | | 1.1234 | 4.0 | 1284 | 1.0195 | | 1.065 | 5.0 | 1605 | 1.0723 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
DriveMyScream/News_Similarity_Analysis
DriveMyScream
2023-09-23T15:42:18Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2023-09-23T15:41:28Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
Sanjay1234/Trillo-Project-with-10-extra-rows
Sanjay1234
2023-09-23T15:35:10Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-09-23T15:34:45Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # Sanjay1234/Trillo-Project-with-10-extra-rows This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("Sanjay1234/Trillo-Project-with-10-extra-rows") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
MnLgt/CartPole-v1
MnLgt
2023-09-23T15:34:41Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T14:26:33Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
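Since the card above only links out for background, here is a generic sketch of the discounted-return computation at the heart of the REINFORCE update, G_t = r_t + gamma * G_{t+1} (illustrative, not taken from this model's training code):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for each step of one episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# With gamma = 0.5, rewards [1, 1, 1] give returns [1.75, 1.5, 1.0]
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))
```

In REINFORCE, each log-probability of an action taken is then weighted by the corresponding G_t when forming the policy-gradient loss.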
AmelieSchreiber/esm2_t12_35M_lora_binding_sites_v2_cp1
AmelieSchreiber
2023-09-23T15:31:39Z
15
0
peft
[ "peft", "ESM-2", "protein language model", "binding sites", "biology", "token-classification", "en", "dataset:AmelieSchreiber/binding_sites_random_split_by_family_550K", "base_model:facebook/esm2_t12_35M_UR50D", "base_model:adapter:facebook/esm2_t12_35M_UR50D", "license:mit", "region:us" ]
token-classification
2023-09-13T11:24:50Z
--- language: - en license: mit library_name: peft tags: - ESM-2 - protein language model - binding sites - biology datasets: - AmelieSchreiber/binding_sites_random_split_by_family_550K metrics: - accuracy - f1 - roc_auc - precision - recall - matthews_correlation pipeline_tag: token-classification base_model: facebook/esm2_t12_35M_UR50D --- # ESM-2 for Binding Site Prediction **This model is overfit (see below).** This model is a finetuned version of the 35M parameter `esm2_t12_35M_UR50D` ([see here](https://huggingface.co/facebook/esm2_t12_35M_UR50D) and [here](https://huggingface.co/docs/transformers/model_doc/esm) for more details). The model was finetuned with LoRA for the binary token classification task of predicting binding sites (and active sites) of protein sequences based on sequence alone. The model may be underfit and undertrained; however, it still achieved better performance on the test set in terms of loss, accuracy, precision, recall, F1 score, ROC_AUC, and Matthews Correlation Coefficient (MCC) compared to the models trained on the smaller dataset [found here](https://huggingface.co/datasets/AmelieSchreiber/binding_sites_random_split_by_family) of ~209K protein sequences. Note that this model has a high recall, meaning it is likely to detect binding sites, but it has a low precision, meaning the model will likely return false positives as well. ## Training procedure This model was finetuned on ~549K protein sequences from the UniProt database. The dataset can be found [here](https://huggingface.co/datasets/AmelieSchreiber/binding_sites_random_split_by_family_550K). 
The model obtains the following train and test metrics: ```python Train: {'accuracy': 0.9905461579981686, 'precision': 0.7695765003685506, 'recall': 0.9841352974610041, 'f1': 0.8637307441810476, 'auc': 0.9874413786006525, 'mcc': 0.8658850560635515} Test: {'accuracy': 0.9394282959813123, 'precision': 0.3662722265170941, 'recall': 0.8330231316088238, 'f1': 0.5088208423175958, 'auc': 0.8883078682492643, 'mcc': 0.5283098562376193} ``` To analyze the train and test metrics, we will consider each metric individually and then offer a comprehensive view of the model’s performance. Let's start: ### **1. Accuracy** - **Train**: 99.05% - **Test**: 93.94% The accuracy is quite high in both the training and test datasets, indicating that the model is correctly identifying the positive and negative classes most of the time. ### **2. Precision** - **Train**: 76.96% - **Test**: 36.63% The precision, which measures the proportion of true positive predictions among all positive predictions, drops significantly in the test set. This suggests that the model might be identifying too many false positives when generalized to unseen data. ### **3. Recall** - **Train**: 98.41% - **Test**: 83.30% The recall, which indicates the proportion of actual positives correctly identified, remains quite high in the test set, although lower than in the training set. This suggests the model is quite sensitive and is able to identify most of the positive cases. ### **4. F1-Score** - **Train**: 86.37% - **Test**: 50.88% The F1-score is the harmonic mean of precision and recall. The significant drop in the F1-score from training to testing indicates that the balance between precision and recall has worsened in the test set, which is primarily due to the lower precision. ### **5. AUC (Area Under the ROC Curve)** - **Train**: 98.74% - **Test**: 88.83% The AUC is high in both training and testing, but it decreases in the test set. 
A high AUC indicates that the model has a good measure of separability and is able to distinguish between the positive and negative classes well. ### **6. MCC (Matthews Correlation Coefficient)** - **Train**: 86.59% - **Test**: 52.83% MCC is a balanced metric that considers true and false positives and negatives. The decline in MCC from training to testing indicates a decrease in the quality of binary classifications. ### **Overall Analysis** - **Overfitting**: The significant drop in metrics such as precision, F1-score, and MCC from training to test set suggests that the model might be overfitting to the training data, i.e., it may not generalize well to unseen data. - **High Recall, Low Precision**: The model has a high recall but low precision on the test set, indicating that it is identifying too many cases as positive, including those that are actually negative (false positives). This could be a reflection of a model that is biased towards predicting the positive class. - **Improvement Suggestions**: - **Data Augmentation**: Consider data augmentation strategies to make the model more robust. - **Class Weights**: If there is a class imbalance in the dataset, adjusting the class weights during training might help. - **Hyperparameter Tuning**: Experiment with different hyperparameters, including the learning rate, batch size, etc., to see if you can improve the model's performance on the test set. - **Feature Engineering**: Consider revisiting the features used to train the model. Sometimes, introducing new features or removing irrelevant ones can help improve performance. In conclusion, while the model performs excellently on the training set, its performance drops in the test set, suggesting that there is room for improvement to make the model more generalizable to unseen data. It would be beneficial to look into strategies to reduce overfitting and improve precision without significantly sacrificing recall. 
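For reference, the scalar metrics above (all except AUC, which requires the raw prediction scores) can be computed directly from confusion-matrix counts. Below is a minimal pure-Python sketch of the formulas, using toy counts chosen to show the same high-recall/low-precision pattern as the test set; these are not the actual evaluation data:

```python
import math

def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, F1, and MCC from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    # MCC stays informative even under heavy class imbalance.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1, "mcc": mcc}

# Toy counts: many false positives relative to true positives, few false negatives.
m = classification_metrics(tp=80, fp=140, fn=16, tn=764)
print({k: round(v, 4) for k, v in m.items()})
```

Note how accuracy stays high while precision and MCC drop, mirroring the behavior discussed above.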
The dataset size increase from ~209K protein sequences to ~549K clearly improved performance in terms of test metrics. We used Hugging Face's parameter-efficient fine-tuning (PEFT) library to finetune with Low Rank Adaptation (LoRA). We decided to use a rank of 2 for the LoRA, as this was shown to slightly improve the test metrics compared to rank 8 and rank 16 on the same model trained on the smaller dataset. ### Framework versions - PEFT 0.5.0 ## Using the model To use the model on one of your protein sequences, try running the following: ```python from transformers import AutoModelForTokenClassification, AutoTokenizer from peft import PeftModel import torch # Path to the saved LoRA model model_path = "AmelieSchreiber/esm2_t12_35M_lora_binding_sites_v2_cp1" # ESM2 base model base_model_path = "facebook/esm2_t12_35M_UR50D" # Load the model base_model = AutoModelForTokenClassification.from_pretrained(base_model_path) loaded_model = PeftModel.from_pretrained(base_model, model_path) # Ensure the model is in evaluation mode loaded_model.eval() # Load the tokenizer loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path) # Protein sequence for inference protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence # Tokenize the sequence inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length') # Run the model with torch.no_grad(): logits = loaded_model(**inputs).logits # Get predictions tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens predictions = torch.argmax(logits, dim=2) # Define labels id2label = { 0: "No binding site", 1: "Binding site" } # Print the predicted labels for each token for token, prediction in zip(tokens, predictions[0].numpy()): if token not in ['<pad>', '<cls>', '<eos>']: print((token, id2label[prediction])) ```
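To turn the per-token predictions from the snippet above into residue positions, a small post-processing helper can be used. This is a sketch (the helper is not part of the model repo); it assumes `tokens` and the prediction labels are the parallel sequences produced above, with label `1` meaning "Binding site":

```python
def binding_site_positions(tokens, labels, special=("<pad>", "<cls>", "<eos>")):
    """Return 1-based residue indices predicted as binding sites.

    Special tokens are skipped so the indices refer to positions in the
    original protein sequence, not in the tokenized input.
    """
    positions = []
    residue_idx = 0
    for token, label in zip(tokens, labels):
        if token in special:
            continue
        residue_idx += 1
        if label == 1:  # 1 == "Binding site"
            positions.append(residue_idx)
    return positions

# Toy example: residues 2 and 3 are predicted binding sites.
print(binding_site_positions(["<cls>", "M", "A", "V", "<eos>"], [0, 0, 1, 1, 0]))  # → [2, 3]
```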
jmoney54378256438905/jondurbin_airoboros-c34b-2.2.1-3.75bpw
jmoney54378256438905
2023-09-23T15:26:54Z
6
0
transformers
[ "transformers", "llama", "text-generation", "dataset:jondurbin/airoboros-2.2.1", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-09-23T15:20:08Z
--- license: llama2 datasets: - jondurbin/airoboros-2.2.1 --- ### Overview Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros). This is essentially a minor "fix" branch of [airoboros-c34b-2.2](https://hf.co/jondurbin/airoboros-c34b-2.2) with a few updates, primarily: - [re-generated writing responses](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#re-generated-writing-responses) - [longer contextual blocks](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#longer-contextual-blocks) - [removal of "rp" data](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#rp-category-removed) - [(less aggressive) de-censoring](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#de-censoring) - 5 epochs instead of 3 This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay. Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools! ### Prompt format The prompt format: ``` A chat. USER: {prompt} ASSISTANT: ``` The default system prompt ("A chat.") was used for most of the prompts; however, the training data also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc. Here's another example: ``` A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously. USER: {prompt} ASSISTANT: ``` And a chat scenario that doesn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf): ``` A chat between old friends: Timmy and Tommy. {description of characters} {setting for the chat} Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee." 
Tommy: ``` __*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__ ### Fine tuning info https://wandb.ai/jondurbin/airoboros-c34b-2.2.1/runs/07b08z7m?workspace=user-jondurbin ### Helpful usage tips *The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.* #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure that, if the context is completely unrelated, it doesn't make something up. *The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them. 
- `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of the instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are blueberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. 
In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. 
``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. 
We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. 
Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... 
Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... 
return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context): prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros 2.2 models are built on top of llama-2/codellama. The llama-2 base model has a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta. - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. 
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros). The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
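As a practical note, the closed-context format described in the usage tips above can be assembled programmatically. Here is a minimal, hypothetical helper (not part of the airoboros tooling; just string assembly using the delimiters documented above):

```python
def closed_context_prompt(blocks, instruction):
    """Assemble a closed-context prompt in the BEGININPUT/BEGINCONTEXT format.

    `blocks` is a list of (metadata_dict, text) pairs; `instruction` is the
    question or task to apply across all input blocks.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

# Rebuild the blueberry example from the card.
prompt = closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
print(prompt)
```

The helper only builds the closed-context portion; the surrounding system prompt and USER:/ASSISTANT: wrapper still apply as described in the prompt format section.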
Koshti10/BART_base_Gameplan
Koshti10
2023-09-23T15:21:35Z
4
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-23T14:02:06Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: BART_base_Gameplan results: [] --- # BART_base_Gameplan This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - label_smoothing_factor: 0.1 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
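The hyperparameters listed above map roughly onto a `Seq2SeqTrainingArguments` configuration like the following. This is a sketch under the assumption that the standard `transformers` `Trainer` setup was used; the model, tokenizer, and dataset wiring are omitted, and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="BART_base_Gameplan",  # placeholder output directory
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,        # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,    # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
    label_smoothing_factor=0.1,
)
```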
CyberHarem/shiraishi_tsumugi_theidolmstermillionlive
CyberHarem
2023-09-23T15:20:16Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/shiraishi_tsumugi_theidolmstermillionlive", "license:mit", "region:us" ]
text-to-image
2023-09-23T15:09:26Z
--- license: mit datasets: - CyberHarem/shiraishi_tsumugi_theidolmstermillionlive pipeline_tag: text-to-image tags: - art --- # Lora of shiraishi_tsumugi_theidolmstermillionlive This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file will be used as an embedding, while the safetensors file will be loaded as a LoRA. For example, if you want to use the model from step 7000, you need to download `7000/shiraishi_tsumugi_theidolmstermillionlive.pt` as the embedding and `7000/shiraishi_tsumugi_theidolmstermillionlive.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character. **The best step we recommend is 7000**, with a score of 0.883. The trigger words are: 1. `shiraishi_tsumugi_theidolmstermillionlive` 2. `long_hair, blue_eyes, bangs, blush, hair_ornament, blue_hair, hairclip, breasts` This model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals who are facing application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. 
Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. These are the available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----| | 7500 | 0.879 | [Download](7500/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-7500](7500/previews/pattern_1.png) | ![pattern_2-7500](7500/previews/pattern_2.png) | ![pattern_3-7500](7500/previews/pattern_3.png) | ![pattern_4-7500](7500/previews/pattern_4.png) | ![pattern_5-7500](7500/previews/pattern_5.png) | [<NSFW, click to see>](7500/previews/pattern_6.png) | ![pattern_7-7500](7500/previews/pattern_7.png) | ![bikini-7500](7500/previews/bikini.png) | [<NSFW, click to see>](7500/previews/bondage.png) | [<NSFW, click to see>](7500/previews/free.png) | ![maid-7500](7500/previews/maid.png) | 
![miko-7500](7500/previews/miko.png) | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) | ![suit-7500](7500/previews/suit.png) | ![yukata-7500](7500/previews/yukata.png) | | **7000** | **0.883** | [**Download**](7000/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-7000](7000/previews/pattern_1.png) | ![pattern_2-7000](7000/previews/pattern_2.png) | ![pattern_3-7000](7000/previews/pattern_3.png) | ![pattern_4-7000](7000/previews/pattern_4.png) | ![pattern_5-7000](7000/previews/pattern_5.png) | [<NSFW, click to see>](7000/previews/pattern_6.png) | ![pattern_7-7000](7000/previews/pattern_7.png) | ![bikini-7000](7000/previews/bikini.png) | [<NSFW, click to see>](7000/previews/bondage.png) | [<NSFW, click to see>](7000/previews/free.png) | ![maid-7000](7000/previews/maid.png) | ![miko-7000](7000/previews/miko.png) | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) | ![suit-7000](7000/previews/suit.png) | ![yukata-7000](7000/previews/yukata.png) | | 6500 | 0.866 | [Download](6500/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-6500](6500/previews/pattern_1.png) | ![pattern_2-6500](6500/previews/pattern_2.png) | ![pattern_3-6500](6500/previews/pattern_3.png) | ![pattern_4-6500](6500/previews/pattern_4.png) | ![pattern_5-6500](6500/previews/pattern_5.png) | [<NSFW, click to see>](6500/previews/pattern_6.png) | ![pattern_7-6500](6500/previews/pattern_7.png) | ![bikini-6500](6500/previews/bikini.png) | [<NSFW, click to see>](6500/previews/bondage.png) | [<NSFW, click to see>](6500/previews/free.png) | ![maid-6500](6500/previews/maid.png) | ![miko-6500](6500/previews/miko.png) | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) | ![suit-6500](6500/previews/suit.png) | ![yukata-6500](6500/previews/yukata.png) | | 6000 | 0.821 | [Download](6000/shiraishi_tsumugi_theidolmstermillionlive.zip) | 
![pattern_1-6000](6000/previews/pattern_1.png) | ![pattern_2-6000](6000/previews/pattern_2.png) | ![pattern_3-6000](6000/previews/pattern_3.png) | ![pattern_4-6000](6000/previews/pattern_4.png) | ![pattern_5-6000](6000/previews/pattern_5.png) | [<NSFW, click to see>](6000/previews/pattern_6.png) | ![pattern_7-6000](6000/previews/pattern_7.png) | ![bikini-6000](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) | [<NSFW, click to see>](6000/previews/free.png) | ![maid-6000](6000/previews/maid.png) | ![miko-6000](6000/previews/miko.png) | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) | ![suit-6000](6000/previews/suit.png) | ![yukata-6000](6000/previews/yukata.png) | | 5500 | 0.864 | [Download](5500/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-5500](5500/previews/pattern_1.png) | ![pattern_2-5500](5500/previews/pattern_2.png) | ![pattern_3-5500](5500/previews/pattern_3.png) | ![pattern_4-5500](5500/previews/pattern_4.png) | ![pattern_5-5500](5500/previews/pattern_5.png) | [<NSFW, click to see>](5500/previews/pattern_6.png) | ![pattern_7-5500](5500/previews/pattern_7.png) | ![bikini-5500](5500/previews/bikini.png) | [<NSFW, click to see>](5500/previews/bondage.png) | [<NSFW, click to see>](5500/previews/free.png) | ![maid-5500](5500/previews/maid.png) | ![miko-5500](5500/previews/miko.png) | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) | ![suit-5500](5500/previews/suit.png) | ![yukata-5500](5500/previews/yukata.png) | | 5000 | 0.862 | [Download](5000/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-5000](5000/previews/pattern_1.png) | ![pattern_2-5000](5000/previews/pattern_2.png) | ![pattern_3-5000](5000/previews/pattern_3.png) | ![pattern_4-5000](5000/previews/pattern_4.png) | ![pattern_5-5000](5000/previews/pattern_5.png) | [<NSFW, click to see>](5000/previews/pattern_6.png) | 
![pattern_7-5000](5000/previews/pattern_7.png) | ![bikini-5000](5000/previews/bikini.png) | [<NSFW, click to see>](5000/previews/bondage.png) | [<NSFW, click to see>](5000/previews/free.png) | ![maid-5000](5000/previews/maid.png) | ![miko-5000](5000/previews/miko.png) | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) | ![suit-5000](5000/previews/suit.png) | ![yukata-5000](5000/previews/yukata.png) | | 4500 | 0.824 | [Download](4500/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-4500](4500/previews/pattern_1.png) | ![pattern_2-4500](4500/previews/pattern_2.png) | ![pattern_3-4500](4500/previews/pattern_3.png) | ![pattern_4-4500](4500/previews/pattern_4.png) | ![pattern_5-4500](4500/previews/pattern_5.png) | [<NSFW, click to see>](4500/previews/pattern_6.png) | ![pattern_7-4500](4500/previews/pattern_7.png) | ![bikini-4500](4500/previews/bikini.png) | [<NSFW, click to see>](4500/previews/bondage.png) | [<NSFW, click to see>](4500/previews/free.png) | ![maid-4500](4500/previews/maid.png) | ![miko-4500](4500/previews/miko.png) | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) | ![suit-4500](4500/previews/suit.png) | ![yukata-4500](4500/previews/yukata.png) | | 4000 | 0.797 | [Download](4000/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-4000](4000/previews/pattern_1.png) | ![pattern_2-4000](4000/previews/pattern_2.png) | ![pattern_3-4000](4000/previews/pattern_3.png) | ![pattern_4-4000](4000/previews/pattern_4.png) | ![pattern_5-4000](4000/previews/pattern_5.png) | [<NSFW, click to see>](4000/previews/pattern_6.png) | ![pattern_7-4000](4000/previews/pattern_7.png) | ![bikini-4000](4000/previews/bikini.png) | [<NSFW, click to see>](4000/previews/bondage.png) | [<NSFW, click to see>](4000/previews/free.png) | ![maid-4000](4000/previews/maid.png) | ![miko-4000](4000/previews/miko.png) | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click 
to see>](4000/previews/nude2.png) | ![suit-4000](4000/previews/suit.png) | ![yukata-4000](4000/previews/yukata.png) | | 3500 | 0.847 | [Download](3500/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-3500](3500/previews/pattern_1.png) | ![pattern_2-3500](3500/previews/pattern_2.png) | ![pattern_3-3500](3500/previews/pattern_3.png) | ![pattern_4-3500](3500/previews/pattern_4.png) | ![pattern_5-3500](3500/previews/pattern_5.png) | [<NSFW, click to see>](3500/previews/pattern_6.png) | ![pattern_7-3500](3500/previews/pattern_7.png) | ![bikini-3500](3500/previews/bikini.png) | [<NSFW, click to see>](3500/previews/bondage.png) | [<NSFW, click to see>](3500/previews/free.png) | ![maid-3500](3500/previews/maid.png) | ![miko-3500](3500/previews/miko.png) | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) | ![suit-3500](3500/previews/suit.png) | ![yukata-3500](3500/previews/yukata.png) | | 3000 | 0.857 | [Download](3000/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-3000](3000/previews/pattern_1.png) | ![pattern_2-3000](3000/previews/pattern_2.png) | ![pattern_3-3000](3000/previews/pattern_3.png) | ![pattern_4-3000](3000/previews/pattern_4.png) | ![pattern_5-3000](3000/previews/pattern_5.png) | [<NSFW, click to see>](3000/previews/pattern_6.png) | ![pattern_7-3000](3000/previews/pattern_7.png) | ![bikini-3000](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) | [<NSFW, click to see>](3000/previews/free.png) | ![maid-3000](3000/previews/maid.png) | ![miko-3000](3000/previews/miko.png) | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) | ![suit-3000](3000/previews/suit.png) | ![yukata-3000](3000/previews/yukata.png) | | 2500 | 0.859 | [Download](2500/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-2500](2500/previews/pattern_1.png) | ![pattern_2-2500](2500/previews/pattern_2.png) | 
![pattern_3-2500](2500/previews/pattern_3.png) | ![pattern_4-2500](2500/previews/pattern_4.png) | ![pattern_5-2500](2500/previews/pattern_5.png) | [<NSFW, click to see>](2500/previews/pattern_6.png) | ![pattern_7-2500](2500/previews/pattern_7.png) | ![bikini-2500](2500/previews/bikini.png) | [<NSFW, click to see>](2500/previews/bondage.png) | [<NSFW, click to see>](2500/previews/free.png) | ![maid-2500](2500/previews/maid.png) | ![miko-2500](2500/previews/miko.png) | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) | ![suit-2500](2500/previews/suit.png) | ![yukata-2500](2500/previews/yukata.png) | | 2000 | 0.787 | [Download](2000/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-2000](2000/previews/pattern_1.png) | ![pattern_2-2000](2000/previews/pattern_2.png) | ![pattern_3-2000](2000/previews/pattern_3.png) | ![pattern_4-2000](2000/previews/pattern_4.png) | ![pattern_5-2000](2000/previews/pattern_5.png) | [<NSFW, click to see>](2000/previews/pattern_6.png) | ![pattern_7-2000](2000/previews/pattern_7.png) | ![bikini-2000](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) | [<NSFW, click to see>](2000/previews/free.png) | ![maid-2000](2000/previews/maid.png) | ![miko-2000](2000/previews/miko.png) | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) | ![suit-2000](2000/previews/suit.png) | ![yukata-2000](2000/previews/yukata.png) | | 1500 | 0.814 | [Download](1500/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-1500](1500/previews/pattern_1.png) | ![pattern_2-1500](1500/previews/pattern_2.png) | ![pattern_3-1500](1500/previews/pattern_3.png) | ![pattern_4-1500](1500/previews/pattern_4.png) | ![pattern_5-1500](1500/previews/pattern_5.png) | [<NSFW, click to see>](1500/previews/pattern_6.png) | ![pattern_7-1500](1500/previews/pattern_7.png) | ![bikini-1500](1500/previews/bikini.png) | [<NSFW, click to 
see>](1500/previews/bondage.png) | [<NSFW, click to see>](1500/previews/free.png) | ![maid-1500](1500/previews/maid.png) | ![miko-1500](1500/previews/miko.png) | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) | ![suit-1500](1500/previews/suit.png) | ![yukata-1500](1500/previews/yukata.png) | | 1000 | 0.822 | [Download](1000/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-1000](1000/previews/pattern_1.png) | ![pattern_2-1000](1000/previews/pattern_2.png) | ![pattern_3-1000](1000/previews/pattern_3.png) | ![pattern_4-1000](1000/previews/pattern_4.png) | ![pattern_5-1000](1000/previews/pattern_5.png) | [<NSFW, click to see>](1000/previews/pattern_6.png) | ![pattern_7-1000](1000/previews/pattern_7.png) | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) | [<NSFW, click to see>](1000/previews/free.png) | ![maid-1000](1000/previews/maid.png) | ![miko-1000](1000/previews/miko.png) | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) | ![suit-1000](1000/previews/suit.png) | ![yukata-1000](1000/previews/yukata.png) | | 500 | 0.630 | [Download](500/shiraishi_tsumugi_theidolmstermillionlive.zip) | ![pattern_1-500](500/previews/pattern_1.png) | ![pattern_2-500](500/previews/pattern_2.png) | ![pattern_3-500](500/previews/pattern_3.png) | ![pattern_4-500](500/previews/pattern_4.png) | ![pattern_5-500](500/previews/pattern_5.png) | [<NSFW, click to see>](500/previews/pattern_6.png) | ![pattern_7-500](500/previews/pattern_7.png) | ![bikini-500](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/bondage.png) | [<NSFW, click to see>](500/previews/free.png) | ![maid-500](500/previews/maid.png) | ![miko-500](500/previews/miko.png) | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) | ![suit-500](500/previews/suit.png) | ![yukata-500](500/previews/yukata.png) |
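For reference, the bolded row (step 7000, score 0.883) is simply the checkpoint with the highest score in the table. That selection can be sketched with the scores above:

```python
# Scores per training step, taken from the table above.
scores = {7500: 0.879, 7000: 0.883, 6500: 0.866, 6000: 0.821, 5500: 0.864,
          5000: 0.862, 4500: 0.824, 4000: 0.797, 3500: 0.847, 3000: 0.857,
          2500: 0.859, 2000: 0.787, 1500: 0.814, 1000: 0.822, 500: 0.630}

# The recommended checkpoint is the step whose score is highest.
best_step = max(scores, key=scores.get)
# best_step == 7000
```

A later step is not necessarily better: here 7000 outperforms 7500, which is why the score column, not the step count, drives the recommendation.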
Krishna1350/Practice
Krishna1350
2023-09-23T15:09:17Z
0
0
null
[ "region:us" ]
null
2023-09-23T15:05:54Z
Creating my first repo on Hugging Face
ayoubkirouane/git-base-One-Piece
ayoubkirouane
2023-09-23T15:06:58Z
105
0
transformers
[ "transformers", "pytorch", "git", "image-text-to-text", "vision", "image-to-text", "image-captioning", "en", "dataset:ayoubkirouane/One-Piece-anime-captions", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2023-09-23T11:28:27Z
--- language: en license: mit tags: - vision - image-to-text - image-captioning model_name: microsoft/git-base pipeline_tag: image-to-text library_name: transformers datasets: - ayoubkirouane/One-Piece-anime-captions --- # Model Details + **Model Name**: Git-base-One-Piece + **Base Model**: Microsoft's "git-base" model + **Model Type**: Generative Image-to-Text (GIT) + **Fine-Tuned On**: 'One-Piece-anime-captions' dataset + **Fine-Tuning Purpose**: To generate text captions for images related to the anime series "One Piece." ## Model Description **Git-base-One-Piece** is a fine-tuned variant of Microsoft's **git-base** model, specifically trained for the task of generating descriptive text captions for images from the **One-Piece-anime-captions** dataset. The dataset consists of **856 {image: caption}** pairs, providing a compact, domain-focused training corpus for the model. The model is conditioned on both CLIP image tokens and text tokens and employs a **teacher forcing** training approach. It predicts the next text token while considering the context provided by the image and previous text tokens. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6338c06c107c4835a05699f9/N_yNK2tLabtwmSYAqpTEp.jpeg) ## Limitations + The quality of generated captions may vary depending on the complexity and diversity of images from the **One-Piece-anime-captions** dataset. + The model's output is based on the data it was fine-tuned on, so it may not generalize well to images outside the dataset's domain. Generating highly detailed or contextually accurate captions may still be a challenge. 
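The teacher-forcing setup described above can be sketched in a few lines. This is illustrative only: the token ids are made up, and the real GIT model additionally conditions each prediction on CLIP image tokens.

```python
def teacher_forcing_pairs(caption_ids):
    """At each position, the model sees the ground-truth prefix (not its own
    earlier predictions) and is trained to predict the next token."""
    inputs = caption_ids[:-1]   # context fed to the model at each step
    labels = caption_ids[1:]    # ground-truth next tokens to predict
    return list(zip(inputs, labels))

# Toy caption "<bos> luffy smiling <eos>" as hypothetical token ids.
pairs = teacher_forcing_pairs([101, 7042, 5629, 102])
# pairs == [(101, 7042), (7042, 5629), (5629, 102)]
```

Because the ground-truth prefix is always supplied during training, errors do not compound step to step; at inference time, the model instead feeds back its own previous predictions.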
## Usage ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("image-to-text", model="ayoubkirouane/git-base-One-Piece") ``` **or** ```python # Load model directly from transformers import AutoProcessor, AutoModelForCausalLM processor = AutoProcessor.from_pretrained("ayoubkirouane/git-base-One-Piece") model = AutoModelForCausalLM.from_pretrained("ayoubkirouane/git-base-One-Piece") ```
Nuntea/ppo-LunarLander-v2
Nuntea
2023-09-23T15:05:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T15:05:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 121.16 +/- 117.97 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
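The reported `mean_reward` of 121.16 +/- 117.97 is the mean and (population) standard deviation of episode returns over the evaluation episodes, as computed by Stable-Baselines3's `evaluate_policy`. A sketch of that summary, using a hypothetical list of episode returns:

```python
import math

def summarize_returns(episode_returns):
    """Mean and population standard deviation of a list of episode returns."""
    n = len(episode_returns)
    mean = sum(episode_returns) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in episode_returns) / n)
    return mean, std

# Hypothetical returns from four evaluation episodes.
mean, std = summarize_returns([250.0, -50.0, 120.0, 160.0])
```

A standard deviation nearly as large as the mean, as in this card, signals an agent that sometimes lands cleanly and sometimes crashes rather than one with consistent performance.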
WillyArdiyanto/image_classification
WillyArdiyanto
2023-09-23T14:52:08Z
11
1
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-16T05:33:06Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: image_classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.5625 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4866 - Accuracy: 0.5625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 1.5045 | 0.4875 | | No log | 2.0 | 80 | 1.3562 | 0.5312 | | No log | 3.0 | 120 | 1.5354 | 0.4562 | | No log | 4.0 | 160 | 1.5095 | 0.5062 | | No log | 5.0 | 200 | 1.5644 | 0.475 | | No log | 6.0 | 240 | 1.4651 | 0.5563 | | No log | 7.0 | 280 | 1.4516 | 0.5375 | | No log | 8.0 | 320 | 1.5859 | 0.5188 | | No log | 9.0 | 360 | 1.5498 | 0.5437 | | No log | 10.0 | 400 | 1.5040 | 0.5625 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
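The `linear` scheduler listed in the hyperparameters decays the learning rate linearly from its initial value to zero over training. A sketch of that schedule (a simplified version of the Hugging Face `linear` schedule, here with zero warmup) using this run's values — base LR 1e-4 over 400 steps (10 epochs × 40 steps):

```python
def linear_lr(step, total_steps, base_lr=1e-4, warmup_steps=0):
    """Linear warmup (optional) followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 400  # 10 epochs x 40 steps per epoch, matching the table above
lrs = [linear_lr(s, total) for s in (0, 200, 400)]
# lrs == [1e-4, 5e-5, 0.0]
```

By the final epoch the learning rate is near zero, which is why the later rows of the results table change more slowly than the early ones.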
chanmuzi/flan-t5-large-financial-phrasebank-lora
chanmuzi
2023-09-23T14:31:54Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-23T13:58:47Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
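The quantization settings listed above describe an 8-bit bitsandbytes configuration (the `bnb_4bit_*` fields are inactive since `load_in_4bit` is False). As a sketch, the same settings expressed as the keyword arguments one would pass to `transformers.BitsAndBytesConfig` (shown as a plain dict here so it stands alone):

```python
# 8-bit quantization settings mirroring the training-procedure list above.
bnb_config = {
    "load_in_8bit": True,
    "load_in_4bit": False,
    "llm_int8_threshold": 6.0,              # outlier threshold for int8 matmul
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "fp4",           # unused: 4-bit loading is off
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float32",
}
assert bnb_config["load_in_8bit"] and not bnb_config["load_in_4bit"]
```

Loading the base model with this config and then attaching the PEFT adapter reproduces the memory footprint used during training.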
ldos/text_shortening_model_v53
ldos
2023-09-23T14:31:18Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-23T12:13:33Z
--- license: apache-2.0 base_model: t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: text_shortening_model_v53 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_shortening_model_v53 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1989 - Rouge1: 0.5027 - Rouge2: 0.2828 - Rougel: 0.4438 - Rougelsum: 0.4452 - Bert precision: 0.8749 - Bert recall: 0.875 - Average word count: 8.291 - Max word count: 16 - Min word count: 4 - Average token count: 12.8757 - % shortened texts with length > 12: 7.672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:| | 1.7343 | 1.0 | 83 | 1.2874 | 0.5099 | 0.2905 | 0.4503 | 0.4503 | 0.8714 | 0.878 | 9.1376 | 17 | 4 | 13.7937 | 13.4921 | | 1.2551 | 2.0 | 166 | 1.1755 | 0.5197 | 0.298 | 0.4583 | 0.4595 | 0.8747 | 0.8816 | 8.9974 | 17 | 4 | 13.7513 | 12.6984 | | 1.0452 
| 3.0 | 249 | 1.1440 | 0.5182 | 0.3011 | 0.4573 | 0.4583 | 0.8759 | 0.8801 | 8.8307 | 18 | 4 | 13.3545 | 14.5503 | | 0.9043 | 4.0 | 332 | 1.1351 | 0.5321 | 0.3168 | 0.4774 | 0.4785 | 0.8783 | 0.8827 | 8.7619 | 16 | 4 | 13.4286 | 13.2275 | | 0.7933 | 5.0 | 415 | 1.1381 | 0.5131 | 0.2976 | 0.4577 | 0.4572 | 0.8786 | 0.8771 | 8.3148 | 18 | 4 | 12.6402 | 8.9947 | | 0.6859 | 6.0 | 498 | 1.1688 | 0.5357 | 0.3133 | 0.4776 | 0.4778 | 0.8812 | 0.8829 | 8.4762 | 18 | 4 | 12.9339 | 10.0529 | | 0.6069 | 7.0 | 581 | 1.2138 | 0.5228 | 0.2959 | 0.4562 | 0.4568 | 0.8768 | 0.88 | 8.6481 | 17 | 3 | 13.1429 | 12.1693 | | 0.5475 | 8.0 | 664 | 1.2333 | 0.5275 | 0.3051 | 0.4685 | 0.469 | 0.8787 | 0.8818 | 8.7143 | 16 | 4 | 13.209 | 11.6402 | | 0.4898 | 9.0 | 747 | 1.2889 | 0.5194 | 0.296 | 0.4603 | 0.4603 | 0.8785 | 0.8797 | 8.4947 | 15 | 4 | 12.9312 | 11.3757 | | 0.4456 | 10.0 | 830 | 1.3068 | 0.5154 | 0.2923 | 0.4634 | 0.4647 | 0.8796 | 0.8788 | 8.3413 | 16 | 4 | 12.8201 | 10.8466 | | 0.408 | 11.0 | 913 | 1.3598 | 0.5118 | 0.2959 | 0.4583 | 0.459 | 0.8787 | 0.8763 | 8.1481 | 16 | 4 | 12.627 | 9.2593 | | 0.3673 | 12.0 | 996 | 1.3905 | 0.5193 | 0.2999 | 0.4673 | 0.468 | 0.8808 | 0.8782 | 8.2116 | 16 | 3 | 12.627 | 8.2011 | | 0.3367 | 13.0 | 1079 | 1.4436 | 0.516 | 0.2977 | 0.4588 | 0.4595 | 0.8785 | 0.879 | 8.4259 | 17 | 4 | 12.9974 | 9.7884 | | 0.3113 | 14.0 | 1162 | 1.4654 | 0.5077 | 0.2884 | 0.4506 | 0.4511 | 0.8759 | 0.8776 | 8.4815 | 17 | 3 | 13.1481 | 10.3175 | | 0.2844 | 15.0 | 1245 | 1.4895 | 0.5132 | 0.2924 | 0.4538 | 0.4549 | 0.8759 | 0.8786 | 8.5185 | 17 | 3 | 13.1349 | 9.5238 | | 0.2684 | 16.0 | 1328 | 1.5345 | 0.5065 | 0.2933 | 0.4533 | 0.4539 | 0.8761 | 0.8767 | 8.2275 | 16 | 3 | 12.7196 | 8.7302 | | 0.2396 | 17.0 | 1411 | 1.5892 | 0.5035 | 0.2835 | 0.444 | 0.4449 | 0.8757 | 0.8743 | 8.082 | 17 | 3 | 12.5503 | 7.9365 | | 0.223 | 18.0 | 1494 | 1.5954 | 0.4944 | 0.2802 | 0.4407 | 0.4414 | 0.8729 | 0.8739 | 8.3519 | 17 | 4 | 12.8836 | 10.582 | | 0.2119 | 19.0 | 1577 | 1.6106 
| 0.5042 | 0.286 | 0.4505 | 0.4515 | 0.8772 | 0.8752 | 8.0212 | 16 | 3 | 12.5265 | 7.4074 | | 0.2009 | 20.0 | 1660 | 1.6218 | 0.5002 | 0.2767 | 0.444 | 0.445 | 0.8755 | 0.874 | 8.1667 | 15 | 3 | 12.6746 | 8.9947 | | 0.1825 | 21.0 | 1743 | 1.6545 | 0.5 | 0.2868 | 0.4443 | 0.4449 | 0.8755 | 0.8749 | 8.1958 | 16 | 3 | 12.7249 | 8.9947 | | 0.1693 | 22.0 | 1826 | 1.7102 | 0.5014 | 0.2837 | 0.4453 | 0.4462 | 0.8741 | 0.8737 | 8.246 | 16 | 3 | 12.8466 | 10.0529 | | 0.1584 | 23.0 | 1909 | 1.7314 | 0.4947 | 0.2773 | 0.4375 | 0.4381 | 0.8728 | 0.8722 | 8.1878 | 15 | 3 | 12.7196 | 7.672 | | 0.1484 | 24.0 | 1992 | 1.7512 | 0.5004 | 0.2854 | 0.4461 | 0.447 | 0.8739 | 0.8744 | 8.2884 | 16 | 3 | 13.0053 | 7.9365 | | 0.144 | 25.0 | 2075 | 1.7862 | 0.5023 | 0.2847 | 0.4451 | 0.4455 | 0.8758 | 0.8756 | 8.2302 | 16 | 3 | 12.8148 | 8.9947 | | 0.1302 | 26.0 | 2158 | 1.8208 | 0.5041 | 0.2901 | 0.4471 | 0.4478 | 0.8739 | 0.8753 | 8.4153 | 17 | 3 | 13.0238 | 10.8466 | | 0.1231 | 27.0 | 2241 | 1.8258 | 0.5003 | 0.2835 | 0.4449 | 0.4457 | 0.8739 | 0.8738 | 8.2354 | 16 | 4 | 12.8016 | 7.9365 | | 0.1233 | 28.0 | 2324 | 1.8340 | 0.5047 | 0.2867 | 0.4432 | 0.4446 | 0.8743 | 0.8757 | 8.4206 | 17 | 4 | 12.9894 | 9.7884 | | 0.1114 | 29.0 | 2407 | 1.8461 | 0.4898 | 0.2731 | 0.4357 | 0.4358 | 0.8722 | 0.8722 | 8.2407 | 16 | 3 | 12.8466 | 7.9365 | | 0.1084 | 30.0 | 2490 | 1.8817 | 0.4973 | 0.2805 | 0.4381 | 0.4394 | 0.8741 | 0.8721 | 8.1349 | 16 | 3 | 12.5529 | 6.8783 | | 0.1031 | 31.0 | 2573 | 1.9454 | 0.489 | 0.2779 | 0.4352 | 0.4365 | 0.8725 | 0.8728 | 8.2989 | 16 | 4 | 12.8466 | 8.4656 | | 0.1008 | 32.0 | 2656 | 1.9150 | 0.4936 | 0.277 | 0.4415 | 0.4427 | 0.8741 | 0.873 | 8.2011 | 16 | 3 | 12.7249 | 8.7302 | | 0.0929 | 33.0 | 2739 | 1.9444 | 0.4905 | 0.2723 | 0.4337 | 0.4344 | 0.8722 | 0.873 | 8.4233 | 16 | 3 | 12.9074 | 9.2593 | | 0.0972 | 34.0 | 2822 | 1.9321 | 0.4877 | 0.2671 | 0.4336 | 0.4345 | 0.8721 | 0.872 | 8.3413 | 16 | 4 | 12.9153 | 8.9947 | | 0.0941 | 35.0 | 2905 | 1.9430 | 0.5018 | 
0.2808 | 0.4467 | 0.4477 | 0.8757 | 0.8742 | 8.2169 | 15 | 4 | 12.7381 | 7.1429 | | 0.0856 | 36.0 | 2988 | 1.9623 | 0.4981 | 0.282 | 0.4463 | 0.4474 | 0.875 | 0.8737 | 8.2989 | 16 | 4 | 12.7672 | 8.9947 | | 0.0842 | 37.0 | 3071 | 1.9739 | 0.4992 | 0.2812 | 0.442 | 0.4435 | 0.8746 | 0.8741 | 8.3122 | 16 | 4 | 12.828 | 8.9947 | | 0.0833 | 38.0 | 3154 | 2.0193 | 0.5008 | 0.2805 | 0.4458 | 0.4469 | 0.8749 | 0.8756 | 8.373 | 16 | 4 | 12.9153 | 10.582 | | 0.0783 | 39.0 | 3237 | 2.0161 | 0.499 | 0.2829 | 0.4425 | 0.4435 | 0.8753 | 0.8735 | 8.1217 | 16 | 4 | 12.7222 | 7.1429 | | 0.0771 | 40.0 | 3320 | 1.9768 | 0.498 | 0.2803 | 0.4437 | 0.4449 | 0.8741 | 0.8731 | 8.2963 | 16 | 4 | 12.8095 | 8.9947 | | 0.0744 | 41.0 | 3403 | 2.0104 | 0.5002 | 0.2815 | 0.4404 | 0.4411 | 0.8745 | 0.875 | 8.4021 | 16 | 4 | 12.9603 | 9.5238 | | 0.0708 | 42.0 | 3486 | 2.0090 | 0.4938 | 0.2714 | 0.4347 | 0.435 | 0.8722 | 0.872 | 8.328 | 16 | 4 | 12.8783 | 8.9947 | | 0.0686 | 43.0 | 3569 | 2.0328 | 0.4958 | 0.2759 | 0.4386 | 0.4399 | 0.8727 | 0.8732 | 8.4339 | 16 | 4 | 12.9894 | 8.7302 | | 0.073 | 44.0 | 3652 | 2.0284 | 0.4924 | 0.2706 | 0.4347 | 0.4359 | 0.8724 | 0.8731 | 8.4339 | 16 | 4 | 12.9603 | 7.672 | | 0.0686 | 45.0 | 3735 | 2.0392 | 0.4952 | 0.2794 | 0.4415 | 0.4431 | 0.8732 | 0.8722 | 8.2619 | 16 | 4 | 12.8598 | 7.672 | | 0.0659 | 46.0 | 3818 | 2.0563 | 0.4906 | 0.2729 | 0.436 | 0.4375 | 0.8729 | 0.8714 | 8.1323 | 15 | 4 | 12.6032 | 6.0847 | | 0.0651 | 47.0 | 3901 | 2.0454 | 0.4903 | 0.2707 | 0.4356 | 0.4369 | 0.8725 | 0.8701 | 8.1005 | 16 | 3 | 12.6667 | 5.8201 | | 0.0655 | 48.0 | 3984 | 2.0500 | 0.4971 | 0.2784 | 0.4419 | 0.4434 | 0.874 | 0.8733 | 8.2434 | 16 | 4 | 12.8228 | 7.4074 | | 0.0635 | 49.0 | 4067 | 2.0376 | 0.504 | 0.2831 | 0.446 | 0.4472 | 0.8733 | 0.8739 | 8.3968 | 16 | 4 | 12.9735 | 8.7302 | | 0.063 | 50.0 | 4150 | 2.0596 | 0.4878 | 0.2623 | 0.426 | 0.4276 | 0.8699 | 0.8726 | 8.672 | 16 | 4 | 13.2037 | 11.1111 | | 0.0589 | 51.0 | 4233 | 2.0682 | 0.4932 | 0.2734 | 0.4384 | 
0.4393 | 0.8738 | 0.8729 | 8.2963 | 16 | 4 | 12.7407 | 7.1429 | | 0.0566 | 52.0 | 4316 | 2.0692 | 0.5 | 0.2803 | 0.4445 | 0.4463 | 0.8745 | 0.8753 | 8.463 | 16 | 3 | 13.0185 | 10.8466 | | 0.0558 | 53.0 | 4399 | 2.1003 | 0.5023 | 0.279 | 0.4421 | 0.443 | 0.874 | 0.8757 | 8.5265 | 16 | 3 | 13.0952 | 10.8466 | | 0.0545 | 54.0 | 4482 | 2.0992 | 0.5027 | 0.2801 | 0.4428 | 0.444 | 0.8745 | 0.8765 | 8.5053 | 16 | 3 | 12.9815 | 10.8466 | | 0.0573 | 55.0 | 4565 | 2.0713 | 0.4971 | 0.2786 | 0.4385 | 0.4409 | 0.8735 | 0.8735 | 8.3307 | 16 | 3 | 12.8042 | 9.2593 | | 0.0549 | 56.0 | 4648 | 2.1003 | 0.4988 | 0.2782 | 0.4388 | 0.4399 | 0.8727 | 0.8745 | 8.4762 | 16 | 3 | 13.0661 | 10.3175 | | 0.0572 | 57.0 | 4731 | 2.0828 | 0.5 | 0.2806 | 0.4449 | 0.4456 | 0.8741 | 0.8737 | 8.2407 | 16 | 3 | 12.8413 | 8.4656 | | 0.0549 | 58.0 | 4814 | 2.0845 | 0.5049 | 0.2829 | 0.442 | 0.4421 | 0.8742 | 0.8752 | 8.3783 | 17 | 3 | 12.9577 | 9.7884 | | 0.0542 | 59.0 | 4897 | 2.0775 | 0.5088 | 0.2921 | 0.4506 | 0.4518 | 0.8757 | 0.8774 | 8.5 | 16 | 4 | 13.1614 | 10.582 | | 0.0517 | 60.0 | 4980 | 2.1092 | 0.504 | 0.2874 | 0.4467 | 0.4475 | 0.8757 | 0.8753 | 8.2381 | 16 | 3 | 12.7646 | 8.9947 | | 0.0531 | 61.0 | 5063 | 2.1049 | 0.5004 | 0.2832 | 0.4444 | 0.4456 | 0.8724 | 0.8743 | 8.4921 | 16 | 4 | 13.0741 | 10.8466 | | 0.0516 | 62.0 | 5146 | 2.1202 | 0.5042 | 0.2822 | 0.4475 | 0.448 | 0.8755 | 0.8752 | 8.2725 | 16 | 3 | 12.8307 | 8.4656 | | 0.0481 | 63.0 | 5229 | 2.1460 | 0.5047 | 0.284 | 0.4428 | 0.443 | 0.8729 | 0.8754 | 8.5503 | 16 | 4 | 13.2169 | 10.0529 | | 0.0456 | 64.0 | 5312 | 2.1436 | 0.5015 | 0.2802 | 0.4419 | 0.4429 | 0.8733 | 0.8743 | 8.4259 | 16 | 4 | 12.9444 | 9.7884 | | 0.048 | 65.0 | 5395 | 2.1405 | 0.5114 | 0.2882 | 0.4503 | 0.4516 | 0.8754 | 0.8768 | 8.4365 | 16 | 4 | 13.0529 | 10.3175 | | 0.0467 | 66.0 | 5478 | 2.1556 | 0.5098 | 0.2866 | 0.4477 | 0.4488 | 0.8757 | 0.876 | 8.3386 | 16 | 4 | 12.8704 | 9.5238 | | 0.0471 | 67.0 | 5561 | 2.1712 | 0.5155 | 0.2901 | 0.4532 | 0.4546 | 
0.8752 | 0.8781 | 8.6032 | 16 | 4 | 13.2302 | 12.4339 | | 0.0434 | 68.0 | 5644 | 2.1529 | 0.5104 | 0.2889 | 0.4494 | 0.4502 | 0.8748 | 0.876 | 8.4444 | 16 | 4 | 13.037 | 10.0529 | | 0.0447 | 69.0 | 5727 | 2.1538 | 0.5039 | 0.2819 | 0.4431 | 0.4445 | 0.8742 | 0.8746 | 8.3598 | 16 | 4 | 12.8995 | 10.8466 | | 0.0464 | 70.0 | 5810 | 2.1497 | 0.5095 | 0.2868 | 0.4499 | 0.4514 | 0.875 | 0.8757 | 8.3624 | 15 | 4 | 13.0053 | 8.7302 | | 0.0445 | 71.0 | 5893 | 2.1609 | 0.5082 | 0.2865 | 0.4461 | 0.447 | 0.8742 | 0.8771 | 8.5873 | 15 | 4 | 13.2037 | 10.8466 | | 0.0443 | 72.0 | 5976 | 2.1532 | 0.5015 | 0.2799 | 0.4448 | 0.4458 | 0.8744 | 0.8749 | 8.3757 | 15 | 3 | 12.8942 | 10.0529 | | 0.0433 | 73.0 | 6059 | 2.1533 | 0.4969 | 0.2772 | 0.441 | 0.4421 | 0.8739 | 0.8733 | 8.2751 | 15 | 3 | 12.7143 | 10.0529 | | 0.0429 | 74.0 | 6142 | 2.1663 | 0.501 | 0.2809 | 0.4435 | 0.4448 | 0.8753 | 0.8751 | 8.2619 | 15 | 3 | 12.7778 | 8.7302 | | 0.0427 | 75.0 | 6225 | 2.1336 | 0.5069 | 0.2871 | 0.4517 | 0.4528 | 0.8759 | 0.876 | 8.2487 | 16 | 3 | 12.8148 | 9.2593 | | 0.0446 | 76.0 | 6308 | 2.1338 | 0.5135 | 0.2959 | 0.4575 | 0.4587 | 0.8768 | 0.8774 | 8.3492 | 16 | 4 | 12.9233 | 9.2593 | | 0.0438 | 77.0 | 6391 | 2.1349 | 0.5132 | 0.2961 | 0.457 | 0.4578 | 0.877 | 0.8779 | 8.3545 | 15 | 4 | 12.9233 | 10.0529 | | 0.0436 | 78.0 | 6474 | 2.1331 | 0.5106 | 0.2914 | 0.4534 | 0.4544 | 0.8757 | 0.8768 | 8.4286 | 15 | 3 | 12.9762 | 10.8466 | | 0.0432 | 79.0 | 6557 | 2.1389 | 0.5086 | 0.286 | 0.4523 | 0.4529 | 0.8763 | 0.8762 | 8.2434 | 15 | 3 | 12.7487 | 9.5238 | | 0.0414 | 80.0 | 6640 | 2.1596 | 0.5084 | 0.2883 | 0.4506 | 0.4517 | 0.8759 | 0.8765 | 8.3624 | 16 | 3 | 12.8862 | 9.2593 | | 0.0393 | 81.0 | 6723 | 2.1762 | 0.5048 | 0.2851 | 0.4495 | 0.4501 | 0.8759 | 0.8764 | 8.2513 | 16 | 3 | 12.7857 | 8.4656 | | 0.0407 | 82.0 | 6806 | 2.1854 | 0.5055 | 0.2867 | 0.4477 | 0.449 | 0.8758 | 0.8762 | 8.3122 | 16 | 4 | 12.8228 | 8.4656 | | 0.0396 | 83.0 | 6889 | 2.1720 | 0.4995 | 0.2774 | 0.4402 | 0.4413 | 
0.874 | 0.8748 | 8.3862 | 16 | 4 | 12.9656 | 9.7884 | | 0.0342 | 84.0 | 6972 | 2.2038 | 0.5046 | 0.2849 | 0.4494 | 0.4506 | 0.8759 | 0.8759 | 8.2513 | 16 | 4 | 12.8069 | 8.2011 | | 0.0415 | 85.0 | 7055 | 2.1893 | 0.5077 | 0.2847 | 0.4503 | 0.4517 | 0.8762 | 0.8759 | 8.2513 | 16 | 4 | 12.791 | 8.7302 | | 0.0388 | 86.0 | 7138 | 2.1811 | 0.5065 | 0.2882 | 0.4503 | 0.4512 | 0.8758 | 0.8764 | 8.3228 | 15 | 4 | 12.9101 | 8.9947 | | 0.0411 | 87.0 | 7221 | 2.1794 | 0.5057 | 0.2853 | 0.4526 | 0.4546 | 0.8764 | 0.8753 | 8.2222 | 15 | 4 | 12.7434 | 8.2011 | | 0.0358 | 88.0 | 7304 | 2.2025 | 0.5031 | 0.283 | 0.4486 | 0.4505 | 0.8753 | 0.8747 | 8.2487 | 15 | 4 | 12.8122 | 7.9365 | | 0.0388 | 89.0 | 7387 | 2.1994 | 0.5021 | 0.2818 | 0.4437 | 0.4453 | 0.8745 | 0.8742 | 8.3386 | 16 | 4 | 12.873 | 8.9947 | | 0.0372 | 90.0 | 7470 | 2.2076 | 0.5055 | 0.2843 | 0.4478 | 0.449 | 0.8754 | 0.8754 | 8.3492 | 16 | 4 | 12.9762 | 8.9947 | | 0.0352 | 91.0 | 7553 | 2.2038 | 0.5077 | 0.2891 | 0.4504 | 0.4521 | 0.8755 | 0.8767 | 8.4153 | 16 | 4 | 13.0423 | 10.8466 | | 0.0369 | 92.0 | 7636 | 2.2122 | 0.5086 | 0.2877 | 0.4507 | 0.4522 | 0.8759 | 0.8757 | 8.3175 | 16 | 4 | 12.8757 | 7.9365 | | 0.0379 | 93.0 | 7719 | 2.2121 | 0.5047 | 0.2877 | 0.4501 | 0.4517 | 0.8757 | 0.8763 | 8.3122 | 15 | 4 | 12.8519 | 8.4656 | | 0.0367 | 94.0 | 7802 | 2.1979 | 0.5098 | 0.2897 | 0.4524 | 0.4534 | 0.8776 | 0.8767 | 8.2143 | 15 | 4 | 12.7487 | 7.672 | | 0.0372 | 95.0 | 7885 | 2.1951 | 0.5003 | 0.2825 | 0.445 | 0.4467 | 0.8743 | 0.8755 | 8.4074 | 16 | 3 | 12.9894 | 10.0529 | | 0.0358 | 96.0 | 7968 | 2.2210 | 0.505 | 0.2884 | 0.4496 | 0.4511 | 0.8763 | 0.8759 | 8.3307 | 15 | 3 | 12.8995 | 9.5238 | | 0.0375 | 97.0 | 8051 | 2.2113 | 0.5084 | 0.2869 | 0.452 | 0.4528 | 0.8761 | 0.8767 | 8.3307 | 16 | 3 | 12.9471 | 9.2593 | | 0.0351 | 98.0 | 8134 | 2.2248 | 0.5065 | 0.2866 | 0.4521 | 0.4535 | 0.8765 | 0.8758 | 8.2751 | 15 | 4 | 12.8466 | 9.2593 | | 0.0363 | 99.0 | 8217 | 2.2332 | 0.5013 | 0.2784 | 0.4435 | 0.4448 | 0.8747 
| 0.8755 | 8.3968 | 16 | 3 | 13.0635 | 10.582 | | 0.0346 | 100.0 | 8300 | 2.2486 | 0.4991 | 0.2774 | 0.4438 | 0.4454 | 0.8749 | 0.8737 | 8.172 | 15 | 3 | 12.7937 | 7.672 | | 0.0323 | 101.0 | 8383 | 2.2460 | 0.5062 | 0.2857 | 0.4515 | 0.4524 | 0.8758 | 0.8755 | 8.3201 | 16 | 4 | 12.836 | 10.3175 | | 0.0326 | 102.0 | 8466 | 2.2416 | 0.5046 | 0.2838 | 0.4504 | 0.452 | 0.8755 | 0.8753 | 8.3069 | 16 | 4 | 12.8783 | 9.5238 | | 0.0339 | 103.0 | 8549 | 2.2407 | 0.4937 | 0.2732 | 0.4389 | 0.44 | 0.8737 | 0.8733 | 8.3016 | 16 | 4 | 12.9206 | 9.2593 | | 0.0343 | 104.0 | 8632 | 2.2370 | 0.5006 | 0.2801 | 0.4449 | 0.4464 | 0.8752 | 0.8739 | 8.209 | 16 | 4 | 12.8122 | 8.2011 | | 0.0325 | 105.0 | 8715 | 2.2509 | 0.5031 | 0.2825 | 0.446 | 0.4469 | 0.8743 | 0.8738 | 8.2884 | 16 | 4 | 12.8651 | 9.2593 | | 0.031 | 106.0 | 8798 | 2.2549 | 0.5041 | 0.2815 | 0.4469 | 0.4482 | 0.8751 | 0.8747 | 8.2884 | 16 | 4 | 12.8862 | 9.5238 | | 0.032 | 107.0 | 8881 | 2.2694 | 0.5041 | 0.2827 | 0.4474 | 0.4482 | 0.8749 | 0.8749 | 8.3122 | 16 | 4 | 12.873 | 9.2593 | | 0.0335 | 108.0 | 8964 | 2.2563 | 0.5036 | 0.2841 | 0.4486 | 0.4492 | 0.8746 | 0.8756 | 8.4603 | 16 | 4 | 13.0317 | 10.582 | | 0.032 | 109.0 | 9047 | 2.2470 | 0.5053 | 0.2849 | 0.4488 | 0.4491 | 0.8746 | 0.8757 | 8.455 | 16 | 4 | 13.0053 | 10.582 | | 0.031 | 110.0 | 9130 | 2.2412 | 0.5023 | 0.2815 | 0.4457 | 0.4467 | 0.8745 | 0.8744 | 8.2989 | 16 | 4 | 12.8175 | 8.4656 | | 0.034 | 111.0 | 9213 | 2.2642 | 0.5014 | 0.2809 | 0.4434 | 0.4447 | 0.8737 | 0.875 | 8.4894 | 16 | 4 | 13.0767 | 10.582 | | 0.0357 | 112.0 | 9296 | 2.2428 | 0.4985 | 0.2804 | 0.443 | 0.4438 | 0.8745 | 0.8739 | 8.2593 | 16 | 4 | 12.7619 | 8.4656 | | 0.036 | 113.0 | 9379 | 2.2353 | 0.5022 | 0.2829 | 0.4451 | 0.4456 | 0.875 | 0.8746 | 8.2804 | 16 | 4 | 12.8016 | 8.7302 | | 0.0298 | 114.0 | 9462 | 2.2445 | 0.5025 | 0.2823 | 0.444 | 0.4451 | 0.8742 | 0.8749 | 8.3783 | 16 | 4 | 12.9709 | 9.5238 | | 0.0329 | 115.0 | 9545 | 2.2486 | 0.5033 | 0.2803 | 0.443 | 0.4437 | 0.8747 | 
0.8747 | 8.3175 | 16 | 4 | 12.881 | 8.9947 | | 0.0358 | 116.0 | 9628 | 2.2347 | 0.4977 | 0.2756 | 0.4394 | 0.4401 | 0.8739 | 0.874 | 8.3492 | 16 | 4 | 12.9312 | 9.2593 | | 0.031 | 117.0 | 9711 | 2.2403 | 0.5049 | 0.2848 | 0.4468 | 0.4474 | 0.8753 | 0.8756 | 8.2963 | 16 | 4 | 12.9418 | 8.9947 | | 0.0332 | 118.0 | 9794 | 2.2444 | 0.5037 | 0.2842 | 0.4467 | 0.447 | 0.875 | 0.8753 | 8.3439 | 16 | 3 | 12.9815 | 9.2593 | | 0.0319 | 119.0 | 9877 | 2.2415 | 0.5055 | 0.2869 | 0.4479 | 0.4486 | 0.8755 | 0.8759 | 8.3677 | 16 | 3 | 12.9947 | 9.7884 | | 0.0313 | 120.0 | 9960 | 2.2469 | 0.5062 | 0.286 | 0.4471 | 0.4473 | 0.8762 | 0.8764 | 8.4021 | 16 | 4 | 12.9894 | 9.2593 | | 0.0324 | 121.0 | 10043 | 2.2514 | 0.508 | 0.2859 | 0.4489 | 0.4498 | 0.8762 | 0.8761 | 8.3757 | 16 | 4 | 12.963 | 9.5238 | | 0.0311 | 122.0 | 10126 | 2.2604 | 0.5049 | 0.2836 | 0.4462 | 0.4469 | 0.8764 | 0.8761 | 8.3228 | 16 | 4 | 12.8783 | 8.7302 | | 0.0299 | 123.0 | 10209 | 2.2702 | 0.5048 | 0.2829 | 0.4419 | 0.443 | 0.8756 | 0.8751 | 8.2804 | 16 | 4 | 12.8069 | 8.9947 | | 0.0296 | 124.0 | 10292 | 2.2743 | 0.5072 | 0.2851 | 0.4445 | 0.4453 | 0.8758 | 0.8761 | 8.336 | 16 | 4 | 12.8968 | 9.2593 | | 0.029 | 125.0 | 10375 | 2.2657 | 0.5086 | 0.2843 | 0.4461 | 0.4473 | 0.8765 | 0.8763 | 8.2751 | 16 | 4 | 12.8757 | 8.9947 | | 0.0302 | 126.0 | 10458 | 2.2688 | 0.5064 | 0.2821 | 0.4445 | 0.4452 | 0.8749 | 0.8763 | 8.4101 | 16 | 4 | 13.0132 | 10.3175 | | 0.0291 | 127.0 | 10541 | 2.2663 | 0.5028 | 0.2835 | 0.4441 | 0.4454 | 0.8744 | 0.8758 | 8.4312 | 16 | 4 | 13.0635 | 10.0529 | | 0.0341 | 128.0 | 10624 | 2.2517 | 0.4982 | 0.2786 | 0.4392 | 0.4404 | 0.873 | 0.875 | 8.4762 | 16 | 4 | 13.1111 | 9.7884 | | 0.0327 | 129.0 | 10707 | 2.2374 | 0.5031 | 0.2825 | 0.4433 | 0.4441 | 0.8744 | 0.8752 | 8.3783 | 16 | 4 | 12.963 | 9.2593 | | 0.0315 | 130.0 | 10790 | 2.2319 | 0.5078 | 0.2877 | 0.4493 | 0.4501 | 0.876 | 0.8758 | 8.2646 | 16 | 4 | 12.8095 | 8.2011 | | 0.0284 | 131.0 | 10873 | 2.2269 | 0.502 | 0.2808 | 0.4434 | 
0.4439 | 0.8745 | 0.8751 | 8.3545 | 16 | 4 | 12.9392 | 8.9947 | | 0.0303 | 132.0 | 10956 | 2.2163 | 0.5025 | 0.2809 | 0.445 | 0.4457 | 0.8751 | 0.8751 | 8.3042 | 16 | 4 | 12.8836 | 8.4656 | | 0.03 | 133.0 | 11039 | 2.2104 | 0.5027 | 0.2821 | 0.4441 | 0.4452 | 0.8743 | 0.8756 | 8.4259 | 16 | 4 | 13.0265 | 9.5238 | | 0.0285 | 134.0 | 11122 | 2.2115 | 0.507 | 0.2868 | 0.448 | 0.4492 | 0.8756 | 0.8767 | 8.4365 | 16 | 4 | 13.0344 | 9.5238 | | 0.0306 | 135.0 | 11205 | 2.2107 | 0.5057 | 0.284 | 0.445 | 0.4459 | 0.8751 | 0.8755 | 8.4074 | 16 | 4 | 12.9683 | 8.7302 | | 0.0302 | 136.0 | 11288 | 2.2062 | 0.5064 | 0.2855 | 0.4473 | 0.4484 | 0.8747 | 0.8759 | 8.4259 | 16 | 4 | 13.0317 | 8.9947 | | 0.0293 | 137.0 | 11371 | 2.2063 | 0.5031 | 0.2836 | 0.4456 | 0.4468 | 0.8747 | 0.8755 | 8.373 | 16 | 4 | 12.9735 | 8.9947 | | 0.0284 | 138.0 | 11454 | 2.2110 | 0.5025 | 0.2846 | 0.4456 | 0.4464 | 0.8742 | 0.8755 | 8.3995 | 16 | 4 | 13.0026 | 8.9947 | | 0.0288 | 139.0 | 11537 | 2.2091 | 0.5038 | 0.2844 | 0.4463 | 0.4474 | 0.8751 | 0.8756 | 8.3492 | 16 | 4 | 12.9048 | 8.9947 | | 0.0259 | 140.0 | 11620 | 2.2081 | 0.5058 | 0.2868 | 0.4472 | 0.448 | 0.8755 | 0.876 | 8.3757 | 16 | 4 | 12.9365 | 8.9947 | | 0.0299 | 141.0 | 11703 | 2.2055 | 0.5086 | 0.2858 | 0.4481 | 0.4491 | 0.8757 | 0.8758 | 8.3439 | 16 | 4 | 12.881 | 9.2593 | | 0.0292 | 142.0 | 11786 | 2.2054 | 0.505 | 0.2834 | 0.445 | 0.4465 | 0.8751 | 0.875 | 8.2963 | 16 | 4 | 12.8492 | 7.9365 | | 0.0281 | 143.0 | 11869 | 2.2051 | 0.503 | 0.2819 | 0.4433 | 0.4446 | 0.8747 | 0.8748 | 8.3201 | 16 | 4 | 12.8783 | 8.2011 | | 0.0287 | 144.0 | 11952 | 2.2054 | 0.5042 | 0.2832 | 0.4448 | 0.4461 | 0.8749 | 0.8749 | 8.3175 | 16 | 4 | 12.8704 | 7.9365 | | 0.0307 | 145.0 | 12035 | 2.2022 | 0.5037 | 0.2835 | 0.4441 | 0.4453 | 0.8748 | 0.875 | 8.3254 | 16 | 4 | 12.8968 | 8.2011 | | 0.028 | 146.0 | 12118 | 2.2004 | 0.5032 | 0.2834 | 0.4436 | 0.4448 | 0.8748 | 0.875 | 8.3122 | 16 | 4 | 12.8862 | 8.2011 | | 0.0256 | 147.0 | 12201 | 2.2014 | 0.503 | 
0.2829 | 0.4433 | 0.4445 | 0.8747 | 0.8751 | 8.3254 | 16 | 4 | 12.8915 | 8.2011 | | 0.0275 | 148.0 | 12284 | 2.2012 | 0.5036 | 0.2836 | 0.4438 | 0.4453 | 0.8751 | 0.875 | 8.2884 | 16 | 4 | 12.8545 | 7.672 | | 0.0292 | 149.0 | 12367 | 2.1998 | 0.5036 | 0.2835 | 0.4438 | 0.4452 | 0.875 | 0.875 | 8.291 | 16 | 4 | 12.8624 | 7.672 | | 0.0286 | 150.0 | 12450 | 2.1989 | 0.5027 | 0.2828 | 0.4438 | 0.4452 | 0.8749 | 0.875 | 8.291 | 16 | 4 | 12.8757 | 7.672 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
CyberHarem/egami_tsubaki_idolmastercinderellagirls
CyberHarem
2023-09-23T14:20:25Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/egami_tsubaki_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-23T14:12:25Z
--- license: mit datasets: - CyberHarem/egami_tsubaki_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of egami_tsubaki_idolmastercinderellagirls This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA. For example, if you want to use the model from step 3060, you need to download `3060/egami_tsubaki_idolmastercinderellagirls.pt` as the embedding and `3060/egami_tsubaki_idolmastercinderellagirls.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character. **The best step we recommend is 3060**, with a score of 0.993. The trigger words are: 1. `egami_tsubaki_idolmastercinderellagirls` 2. `black_hair, long_hair, smile, flower, black_eyes, open_mouth, hair_ornament, card_\(medium\), purple_eyes, hair_flower` For the following groups, it is not recommended to use this model and we express regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios that demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. 
Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. These are the available steps: | Steps | Score | Download | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.983 | [Download](5100/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | 4760 | 0.962 | [Download](4760/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.979 | [Download](4420/egami_tsubaki_idolmastercinderellagirls.zip) | 
![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.981 | [Download](4080/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.915 | [Download](3740/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.950 | [Download](3400/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | **3060** | **0.993** | [**Download**](3060/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | 
![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.948 | [Download](2720/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.979 | [Download](2380/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.934 | [Download](2040/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.936 | [Download](1700/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | 
![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.908 | [Download](1360/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.903 | [Download](1020/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.712 | [Download](680/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.312 | [Download](340/egami_tsubaki_idolmastercinderellagirls.zip) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | 
![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
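Picking the recommended checkpoint is just an argmax over the per-step scores in the table above, and the two file names to download follow directly from the chosen step. A minimal sketch (scores transcribed from the table; this is illustration, not part of the training pipeline):

```python
# Scores transcribed from the table above (training step -> score).
scores = {
    5100: 0.983, 4760: 0.962, 4420: 0.979, 4080: 0.981, 3740: 0.915,
    3400: 0.950, 3060: 0.993, 2720: 0.948, 2380: 0.979, 2040: 0.934,
    1700: 0.936, 1360: 0.908, 1020: 0.903, 680: 0.712, 340: 0.312,
}

# The recommended step is the one with the highest score.
best_step = max(scores, key=scores.get)

# The two files that must be used together for that step.
name = "egami_tsubaki_idolmastercinderellagirls"
embedding_file = f"{best_step}/{name}.pt"       # loaded as an embedding
lora_file = f"{best_step}/{name}.safetensors"   # loaded as the LoRA

print(best_step, embedding_file, lora_file)
```

Running this reproduces the recommendation above: step 3060 with both of its files.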
Ashyyy167/Kaeya
Ashyyy167
2023-09-23T14:05:50Z
0
0
null
[ "music", "text-to-video", "ja", "en", "dataset:b-mc2/sql-create-context", "region:us" ]
text-to-video
2023-09-23T14:01:20Z
--- datasets: - b-mc2/sql-create-context language: - ja - en metrics: - character pipeline_tag: text-to-video tags: - music ---
arjunssat/Llama-2-7b-chat-finetune
arjunssat
2023-09-23T13:55:39Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-09T12:05:55Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lortigas/xlm-roberta-base-finetuned-panx-fr
lortigas
2023-09-23T13:49:32Z
106
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-22T02:57:42Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.927423283000487 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1052 - F1: 0.9274 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.596 | 1.0 | 191 | 0.2408 | 0.8329 | | 0.2555 | 2.0 | 382 | 0.1503 | 0.8932 | | 0.1718 | 3.0 | 573 | 0.1052 | 0.9274 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1 - Datasets 1.16.1 - Tokenizers 0.13.3
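With the linear scheduler and the step counts in the results table (191 optimizer steps per epoch × 3 epochs = 573 steps), the learning rate decays from 5e-05 toward zero over training. A dependency-free sketch of that schedule, assuming zero warmup steps (the card does not list a warmup setting):

```python
BASE_LR = 5e-5           # learning_rate from the hyperparameters above
STEPS_PER_EPOCH = 191    # from the training results table
TOTAL_STEPS = STEPS_PER_EPOCH * 3

def linear_lr(step: int) -> float:
    """Learning rate at a given optimizer step under linear decay
    from BASE_LR (step 0) to 0 (TOTAL_STEPS), with no warmup."""
    return BASE_LR * max(0.0, 1.0 - step / TOTAL_STEPS)

print(linear_lr(0), linear_lr(TOTAL_STEPS))
```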
pzpupu/chatglm2-6b-lora_version
pzpupu
2023-09-23T13:48:22Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-23T13:48:19Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
csocsci/mt5-base-multi-label-en-iiib-02c
csocsci
2023-09-23T13:44:11Z
120
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-22T12:58:51Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards license: mit language: - multilingual --- # Model Card for mt5-base-multi-label-en-iiib-02c <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents. ## Model Description The model was fine-tuned on a dataset of English Instant Messenger dialogs of Adolescents. The classification is multi-label, and the model outputs any combination of the tags 'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci' as a single string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emotional support', 'social companionship', 'appraisal', 'instrumental support'. The inputs of the model are a target utterance and its bi-directional context; the label of the example is determined by the label of the target utterance. - **Developed by:** Anonymous - **Language(s):** multilingual - **Finetuned from:** mt5-base ## Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/chi2024submission - **Paper:** Stay tuned! 
## Usage Here is how to use this model to classify a context-window of a dialogue: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch test_texts = ['Utterance2'] test_text_pairs = ['Utterance1;Utterance2;Utterance3'] checkpoint_path = "chi2024/mt5-base-multi-label-en-iiib-02c" model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\ .to("cuda" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained(checkpoint_path) def verbalize_input(text: str, text_pair: str) -> str: return "Utterance: %s\nContext: %s" % (text, text_pair) def predict_one(text, pair): input_pair = verbalize_input(text, pair) inputs = tokenizer(input_pair, return_tensors="pt", padding=True, truncation=True, max_length=256).to(model.device) outputs = model.generate(**inputs) decoded = [text.split(",")[0].strip() for text in tokenizer.batch_decode(outputs, skip_special_tokens=True)] return decoded dec = predict_one(test_texts[0], test_text_pairs[0]) print(dec) ```
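Since the model emits its labels as one string joined with ', ', a decoded prediction can be split back into individual tags. A minimal parsing sketch (label inventory copied from the card above; `parse_tags` is an illustrative helper, not part of the model's API):

```python
# Full label inventory from the card above.
LABELS = {"NO TAG", "Informační podpora", "Emocionální podpora",
          "Začlenění do skupiny", "Uznání", "Nabídka pomoci"}

def parse_tags(decoded: str) -> list:
    """Split a ', '-joined prediction back into individual tags,
    keeping only entries from the known label set."""
    return [part.strip() for part in decoded.split(",") if part.strip() in LABELS]

print(parse_tags("Emocionální podpora, Uznání"))
```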
bongo2112/sdxl-db-rayvanny-headshot
bongo2112
2023-09-23T13:43:58Z
4
1
diffusers
[ "diffusers", "tensorboard", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-09-19T09:43:32Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of rayvannychui man tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
zineddine/ppo-LunarLander-v2
zineddine
2023-09-23T13:40:55Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T13:40:36Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 276.12 +/- 11.71 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; verify it against the repository's file list): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub("zineddine/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
csocsci/mt5-base-multi-label-cs-iiib-02c
csocsci
2023-09-23T13:40:51Z
112
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "cs", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-22T13:29:45Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards license: mit language: - cs --- # Model Card for mt5-base-multi-label-cs-iiib-02c <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech. ## Model Description The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label, and the model outputs any combination of the tags 'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci' as a single string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emotional support', 'social companionship', 'appraisal', 'instrumental support'. The inputs of the model are a target utterance and its bi-directional context; the label of the example is determined by the label of the target utterance. - **Developed by:** Anonymous - **Language(s):** multilingual - **Finetuned from:** mt5-base ## Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/chi2024submission - **Paper:** Stay tuned! 
## Usage Here is how to use this model to classify a context-window of a dialogue: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch test_texts = ['Utterance2'] test_text_pairs = ['Utterance1;Utterance2;Utterance3'] checkpoint_path = "chi2024/mt5-base-multi-label-cs-iiib-02c" model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\ .to("cuda" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained(checkpoint_path) def verbalize_input(text: str, text_pair: str) -> str: return "Utterance: %s\nContext: %s" % (text, text_pair) def predict_one(text, pair): input_pair = verbalize_input(text, pair) inputs = tokenizer(input_pair, return_tensors="pt", padding=True, truncation=True, max_length=256).to(model.device) outputs = model.generate(**inputs) decoded = [text.split(",")[0].strip() for text in tokenizer.batch_decode(outputs, skip_special_tokens=True)] return decoded dec = predict_one(test_texts[0], test_text_pairs[0]) print(dec) ```
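Because predictions come out as one ', '-joined string, splitting a decoded output back into tags is straightforward. A minimal sketch (label set copied from above; `parse_tags` is an illustrative helper, not part of the model's API):

```python
# Full label inventory from the card above.
LABELS = {"NO TAG", "Informační podpora", "Emocionální podpora",
          "Začlenění do skupiny", "Uznání", "Nabídka pomoci"}

def parse_tags(decoded: str) -> list:
    """Split a ', '-joined prediction back into individual tags,
    keeping only entries from the known label set."""
    return [part.strip() for part in decoded.split(",") if part.strip() in LABELS]

print(parse_tags("Informační podpora, Nabídka pomoci"))
```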
surajyadav91/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
surajyadav91
2023-09-23T13:11:27Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-22T21:43:02Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
JohanAR/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGUF
JohanAR
2023-09-23T12:43:20Z
75
7
null
[ "gguf", "license:other", "endpoints_compatible", "region:us" ]
null
2023-09-23T12:07:27Z
--- license: other --- These files are GGUF versions of [TheBloke's Wizard Vicuna 13B Uncensored SuperHOT 8k](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGML); they were converted from GGML using llama.cpp's conversion scripts. See the original model card for credits and usage instructions.
CyberHarem/mishiro_idolmastercinderellagirls
CyberHarem
2023-09-23T12:39:03Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/mishiro_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-23T12:29:16Z
--- license: mit datasets: - CyberHarem/mishiro_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of mishiro_idolmastercinderellagirls This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA. For example, if you want to use the model from step 4760, you need to download `4760/mishiro_idolmastercinderellagirls.pt` as the embedding and `4760/mishiro_idolmastercinderellagirls.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character. **The best step we recommend is 4760**, with a score of 0.980. The trigger words are: 1. `mishiro_idolmastercinderellagirls` 2. `black_hair, long_hair, earrings, jewelry, ponytail, hair_pulled_back, breasts, formal, suit, green_eyes, blush, large_breasts` For the following groups, it is not recommended to use this model and we express regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios that demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. 
Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. The available steps are: | Steps | Score | Download | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.975 | [Download](5100/mishiro_idolmastercinderellagirls.zip) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | [<NSFW, click to see>](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | **4760** | **0.980** | [**Download**](4760/mishiro_idolmastercinderellagirls.zip) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | [<NSFW, click to see>](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.976 | [Download](4420/mishiro_idolmastercinderellagirls.zip) |
![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | [<NSFW, click to see>](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.961 | [Download](4080/mishiro_idolmastercinderellagirls.zip) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | [<NSFW, click to see>](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.922 | [Download](3740/mishiro_idolmastercinderellagirls.zip) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | [<NSFW, click to see>](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.979 | [Download](3400/mishiro_idolmastercinderellagirls.zip) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | [<NSFW, click to see>](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.911 | [Download](3060/mishiro_idolmastercinderellagirls.zip) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to 
see>](3060/previews/bondage.png) | [<NSFW, click to see>](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.939 | [Download](2720/mishiro_idolmastercinderellagirls.zip) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | [<NSFW, click to see>](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.935 | [Download](2380/mishiro_idolmastercinderellagirls.zip) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | [<NSFW, click to see>](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.924 | [Download](2040/mishiro_idolmastercinderellagirls.zip) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | [<NSFW, click to see>](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.920 | [Download](1700/mishiro_idolmastercinderellagirls.zip) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | [<NSFW, click to see>](1700/previews/free.png) | 
![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.912 | [Download](1360/mishiro_idolmastercinderellagirls.zip) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | [<NSFW, click to see>](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.868 | [Download](1020/mishiro_idolmastercinderellagirls.zip) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | [<NSFW, click to see>](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.880 | [Download](680/mishiro_idolmastercinderellagirls.zip) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | [<NSFW, click to see>](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.852 | [Download](340/mishiro_idolmastercinderellagirls.zip) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | [<NSFW, click to see>](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to 
see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
Doggerz/skldadu_2_1000
Doggerz
2023-09-23T12:35:01Z
1
0
diffusers
[ "diffusers", "text-to-image", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-09-23T12:20:24Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of skl person tags: - text-to-image - diffusers inference: true ---
Andyrasika/q-FrozenLake-v1-4x4-noSlippery
Andyrasika
2023-09-23T12:33:04Z
0
1
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T12:33:00Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Andyrasika/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
lortigas/xlm-roberta-base-finetuned-panx-de-fr
lortigas
2023-09-23T12:32:29Z
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-21T17:46:58Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1625 - F1: 0.8612 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2891 | 1.0 | 715 | 0.1708 | 0.8345 | | 0.1464 | 2.0 | 1430 | 0.1558 | 0.8481 | | 0.0934 | 3.0 | 2145 | 0.1625 | 0.8612 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1 - Datasets 1.16.1 - Tokenizers 0.13.3
Deepak7376/translation-en-to-hi-transformer
Deepak7376
2023-09-23T11:51:44Z
0
1
null
[ "translation", "en", "hi", "license:mit", "region:us" ]
translation
2023-09-23T11:12:04Z
--- license: mit language: - en - hi metrics: - bleu - accuracy pipeline_tag: translation ---
Rbrq/dreambooth_dog
Rbrq
2023-09-23T11:29:33Z
45
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-23T05:44:17Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - Rbrq/dreambooth_dog This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below. DreamBooth for the text encoder was enabled: True.
enricmoreu/invoice
enricmoreu
2023-09-23T11:25:25Z
69
0
transformers
[ "transformers", "pytorch", "layoutlmv2", "document-question-answering", "generated_from_trainer", "base_model:microsoft/layoutlmv2-base-uncased", "base_model:finetune:microsoft/layoutlmv2-base-uncased", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
document-question-answering
2023-09-23T10:19:05Z
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv2-base-uncased tags: - generated_from_trainer model-index: - name: invoice results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # invoice This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 1.12.0+cu113 - Datasets 2.14.5 - Tokenizers 0.13.3
MarcosMunoz95/a2c-AntBulletEnv-v0
MarcosMunoz95
2023-09-23T11:19:54Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-29T16:27:52Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1759.19 +/- 50.29 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
yummy-slim-erfahrungen/yummy-slim-erfahrungen
yummy-slim-erfahrungen
2023-09-23T11:03:28Z
0
0
diffusers
[ "diffusers", "Yummy Slim Erfahrungen", "en", "license:cdla-sharing-1.0", "region:us" ]
null
2023-09-23T11:01:58Z
--- license: cdla-sharing-1.0 language: - en library_name: diffusers tags: - Yummy Slim Erfahrungen --- [Yummy Slim Erfahrungen](https://supplementtycoon.com/de/yummy-slim-erfahrungen/) The ketogenic diet is a low-carb, high-fat diet that aims to shift the body into a state of ketosis, in which it burns fat for fuel instead of carbohydrates. Conventional gummies are typically high in sugar and carbs, which makes them unsuitable for a ketogenic diet. Keto gummies, on the other hand, are made with low-carb, sugar-free ingredients to ensure they fit within the macronutrient ratios of a ketogenic diet. VISIT HERE FOR OFFICIAL WEBSITE:-https://supplementtycoon.com/de/yummy-slim-erfahrungen/
UrbanJoe/llama2-llama-master
UrbanJoe
2023-09-23T10:40:04Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-23T10:39:50Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
r929rrjq/my-pet-dog-xzg
r929rrjq
2023-09-23T10:39:54Z
1
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-23T10:04:12Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog-XZG Dreambooth model trained by r929rrjq following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: VAST-54 Sample pictures of this concept: ![0](https://i.ibb.co/t3yVQCw/274116f1-164e-418d-942d-4ef44552ebbc.jpg)
Vaibhav9401/Llama-2-13b-chat-hf-toxic-75k-trained-v100-0.3ep
Vaibhav9401
2023-09-23T10:35:22Z
2
0
peft
[ "peft", "region:us" ]
null
2023-09-23T10:34:31Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
trieudemo11/llama_7b_attrb_cate_4m_11
trieudemo11
2023-09-23T10:30:54Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-23T10:30:37Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
krvish28/paalu
krvish28
2023-09-23T10:24:04Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-09-23T10:22:11Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
whiteDandelion/wav2vec2-test
whiteDandelion
2023-09-23T09:49:51Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-23T09:16:59Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-test This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Niraya666/swin-large-patch4-window12-384-in22k-finetuned-ADC-4cls-0923
Niraya666
2023-09-23T09:46:26Z
213
0
transformers
[ "transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-large-patch4-window12-384-in22k", "base_model:finetune:microsoft/swin-large-patch4-window12-384-in22k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-23T09:46:02Z
--- license: apache-2.0 base_model: microsoft/swin-large-patch4-window12-384-in22k tags: - generated_from_trainer datasets: - imagefolder model-index: - name: swin-large-patch4-window12-384-in22k-finetuned-ADC-4cls-0923 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-large-patch4-window12-384-in22k-finetuned-ADC-4cls-0923 This model is a fine-tuned version of [microsoft/swin-large-patch4-window12-384-in22k](https://huggingface.co/microsoft/swin-large-patch4-window12-384-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5621 - eval_accuracy: 0.8571 - eval_runtime: 6.0148 - eval_samples_per_second: 11.638 - eval_steps_per_second: 0.499 - epoch: 26.4 - step: 99 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 200 ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
HachiML/Llama-2-13b-hf-ja-qlora
HachiML
2023-09-23T09:25:09Z
2
0
peft
[ "peft", "dataset:HachiML/databricks-dolly-15k-ja-for-peft", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "region:us" ]
null
2023-07-29T15:51:23Z
--- library_name: peft datasets: - HachiML/databricks-dolly-15k-ja-for-peft base_model: meta-llama/Llama-2-13b-hf --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
HachiML/Llama-2-13b-chat-hf-ja-qlora
HachiML
2023-09-23T09:24:14Z
6
0
peft
[ "peft", "dataset:HachiML/databricks-dolly-15k-ja-for-peft", "base_model:meta-llama/Llama-2-13b-chat-hf", "base_model:adapter:meta-llama/Llama-2-13b-chat-hf", "region:us" ]
null
2023-07-29T02:04:17Z
--- library_name: peft datasets: - HachiML/databricks-dolly-15k-ja-for-peft base_model: meta-llama/Llama-2-13b-chat-hf --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
Yntec/Citrus
Yntec
2023-09-23T09:23:41Z
677
2
diffusers
[ "diffusers", "safetensors", "anime", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-23T03:01:26Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - anime - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Citrus A mix from the best of CitrineDreamMix and CetusRemix, with the Splash LoRA and 70% of Add_Detail baked in. Comparison: ![Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/dvkbTwZGl5vXMzVakwZVV.png) Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/gg0SgMrsKgnQQElnut3ob.png) pretty Tiny mischievous CUTE girl wearing a puffy teal jacket, DETAILED EYES, greatly drawn face, Magazine Ad, playing, lush market overgrown city, smooth, intricate, elegant, digital painting, artstation, concept art, sharp focus, illustration, art by sam spratt and ROSSDRAWS, valorant character Original pages: https://civitai.com/models/18116?modelVersionId=21839 (CitrineDreamMix) https://huggingface.co/Yntec/CetusRemix https://civitai.com/models/58390 (add_detail) For trigger words you can use, check the original page at: https://civitai.com/models/81619?modelVersionId=86612 (Splash 1.0) # CitrusDreamMix A version without Add_Detail, for alternate eyes and backgrounds. ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/jNjayIKCbww8mBoIm2YeY.png) # Recipe - SuperMerger Weight sum TrainDifference MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1 Model A: CetusRemix Model B: CitrineDreamMix Output: CetusDreamMix - Merge LoRA to checkpoint 1.0 Model A: CetusDreamMix LoRA: Splash 1.0 Output: CitrusDreamMix - Merge LoRA to checkpoint 0.7 Model A: CitrusDreamMix LoRA: Add_Detail Output: Citrus
royhu/distilbert-base-uncased-finetuned-adl_hw1
royhu
2023-09-23T09:08:33Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-23T08:55:58Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-adl_hw1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-adl_hw1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7796 - Accuracy: 0.0003 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.3098 | 1.0 | 938 | 1.7796 | 0.0003 | | 1.2231 | 2.0 | 1876 | 0.5218 | 0.0 | | 0.3281 | 3.0 | 2814 | 0.2732 | 0.0003 | | 0.1296 | 4.0 | 3752 | 0.2174 | 0.0003 | | 0.0729 | 5.0 | 4690 | 0.2060 | 0.0003 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
Gopikagirijan/ESSAY
Gopikagirijan
2023-09-23T08:55:17Z
0
0
fairseq
[ "fairseq", "art", "ak", "an", "dataset:Open-Orca/OpenOrca", "arxiv:1910.09700", "license:creativeml-openrail-m", "region:us" ]
null
2023-09-23T08:50:23Z
--- license: creativeml-openrail-m datasets: - Open-Orca/OpenOrca language: - ak - an metrics: - character library_name: fairseq tags: - art --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Niranjana2023/animalxzg
Niranjana2023
2023-09-23T08:53:15Z
0
0
asteroid
[ "asteroid", "music", "as", "dataset:fka/awesome-chatgpt-prompts", "arxiv:1910.09700", "license:artistic-2.0", "region:us" ]
null
2023-09-23T08:49:54Z
--- license: artistic-2.0 datasets: - fka/awesome-chatgpt-prompts language: - as metrics: - accuracy library_name: asteroid tags: - music --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ahmed000000000/phi-1_5-finetuned
ahmed000000000
2023-09-23T08:38:32Z
60
0
transformers
[ "transformers", "pytorch", "mixformer-sequential", "text-generation", "generated_from_trainer", "custom_code", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-09-22T12:40:09Z
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer model-index: - name: phi-1_5-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
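The cosine scheduler listed above decays the learning rate from 2e-4 toward zero over training. A minimal sketch of that schedule, assuming no warmup (the card does not specify any):

```python
import math

def cosine_lr(step, total_steps, base_lr=2e-4):
    """Cosine-annealed learning rate without warmup."""
    progress = step / total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

total = 1000  # hypothetical number of optimizer steps
print(cosine_lr(0, total))      # 2e-4 at the start
print(cosine_lr(total, total))  # ~0 at the end
```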
abhashpr/ppo-LunarLander-v2
abhashpr
2023-09-23T08:34:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-23T08:34:29Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 262.37 +/- 15.59 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
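The mean_reward figure above (262.37 +/- 15.59) is the mean and standard deviation of episode returns over an evaluation run. A minimal sketch of how such a score is computed — with illustrative returns, not the actual evaluation episodes:

```python
import math

def score(episode_returns):
    """Mean and population standard deviation of episode returns."""
    n = len(episode_returns)
    mean = sum(episode_returns) / n
    var = sum((r - mean) ** 2 for r in episode_returns) / n
    return mean, math.sqrt(var)

# Toy episode returns from an evaluation rollout.
mean, std = score([250.0, 260.0, 270.0, 280.0])
print(f"{mean:.2f} +/- {std:.2f}")  # 265.00 +/- 11.18
```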
Iftesha/miniLm-tuned-emotion
Iftesha
2023-09-23T08:32:00Z
107
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T17:26:30Z
--- license: mit base_model: microsoft/MiniLm-L12-H384-uncased tags: - generated_from_trainer datasets: - emotion metrics: - f1 model-index: - name: miniLm-tuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: F1 type: f1 value: 0.938381607408388 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # miniLm-tuned-emotion This model is a fine-tuned version of [microsoft/MiniLm-L12-H384-uncased](https://huggingface.co/microsoft/MiniLm-L12-H384-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1777 - F1: 0.9384 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0968 | 1.0 | 250 | 0.1843 | 0.9323 | | 0.0798 | 2.0 | 500 | 0.1784 | 0.9380 | | 0.0698 | 3.0 | 750 | 0.2012 | 0.9380 | | 0.0639 | 4.0 | 1000 | 0.1847 | 0.9420 | | 0.0593 | 5.0 | 1250 | 0.1777 | 0.9384 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
Gopikagirijan/Better_friend
Gopikagirijan
2023-09-23T08:22:52Z
0
0
nemo
[ "nemo", "music", "art", "text-generation", "ar", "as", "dataset:allenai/dolma", "arxiv:1910.09700", "license:artistic-2.0", "region:us" ]
text-generation
2023-09-23T08:20:06Z
--- license: artistic-2.0 datasets: - allenai/dolma language: - ar - as metrics: - character library_name: nemo pipeline_tag: text-generation tags: - music - art --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yupyub/distilbert-base-uncased-finetuned-emotion
yupyub
2023-09-23T08:16:51Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-23T08:06:14Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9247090923152081 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2099 - Accuracy: 0.9245 - F1: 0.9247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8 | 1.0 | 250 | 0.3084 | 0.9035 | 0.9026 | | 0.2414 | 2.0 | 500 | 0.2099 | 0.9245 | 0.9247 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
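At inference time the fine-tuned head's logits are turned into per-class probabilities with a softmax before picking the predicted emotion. A minimal sketch with toy logits — not the model's actual output:

```python
import math

def softmax(logits):
    """Numerically stable softmax over one row of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# The six classes of the emotion dataset, with toy logits.
labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]
probs = softmax([0.2, 3.1, 0.5, 0.1, 0.4, 0.0])
prediction = labels[probs.index(max(probs))]
print(prediction)  # joy
```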
legacy107/multi-qa-mpnet-base-dot-v1-wikipedia-search
legacy107
2023-09-23T08:13:55Z
25
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-09-19T13:01:47Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. 
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3746 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 500, "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1873, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
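The TripletLoss used in training above pushes an anchor embedding closer to its positive than to its negative by at least the margin. A minimal sketch of the Euclidean variant with margin 5 — illustrative, not the sentence-transformers implementation:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=5.0):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distance."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Toy 2-d embeddings: the negative is farther than the positive
# by more than the margin, so the triplet is already satisfied.
loss = triplet_loss([0.0, 0.0], [1.0, 0.0], [10.0, 0.0])
print(loss)  # 0.0
```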
jauntybrain/musicgen-small
jauntybrain
2023-09-23T08:03:00Z
7
0
transformers
[ "transformers", "pytorch", "safetensors", "musicgen", "text-to-audio", "arxiv:2306.05284", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-audio
2023-09-23T04:37:52Z
--- inference: false tags: - musicgen license: cc-by-nc-4.0 --- # MusicGen - Small - 300M MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts. It is a single-stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio. MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*. Four checkpoints are released: - [**small** (this checkpoint)](https://huggingface.co/facebook/musicgen-small) - [medium](https://huggingface.co/facebook/musicgen-medium) - [large](https://huggingface.co/facebook/musicgen-large) - [melody](https://huggingface.co/facebook/musicgen-melody) ## Example Try out MusicGen yourself! 
* Audiocraft Colab: <a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Colab: <a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Demo: <a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> ## 🤗 Transformers Usage You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main: ``` pip install git+https://github.com/huggingface/transformers.git ``` 2. Run the following Python code to generate text-conditional audio samples: ```py from transformers import AutoProcessor, MusicgenForConditionalGeneration processor = AutoProcessor.from_pretrained("facebook/musicgen-small") model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") inputs = processor( text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], padding=True, return_tensors="pt", ) audio_values = model.generate(**inputs, max_new_tokens=256) ``` 3. Listen to the audio samples either in an ipynb notebook: ```py from IPython.display import Audio sampling_rate = model.config.audio_encoder.sampling_rate Audio(audio_values[0].numpy(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. 
`scipy`: ```py import scipy sampling_rate = model.config.audio_encoder.sampling_rate scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) ``` For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen). ## Audiocraft Usage You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft): 1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft) ``` pip install git+https://github.com/facebookresearch/audiocraft.git ``` 2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed: ``` apt-get install ffmpeg ``` 3. Run the following Python code: ```py from audiocraft.models import MusicGen from audiocraft.data.audio import audio_write model = MusicGen.get_pretrained("small") model.set_generation_params(duration=8) # generate 8 seconds. descriptions = ["happy rock", "energetic EDM"] wav = model.generate(descriptions) # generates 2 samples. for idx, one_wav in enumerate(wav): # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness") ``` ## Model details **Organization developing the model:** The FAIR team of Meta AI. **Model date:** MusicGen was trained between April 2023 and May 2023. **Model version:** This is the version 1 of the model. **Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation. 
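The codebook delay mentioned in the model summary can be sketched as follows: codebook k's token for time step t is emitted at decoding step t + k, so one token per codebook is predicted at every step and the 4 codebooks cost only 3 extra steps in total. A simplified illustration of the delay pattern — not the Audiocraft code:

```python
def delay_pattern(num_steps, num_codebooks=4, pad=-1):
    """Shift codebook k right by k steps so one token per codebook
    can be predicted at every decoding step."""
    total = num_steps + num_codebooks - 1
    grid = [[pad] * total for _ in range(num_codebooks)]
    for k in range(num_codebooks):
        for t in range(num_steps):
            grid[k][t + k] = t  # token index t of codebook k lands at step t + k
    return grid

grid = delay_pattern(num_steps=4)
# Codebook 0: [0, 1, 2, 3, -1, -1, -1]
# Codebook 3: [-1, -1, -1, 0, 1, 2, 3]
```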
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284). **Citation details:** ``` @misc{copet2023simple, title={Simple and Controllable Music Generation}, author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, year={2023}, eprint={2306.05284}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` **License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0. **Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. ## Intended use **Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science - Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs **Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models. **Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. 
## Metrics **Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark: - Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) - Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) - CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes: - Overall quality of the music samples; - Text relevance to the provided text input; - Adherence to the melody for melody-guided music generation. More details on performance measures and human studies can be found in the paper. **Decision thresholds:** Not applicable. ## Evaluation datasets The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set with no artist overlap with the training set. ## Training datasets The model was trained on licensed data from the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), the [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. ## Evaluation results Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely the open-source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity | |---|---|---|---|---| | **facebook/musicgen-small** | 4.88 | 1.42 | 0.27 | - | | facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - | | facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - | | facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 | More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section. ## Limitations and biases **Data:** The data sources used to train the model were created by music professionals and are covered by legal agreements with the rights holders. The model was trained on 20K hours of data; we believe that training the model on larger datasets could further improve its performance. **Mitigations:** Vocals were removed from the data source, first using corresponding tags and then using a state-of-the-art music source separation method, namely the open-source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). **Limitations:** - The model is not able to generate realistic vocals. - The model has been trained with English descriptions and will not perform as well in other languages. - The model does not perform equally well for all music styles and cultures. - The model sometimes generates the end of songs, collapsing to silence. - It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. **Biases:** The source of data is potentially lacking in diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases of the training data.
Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. **Risks and harms:** Biases and limitations of the model may lead to the generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow the application to be broadened to new and more representative data. **Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
nagyrobert97/swin-tiny-patch4-window7-224-finetuned-eurosat
nagyrobert97
2023-09-23T07:38:33Z
211
0
transformers
[ "transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-22T14:19:50Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0977 - Accuracy: 0.9644 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5012 | 1.0 | 351 | 0.1447 | 0.9502 | | 0.3732 | 2.0 | 703 | 0.1068 | 0.9626 | | 0.3398 | 2.99 | 1053 | 0.0977 | 0.9644 | ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu118 - Tokenizers 0.14.0
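The effective batch size above comes from multiplying the per-device batch (32) by the gradient-accumulation steps (4). A minimal sketch of how accumulation produces one optimizer update per 128 examples — illustrative only, with a scalar "gradient" standing in for a real backward pass, not the Trainer's actual implementation:

```python
def train_step(micro_batches, accumulation_steps, apply_update):
    """Accumulate scaled gradients over micro-batches, then apply one update."""
    accumulated = 0.0
    for i, batch in enumerate(micro_batches, start=1):
        grad = sum(batch) / len(batch)            # stand-in for a real backward pass
        accumulated += grad / accumulation_steps  # scale so the step matches one big batch
        if i % accumulation_steps == 0:
            apply_update(accumulated)             # optimizer.step() in a real loop
            accumulated = 0.0

updates = []
micro_batches = [[1.0] * 32 for _ in range(4)]  # 4 micro-batches of 32 = effective batch 128
train_step(micro_batches, 4, updates.append)
```

Dividing each micro-batch gradient by the accumulation count keeps the applied update numerically equivalent to averaging over the full 128-example batch.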
DriveMyScream/News_Summarization_Model
DriveMyScream
2023-09-23T07:36:18Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2023-09-23T07:33:20Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 1.9999999494757503e-05 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
wesley7137/phi-1_5-finetuned-neuroscience-python
wesley7137
2023-09-23T07:15:01Z
0
0
null
[ "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "region:us" ]
null
2023-09-23T07:04:31Z
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer model-index: - name: phi-1_5-finetuned-neuroscience-python results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned-neuroscience-python This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
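The cosine scheduler above decays the learning rate from 2e-4 toward zero over the 1000 training steps. A simplified stand-in for the schedule (no warmup, matching the listed hyperparameters; the Trainer's real scheduler is more configurable):

```python
import math

def cosine_lr(step, total_steps, base_lr):
    """Cosine decay from base_lr at step 0 down to 0 at total_steps (no warmup)."""
    progress = min(step, total_steps) / total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

base_lr, total_steps = 2e-4, 1000
print(cosine_lr(0, total_steps, base_lr))     # starts at base_lr
print(cosine_lr(500, total_steps, base_lr))   # half of base_lr at the midpoint
print(cosine_lr(1000, total_steps, base_lr))  # reaches zero at the end
```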
am-infoweb/QA_SYNTH_22_SEPT_WITH_FINETUNE_1.0
am-infoweb
2023-09-23T07:10:17Z
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-09-22T16:36:58Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer model-index: - name: QA_SYNTH_22_SEPT_WITH_FINETUNE_1.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # QA_SYNTH_22_SEPT_WITH_FINETUNE_1.0 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.0686 | 1.0 | 83700 | 0.0122 | | 0.0591 | 2.0 | 167400 | 0.0046 | | 0.0176 | 3.0 | 251100 | 0.0059 | | 0.0003 | 4.0 | 334800 | 0.0046 | | 0.0 | 5.0 | 418500 | 0.0018 | | 0.0 | 6.0 | 502200 | 0.0023 | | 0.0233 | 7.0 | 585900 | 0.0006 | | 0.0 | 8.0 | 669600 | 0.0008 | | 0.0 | 9.0 | 753300 | 0.0006 | | 0.0 | 10.0 | 837000 | 0.0005 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
Ransaka/a2c-PandaReachDense-v2
Ransaka
2023-09-23T07:05:41Z
2
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-02-14T07:33:46Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.21 +/- 0.21 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
SagarDas07/era
SagarDas07
2023-09-23T06:31:55Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-22T18:25:40Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
dminhk/dog-example-xl-lora
dminhk
2023-09-23T06:30:35Z
1
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-09-23T05:52:28Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks dog tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - dminhk/dog-example-xl-lora These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
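LoRA adapters like these add a low-rank update to each targeted base weight: W' = W + (α/r)·B·A. A pure-Python sketch of merging such an update, with toy 2×2 shapes — illustrative values only, not the actual SDXL weights:

```python
def matmul(a, b):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def merge_lora(weight, lora_a, lora_b, alpha, rank):
    """Return weight + (alpha / rank) * (B @ A), the merged LoRA weight."""
    delta = matmul(lora_b, lora_a)
    scale = alpha / rank
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(weight, delta)]

# A 2x2 base weight with a rank-1 update.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # shape (2, 1)
A = [[3.0, 4.0]]     # shape (1, 2)
merged = merge_lora(W, A, B, alpha=1.0, rank=1)
```

Because the update is rank-1, the adapter stores only 4 extra numbers here instead of a full 2×2 delta — the same storage saving LoRA exploits at scale.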