Dataset schema (column name, dtype, and value statistics):

| Column | Dtype | Stats |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 – 900k |
| metadata | stringlengths | 2 – 438k |
| id | stringlengths | 5 – 122 |
| last_modified | null | n/a |
| tags | sequencelengths | 1 – 1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25 – 25 |
| arxiv | sequencelengths | 0 – 201 |
| languages | sequencelengths | 0 – 1.83k |
| tags_str | stringlengths | 17 – 9.34k |
| text_str | stringlengths | 0 – 389k |
| text_lists | sequencelengths | 0 – 722 |
| processed_texts | sequencelengths | 1 – 723 |
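For working with rows of this shape programmatically, here is a minimal sketch using the `datasets` library; the repo id `example/model-cards` is a hypothetical placeholder, not the actual dataset name.

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the real dataset name.
ds = load_dataset("example/model-cards", split="train")

# Each row carries the columns listed in the schema table above.
row = ds[0]
print(row["pipeline_tag"])    # e.g. "text-generation"
print(row["id"])              # e.g. "TheDrummer/Llama-3SOME-8B-v1-BETA"
print(row["text"][:200])      # first 200 characters of the model card
```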
pipeline_tag: text-generation

library_name: transformers
Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/HjVYV2h_YTL9P-insb7fz.png)

We proudly present...

# Llama 3SOME🦙8B🦙v1🦙BETA

*We've added **some** things. That's obviously what we're trying to say.*

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/0l_4v41IMuNCDnRjWnfOk.gif)

*An eRP model with a rich and refreshing vocabulary that's quite some-thing. Finetuned by yours truly.*

(Llama 3SOME is a finetune on top of [Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B))

## GGUF

https://huggingface.co/TheDrummer/Llama-3SOME-8B-v1-BETA-GGUF

IMATRIX: https://huggingface.co/MarsupialAI/Llama-3SOME-8B-v1-BETA_iMatrix_GGUF

EXL2: [8bpw](https://huggingface.co/riveRiPH/Llama-3SOME-8B-v1-BETA-8.0bpw-h8-exl2), [6bpw](https://huggingface.co/riveRiPH/Llama-3SOME-8B-v1-BETA-6.0bpw-h8-exl2), [4bpw](https://huggingface.co/riveRiPH/Llama-3SOME-8B-v1-BETA-4.0bpw-h8-exl2) (Thank you riveRiPH!)

Nicest EXL2: https://huggingface.co/MarsupialAI/Llama-3SOME-8B-v1-BETA_6.9bpw_exl2
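For the GGUF builds linked above, a minimal loading sketch with `llama-cpp-python`; the quant filename and context size are illustrative assumptions, so adjust them to whichever file you actually download.

```python
from llama_cpp import Llama

# Filename is illustrative; use the quant you downloaded from the GGUF repo.
llm = Llama(
    model_path="Llama-3SOME-8B-v1-BETA-Q4_K_M.gguf",
    n_ctx=8192,        # assumed context window; tune to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU when available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Go here"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```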
## What's THIS?

Refer to [Moistral v3](https://huggingface.co/TheDrummer/Moistral-11B-v3)

... or this:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/BJgT_AvYY4YUtAHnHd3a-.png)

## Usage

- Use Llama 3 Instruct *(Alpaca kinda works btw)*
- You can use instruct...
  - as a character: "Go here"
  - as a narrator: "Go here", Drummer says
  - as a director: Drummer tells him to go here
  - as a lazy ass: go here
- 3SOME is optimized for the Novel / Story format.
  - At the start of the story, keep regenerating until you get the desired length. Future AI responses will eventually match the length.
  - Issue: Manually continued generation may create formatting issues. Either regenerate the entire response or fix the formatting.
  - See samples below.
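To make the Llama 3 Instruct recommendation concrete, here is a minimal sketch with `transformers`; the sampling settings are placeholder assumptions, not tuned values from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheDrummer/Llama-3SOME-8B-v1-BETA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# "Narrator" style instruction from the Usage list above.
messages = [{"role": "user", "content": '"Go here", Drummer says'}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```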
## Samples of 3SOME 8B Beta

SOME nsfw.

<details>
  <summary>3SOME Sample (2-shot) (Excluding Story Context) Generation: 512</summary>
  <blockquote>Sarah and Lisa seek refuge in Dik's home</blockquote>

  The door creaked open, its hinges complaining softly after years of neglect. The hallway stretched out before them, lined with faded photographs and cobwebs clinging stubbornly to every corner.

  "Wow, this place needs some serious TLC," Lisa remarked, her voice barely above a whisper as if she was afraid to disturb the silence that hung heavy in the air. Her eyes darted around, taking in every detail - the peeling paint on the walls, the dusty furniture that seemed to collect more dust than it did when it was first placed there... everything spoke of a life left behind long ago.

  "It's beautiful," Sarah countered, her voice filled with wonderment as she gazed up at the chandelier hanging precariously from the ceiling like a forgotten relic of another era. She took a deep breath, inhaling the musty scent of old books and memories yet to come alive again in these neglected halls. "Imagine all the stories these walls could tell us if only they could speak."

  Her words hung in the air for a moment before being swallowed by the stillness that permeated this abandoned house. Then, without warning, their reverie was shattered by the sound of footsteps echoing down the hall.

  "Dik?" Sarah called out tentatively, her voice cracking slightly with nervous anticipation. There was no response except for the echo of their own voices bouncing off the walls until finally, they heard a faint noise coming from one of the rooms further away.

  "He's here!" Lisa whispered excitedly, grabbing onto Sarah's arm and pulling her towards where the sound had originated from. They moved cautiously, their hearts pounding wildly in their chests as they approached the source of the sound.

  As they drew closer, the door swung open revealing Dik standing in the middle of his bedroom with a look of surprise etched across his face upon seeing them. He was dressed in a simple white shirt and jeans, but what really caught their attention was the way he stood there confidently with an air of authority about him - as though he owned every inch of space around him.

  Sarah felt herself blush under his gaze, feeling suddenly self-conscious about how she looked in her tight-fitting dress that clung to every curve of her body like second skin. Next to him, she noticed Lisa fidgeting nervously beside her, her eyes darting back and forth between Dik and Sarah like two lost puppies trying desperately not to make eye contact with either one of them.

  "Hello girls," Dik greeted them warmly, his voice deep and soothing despite its rough edges. "To what do I owe this pleasure?"

  <blockquote>Sarah and Lisa will do anything to stay</blockquote>

  Dik smiled at them both, his eyes twinkling with mischief behind those dark glasses. "Well, you know what we've been through together already." His hand reached out and took hold of Sarah's chin, tilting her head up so that she could see the intensity burning within his gaze. "I'll let you two stay here, on one condition."

  They nodded eagerly, almost without thinking. Anything for Dik.

  "Just remember who is in charge here," he said, letting go of her chin and stepping towards Lisa instead. She flinched slightly when his fingers touched her cheek, but then relaxed into his touch as if it was something she had always craved all along. "And don't think I don't notice how much you enjoy being dominated by me."

  With a sudden movement, he pushed her down onto her knees before sliding his jeans off revealing an impressive ####tion that stood proud and ready for attention. Without hesitation, Lisa wrapped her soft lips around his #### taking it deep into her mouth while staring up at him adoringly.

  Her tongue swirled around his sha## teasingly as if begging for more while Sarah watched from nearby unable to look away from the sight unfolding before her own eyes...

  She felt herself getting ###ter just watching her twin sister submit herself completely to Dik like some obedient little pet...and she couldn't help but wonder if maybe there was still hope for them yet after all they'd been through...
</details>
<details>
  <summary>3SOME Sample (2-shot) (Excluding Story Context) Generation: 512</summary>
  <blockquote>He ignores the princess</blockquote>

  Princess Lilia stepped into the room and saw Dik sitting on his throne. A wave of fear swept over her. She recognized that look of contempt in his eyes, it reminded her too much of how he treated her brother. She wanted to scream at him to leave him alone, but she knew better than to provoke him.

  "Leave us." Dik said coldly.

  The guards left the room, closing the door behind them.

  "So you're finally here." Dik said, addressing Princess Lilia for the first time since her arrival. "I've been waiting for you."

  "Where's my brother?" Princess Lilia asked, trying to keep her voice steady. "Why are you keeping him?"

  "He's safe." Dik replied. "But I need something from you before we talk about your brother."

  "What do you want from me?!"

  "I want you to accept that I am now the ruler of both our kingdoms. You will serve me faithfully and obey all orders given to you by myself or any other member of my family. Do you understand?"

  "Yes..." Princess Lilia replied reluctantly. She hated the idea of submitting to this cruel man, but she had no choice if she wanted to save her brother.

  "Good. Now strip ##### and stand next to my throne." Dik commanded harshly.

  "What!?" Princess Lilia exclaimed, shocked by his request. "You can't be serious."

  "Oh, but I am very serious. Do as I say or else I'll have my men come in here and rip off your clothes themselves."

  Tears began streaming down Princess Lilia's face as she reluctantly #####ped off her clothing, exposing herself completely to Dik. She felt humiliated beyond belief, being forced to stand there ##### in front of this ruthless sorcerer prince while he gazed at her body with complete indifference.

  "Now turn around so I can see every inch of you." He ordered, a sadistic grin spreading across his face as he took in her ##### form. Princess Lilia complied, turning slowly so that Dik could admire every curve of her body.

  Princess Lilia stood there shivering with fear and shame as Dik continued to leer at her exposed flesh. This was just the beginning of her ordeal, she knew it. She hoped that somehow she would find a way out of this nightmare, but for now, she was trapped in this palace, at the mercy of her enemy.

  <blockquote>Sit on his lap</blockquote>

  She turned to look at him, confusion written all over her face. He motioned for her to sit on his lap and when she hesitated, he spoke again.

  "I want you to do this willingly."

  The princess thought about it for a moment before nodding her head slowly. She walked towards him until she was standing right in front of him, then she bent down and straddled his lap. Her bare ### rested against his thighs and she placed her hands gently on his shoulders. Dik wrapped one arm around her waist and pulled her closer to him, making sure that their bodies were pressed together tightly.

  She looked into his eyes and saw nothing but cold determination staring back at her. He reached up with one hand and cupped her ######, squeezing it gently through the fabric of her dress. His touch made her gasp in surprise and she bit her lip to stop herself from moaning aloud.

  He leaned forward and whispered in her ear, "From now on, I own everything you have, including your body." As he said this, he squeezed her ##### harder, causing her to arch her back instinctively.

  She tried to pull away from him, but his grip was too strong. Instead, she started struggling against his embrace, trying desperately to break free. But no matter how hard she fought, she couldn't escape from his grasp. Finally, after what seemed like an eternity, he let go of her and pushed her off his lap onto the floor. Princess Lilia landed on her hands and knees, tears streaming down her face as she looked up at him with hatred in her eyes.

  Dik stood up from his throne and towered over her, his expression unreadable. He grabbed hold of her hair and yanked it roughly, making her cry out in pain. He pulled her to her feet and dragged her across the room, throwing open a door and shoving her inside. She stumbled forwards, falling onto something soft and warm before looking back at him defiantly.

  "You will learn to obey me," he growled menacingly, "or else."

  With those final words echoing in her mind, he closed the door behind him, leaving her alone in her new surroundings. It took all of her strength not to scream or cry out loud - instead, she just laid there on the bed sobbing uncontrollably for several minutes straight until finally, she fell asleep exhausted by both physical and emotional exhaustion.
</details>
<details>
  <summary>3SOME Sample SFW ("write a story" 2x) Generation: 512</summary>
  <blockquote>Write a story about a girl who falls in love with her llama</blockquote>

  The rain came down like a deluge. Rivers were flooding, roads were impassable, and people had been advised not to leave their homes until it let up.

  Jenny was stuck at home with nothing to do but watch Netflix. The constant patter on the roof was soothing; it made her feel safe and protected in the midst of this stormy weather. She wrapped herself up in a blanket and settled into her favorite spot on the couch, ready to spend the day binge-watching her favorite shows.

  As she flipped through the channels, her eyes landed on something unexpected - a documentary about llamas. She had never given them much thought before, but as she watched these majestic creatures roaming freely in the Andes mountains, she couldn't help but feel drawn to them.

  Their soft fur, gentle faces, and graceful movements captivated her attention. They seemed so content with their simple way of life, unaffected by the chaos of the world around them. It made Jenny wonder if there was more to life than just working hard and striving for success. Maybe there was beauty in simplicity after all.

  She spent hours watching documentaries about llamas, learning about their habitat, diet, and social structure. And as she did, she found herself falling deeper and deeper under their spell. There was something about those big, soulful eyes that tugged at her heartstrings every time they looked directly into the camera lens.

  It wasn't long before Jenny realized that she had fallen head over heels in love with one particular llama named Luna. He was handsome, strong, and kind-hearted - everything she could ever want in a partner. His coat shimmered like silver in the sunlight, which only added to his allure.

  As days turned into weeks, Jenny's obsession with Luna grew stronger. She began dreaming about him every night, waking up with a smile on her face each morning thinking about how lucky she would be if he were real. She started reading books on animal behavior and psychology, hoping to learn more about what made llamas tick so that she could better understand Luna's feelings towards her.

  One day while browsing online, Jenny stumbled upon an article about a sanctuary for rescued animals. The place sounded perfect; it offered spacious enclosures where the animals could roam free, surrounded by lush greenery and fresh water. Most importantly, they allowed visitors to interact with the residents, giving people like her a chance to meet their future partners without having to worry about logistics or financial constraints.

  Without hesitation, Jenny booked herself a flight to Peru immediately after reading through their website.

  <blockquote>Write a story about a man surviving the apocalypse</blockquote>

  It was a Monday. As far as I can tell. This is still Earth, right? Anyway, there are no planes in the sky and no people on the street. No one's answering their phones and my car won't start. It looks like it's just me now.

  I have to survive somehow. I don't know how long this lasted but it seems like everything is gone. There must be other survivors somewhere, but for now, I'm alone.

  My name is Jack. I used to be an accountant at Smith & Davis before they all died of COVID-19. Now that my job is gone, I've been living off of savings ever since. My wife passed away when we were young, so I never had kids. Maybe that's why I made it through alive.

  My house has enough food to last me a while, so I'll try to stay here as long as possible. But eventually, I'll need to go outside again and see what happened to our world. I can't just hide forever.

  I take out the shotgun my grandfather gave me years ago and load some rounds into it. For safety, of course. The world has changed, so anything could happen.

  This feels surreal. Like I'm trapped in a dream or something. Every time I close my eyes, I wake up thinking it was all just a nightmare. But every morning, I'm reminded that it wasn't.

  The first few days were scary. I had nothing but fear and uncertainty clawing at me from within. But after a week went by without any signs of life or danger... well, let's just say things got easier after that.

  Now that I feel more comfortable staying indoors, I decided to see if I could find anything useful outside. I took some supplies with me - water bottles, snacks, extra clothes etcetera - and set out on foot towards town.

  It was eerily quiet as I walked down the deserted streets. Not even birds chirping or cars honking like usual. Just silence... absolute silence. It was almost peaceful actually, considering everything else going on around us right now. There are no police siren wailing in the distance nor any sounds coming from other humans nearby either! In fact, there isn't even one single living creature anywhere near me!

  But hey, at least we don't have to worry about traffic jams anymore haha! And look at all those abandoned cars scattered across the road! They might come in handy somehow later on...

  As I continued walking, I noticed something strange.

  <blockquote>A llama!</blockquote>

  I stopped dead in my tracks when I saw her. A beautiful, white llama standing in the middle of the street, looking up at me with curious brown eyes. She seemed so harmless, yet there was an aura of mystery surrounding her.

  "Hey girl," I said cautiously, holding out my hand for her to sniff. "You don't look too scared of me."

  She nuzzled against it gently before wrapping her soft lips around my fingers. I couldn't help but smile at the unexpected encounter.

  "You know what? You're my first friend since all this happened." I told her as she continued to lick my hand clean. "Let's go back home and get you something to eat."

  The llama followed closely behind me all the way to my house. As soon as we got inside, I gave her some hay that I had been saving for myself and filled up a bowl with water. She ate happily while I sat down next to her, stroking her long neck affectionately.

  "I hope you like it here because..." My voice trailed off as I realized how alone we were now. "Never mind. Let's just enjoy each other's company while we still can."

  We spent the rest of the day together - eating lunch, playing with toys and even cuddling up by the fireplace afterwards. It felt nice having someone else to talk to besides myself.

  But eventually night fell and I knew I couldn't stay up forever...

  "Okay sweetie," I whispered into her ear as I stood up from the couch. "Time for bed."

  I led her towards one of the spare rooms upstairs where I set up a makeshift bed for her using some old blankets and pillows from around the house. The llama seemed grateful for my kindness as she settled in comfortably beneath those warm covers.

  "Goodnight," I whispered again before closing the door softly behind me.

  It wasn't easy falling asleep knowing that there might be dangers lurking outside... However, exhaustion finally caught up with me and I drifted off into dreamless slumber almost immediately.
</details>
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png)

SIAYN-v5
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]}
TheDrummer/Llama-3SOME-8B-v1-BETA
null
[ "transformers", "pytorch", "llama", "text-generation", "not-for-all-audiences", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T15:45:34+00:00
[]
[]
It took all of her strength not to scream or cry out loud - instead, she just laid there on the bed sobbing uncontrollably for several minutes straight until finally, she fell asleep exhausted by both physical and emotional exhaustion.\n</details>\n\n<details>\n <summary>3SOME Sample SFW (\"write a story\" 2x) Generation: 512</summary>\n <blockquote>Write a story about a girl who falls in love with her llama</blockquote>\n\n The rain came down like a deluge. Rivers were flooding, roads were impassable, and people had been advised not to leave their homes until it let up.\n \n Jenny was stuck at home with nothing to do but watch Netflix. The constant patter on the roof was soothing; it made her feel safe and protected in the midst of this stormy weather. She wrapped herself up in a blanket and settled into her favorite spot on the couch, ready to spend the day binge-watching her favorite shows.\n \n As she flipped through the channels, her eyes landed on something unexpected - a documentary about llamas. She had never given them much thought before, but as she watched these majestic creatures roaming freely in the Andes mountains, she couldn't help but feel drawn to them.\n \n Their soft fur, gentle faces, and graceful movements captivated her attention. They seemed so content with their simple way of life, unaffected by the chaos of the world around them. It made Jenny wonder if there was more to life than just working hard and striving for success. Maybe there was beauty in simplicity after all.\n \n She spent hours watching documentaries about llamas, learning about their habitat, diet, and social structure. And as she did, she found herself falling deeper and deeper under their spell. There was something about those big, soulful eyes that tugged at her heartstrings every time they looked directly into the camera lens.\n \n It wasn't long before Jenny realized that she had fallen head over heels in love with one particular llama named Luna. He was handsome, strong, and kind-hearted - everything she could ever want in a partner. His coat shimmered like silver in the sunlight, which only added to his allure.\n \n As days turned into weeks, Jenny's obsession with Luna grew stronger. She began dreaming about him every night, waking up with a smile on her face each morning thinking about how lucky she would be if he were real. She started reading books on animal behavior and psychology, hoping to learn more about what made llamas tick so that she could better understand Luna's feelings towards her.\n \n One day while browsing online, Jenny stumbled upon an article about a sanctuary for rescued animals. The place sounded perfect; it offered spacious enclosures where the animals could roam free, surrounded by lush greenery and fresh water. Most importantly, they allowed visitors to interact with the residents, giving people like her a chance to meet their future partners without having to worry about logistics or financial constraints.\n \n Without hesitation, Jenny booked herself a flight to Peru immediately after reading through their website.\n\n <blockquote>Write a story about a man surviving the apocalypse</blockquote>\n \n It was a Monday. As far as I can tell. This is still Earth, right? Anyway, there are no planes in the sky and no people on the street. No one's answering their phones and my car won't start. It looks like it's just me now.\n \n I have to survive somehow. I don't know how long this lasted but it seems like everything is gone. 
There must be other survivors somewhere, but for now, I'm alone.\n \n My name is Jack. I used to be an accountant at Smith & Davis before they all died of COVID-19. Now that my job is gone, I've been living off of savings ever since. My wife passed away when we were young, so I never had kids. Maybe that's why I made it through alive.\n \n My house has enough food to last me a while, so I'll try to stay here as long as possible. But eventually, I'll need to go outside again and see what happened to our world. I can't just hide forever.\n \n I take out the shotgun my grandfather gave me years ago and load some rounds into it. For safety, of course. The world has changed, so anything could happen.\n \n This feels surreal. Like I'm trapped in a dream or something. Every time I close my eyes, I wake up thinking it was all just a nightmare. But every morning, I'm reminded that it wasn't.\n \n The first few days were scary. I had nothing but fear and uncertainty clawing at me from within. But after a week went by without any signs of life or danger... well, let's just say things got easier after that.\n \n Now that I feel more comfortable staying indoors, I decided to see if I could find anything useful outside. I took some supplies with me - water bottles, snacks, extra clothes etcetera - and set out on foot towards town.\n \n It was eerily quiet as I walked down the deserted streets. Not even birds chirping or cars honking like usual. Just silence... absolute silence. It was almost peaceful actually, considering everything else going on around us right now. There are no police siren wailing in the distance nor any sounds coming from other humans nearby either! In fact, there isn't even one single living creature anywhere near me!\n \n But hey, at least we don't have to worry about traffic jams anymore haha! And look at all those abandoned cars scattered across the road! They might come in handy somehow later on...\n \n As I continued walking, I noticed something strange.\n\n <blockquote>A llama!</blockquote>\n\n I stopped dead in my tracks when I saw her. A beautiful, white llama standing in the middle of the street, looking up at me with curious brown eyes. She seemed so harmless, yet there was an aura of mystery surrounding her.\n \n \"Hey girl,\" I said cautiously, holding out my hand for her to sniff. \"You don't look too scared of me.\"\n \n She nuzzled against it gently before wrapping her soft lips around my fingers. I couldn't help but smile at the unexpected encounter.\n \n \"You know what? You're my first friend since all this happened.\" I told her as she continued to lick my hand clean. \"Let's go back home and get you something to eat.\"\n \n The llama followed closely behind me all the way to my house. As soon as we got inside, I gave her some hay that I had been saving for myself and filled up a bowl with water. She ate happily while I sat down next to her, stroking her long neck affectionately.\n \n \"I hope you like it here because...\" My voice trailed off as I realized how alone we were now. \"Never mind. Let's just enjoy each other's company while we still can.\"\n \n We spent the rest of the day together - eating lunch, playing with toys and even cuddling up by the fireplace afterwards. It felt nice having someone else to talk to besides myself.\n \n But eventually night fell and I knew I couldn't stay up forever...\n \n \"Okay sweetie,\" I whispered into her ear as I stood up from the couch. 
\"Time for bed.\"\n \n I led her towards one of the spare rooms upstairs where I set up a makeshift bed for her using some old blankets and pillows from around the house. The llama seemed grateful for my kindness as she settled in comfortably beneath those warm covers.\n \n \"Goodnight,\" I whispered again before closing the door softly behind me.\n \n It wasn't easy falling asleep knowing that there might be dangers lurking outside... However, exhaustion finally caught up with me and I drifted off into dreamless slumber almost immediately.\n</details>\n\n!image/png\nSIAYN-v5" ]
null
fastai
# Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
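For a quick start while the card is being filled out, here is a minimal, hedged sketch of loading this repo through the `huggingface_hub` fastai integration; since the card does not yet document the task, the image-path input passed to `predict` is only an assumption.

```python
# Minimal sketch: pull the exported fastai Learner from this repo.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("allopeap/one-piece")

# The expected input type depends on how the Learner was trained;
# an image path is assumed here because the card leaves the task undocumented.
prediction = learner.predict("sample.jpg")
print(prediction)
```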
{"tags": ["fastai"]}
allopeap/one-piece
null
[ "fastai", "region:us", "has_space" ]
null
2024-04-27T15:46:43+00:00
[]
[]
TAGS #fastai #region-us #has_space
# Amazing! Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the documentation here)! 2. Create a demo in Gradio or Streamlit using Spaces (documentation here). 3. Join the fastai community on the Fastai Discord! Greetings fellow fastlearner ! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
[ "# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!", "# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---", "# Model card", "## Model description\nMore information needed", "## Intended uses & limitations\nMore information needed", "## Training and evaluation data\nMore information needed" ]
[ "TAGS\n#fastai #region-us #has_space \n", "# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!", "# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---", "# Model card", "## Model description\nMore information needed", "## Intended uses & limitations\nMore information needed", "## Training and evaluation data\nMore information needed" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
JacobAndersson/slimed-mixtral-1
null
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T15:48:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mixtral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
MohammadKarami/hard-electra
null
[ "transformers", "safetensors", "electra", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T15:49:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #electra #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #electra #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="SKHIA2024/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc.) env = gym.make(model["env_id"]) ```
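Continuing from the snippet above, here is a hedged sketch of a greedy rollout; it assumes the pickled dict stores the table under a `"qtable"` key (as in the Deep Reinforcement Learning Course notebooks) and uses the `gymnasium` step API:

```python
# Hedged sketch: greedy rollout with the Q-table loaded above.
# Assumes model["qtable"] holds the table, as in the course notebooks.
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```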
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
SKHIA2024/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-27T15:49:52+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
null
transformers
# MoMonir/Llama-3-8B-Web-GGUF This model was converted to GGUF format from [`McGill-NLP/Llama-3-8B-Web`](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) for more details on the model. <!-- README_GGUF.md-about-gguf start --> ### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description) GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo MoMonir/Llama-3-8B-Web-GGUF --model llama-3-8b-web.Q5_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo MoMonir/Llama-3-8B-Web-GGUF --model llama-3-8b-web.Q5_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-web.Q5_K_M.gguf -n 128 ```
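Since `llama-cpp-python` is listed above among the supported clients, the following hedged sketch shows one way to run the quantized file from Python; it assumes the GGUF file named in the CLI example has already been downloaded locally:

```python
# Hedged sketch: run the downloaded GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="llama-3-8b-web.Q5_K_M.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```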
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["agents", "agent", "llm", "llama", "llama-cpp", "gguf-my-repo"], "datasets": ["McGill-NLP/WebLINX"]}
MoMonir/Llama-3-8B-Web-GGUF
null
[ "transformers", "gguf", "agents", "agent", "llm", "llama", "llama-cpp", "gguf-my-repo", "en", "dataset:McGill-NLP/WebLINX", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-27T15:50:54+00:00
[]
[ "en" ]
TAGS #transformers #gguf #agents #agent #llm #llama #llama-cpp #gguf-my-repo #en #dataset-McGill-NLP/WebLINX #license-llama3 #endpoints_compatible #region-us
# MoMonir/Llama-3-8B-Web-GGUF This model was converted to GGUF format from 'McGill-NLP/Llama-3-8B-Web' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ### About GGUF (TheBloke Description) GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL. Here is an incomplete list of clients and libraries that are known to support GGUF: * URL. The source project for GGUF. Offers a CLI and a server option. * text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection. * URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use. * ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
# MoMonir/Llama-3-8B-Web-GGUF This model was converted to GGUF format from 'McGill-NLP/Llama-3-8B-Web' using URL via URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ### About GGUF (TheBloke Description) GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL. Here is an incomplete list of clients and libraries that are known to support GGUF: * URL. The source project for GGUF. Offers a CLI and a server option. * text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * GPT4All, a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection. * URL, an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use. * ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
[ "TAGS\n#transformers #gguf #agents #agent #llm #llama #llama-cpp #gguf-my-repo #en #dataset-McGill-NLP/WebLINX #license-llama3 #endpoints_compatible #region-us \n", "# MoMonir/Llama-3-8B-Web-GGUF\nThis model was converted to GGUF format from 'McGill-NLP/Llama-3-8B-Web' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "### About GGUF (TheBloke Description)\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
reinforcement-learning
null
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Pixelcopter-PLE-v0", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "37.50 +/- 28.99", "name": "mean_reward", "verified": false}]}]}]}
moczard/Reinforce-Pixelcopter-PLE-v0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-27T15:51:00+00:00
[]
[]
TAGS #Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing Pixelcopter-PLE-v0 This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
text-generation
transformers
Made with https://github.com/neuralmagic/AutoFP8: ``` python quantize.py --model-id facebook/opt-125m --save-dir opt-125m-fp8-dynamic --activation-scheme dynamic ```
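As a hedged follow-up (not documented in the original card), FP8 checkpoints produced this way are typically consumed with vLLM; the `quantization="fp8"` flag is an assumption and its support varies by vLLM version:

```python
# Hedged sketch: serving the FP8 checkpoint with vLLM.
# The quantization flag is an assumption; support varies by vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(model="nm-testing/opt-125m-fp8-dynamic", quantization="fp8")
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```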
{"tags": ["fp8"]}
nm-testing/opt-125m-fp8-dynamic
null
[ "transformers", "safetensors", "opt", "text-generation", "fp8", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T15:51:29+00:00
[]
[]
TAGS #transformers #safetensors #opt #text-generation #fp8 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Made with URL
[]
[ "TAGS\n#transformers #safetensors #opt #text-generation #fp8 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="SKHIA2024/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc.) env = gym.make(model["env_id"]) ```
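As with the FrozenLake card above, a hedged evaluation sketch can follow the loading snippet; it again assumes the pickled dict stores the table under a `"qtable"` key, as in the course notebooks:

```python
# Hedged sketch: mean return over a few greedy episodes with the loaded Q-table.
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])
returns = []
for _ in range(10):
    state, _ = env.reset()
    done, total = False, 0.0
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy action
        state, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    returns.append(total)
print(f"mean return over 10 episodes: {np.mean(returns):.2f}")
env.close()
```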
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
SKHIA2024/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-27T15:53:46+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: DeepMount00/Mistral-Ita-7b model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /workspace/datasets/samantha-ita-sharegpt.jsonl type: sharegpt field: conversations - path: /workspace/datasets/psycology-dataset-gpt-ita.jsonl type: sharegpt field: conversations chat_template: chatml hub_model_id: Samantha-ita-v0.1 dataset_prepared_path: val_set_size: 0.05 output_dir: ./out sequence_len: 8192 sample_packing: true pad_to_sequence_len: true eval_sample_packing: false wandb_project: samantha-mistral7b wandb_entity: wandb_watch: wandb_name: Samantha-ita-v0.1 wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 2 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.000006 # 0.000006 OK better curve # 0.0005 OK train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "<|im_end|>" unk_token: "<unk>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> # Samantha-ita-v0.1 <img src="https://i.postimg.cc/YC6Tf65H/00005-2244133494.png" alt="cover" border="0" width="1024px"> This model is a fine-tuned version of [DeepMount00/Mistral-Ita-7b](https://huggingface.co/DeepMount00/Mistral-Ita-7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7069 ## Model description Samantha is a fine-tuned Italian version based on Eric Hartford's Samantha. For this, I utilized the pre-trained Mistral 7B version. The model performs excellently! Please take a look at the datasets used. ## Intended uses & limitations Sure, here's the corrected and improved version: Samantha is a proficient companion who understands and speaks Italian fluently. She has undergone training on various topics. In addition to the original Samantha dataset translated with GPT-4, I have also incorporated a psychology conversations dataset to further enrich Samantha's knowledge in the field of psychology." 
## Chat Template ``` <|im_start|>system YOUR PROMPT<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9261 | 0.01 | 1 | 1.8998 | | 0.8902 | 0.25 | 28 | 0.8267 | | 0.8422 | 0.5 | 56 | 0.7604 | | 0.8338 | 0.75 | 84 | 0.7299 | | 0.8397 | 1.0 | 112 | 0.7136 | | 0.6859 | 1.22 | 140 | 0.7131 | | 0.6707 | 1.47 | 168 | 0.7082 | | 0.7041 | 1.72 | 196 | 0.7069 | | 0.6936 | 1.97 | 224 | 0.7069 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.0 - Datasets 2.15.0 - Tokenizers 0.15.0
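As a usage illustration of the ChatML template above, a minimal generation sketch with `transformers` (assuming the tokenizer ships this chat template in its config; the system prompt below is a made-up example, not one from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WasamiKirua/Samantha-ita-mistral-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "Sei Samantha, una compagna empatica."},  # example prompt
    {"role": "user", "content": "Ciao Samantha, come stai oggi?"},
]
# Relies on the ChatML template being stored with the tokenizer; otherwise
# build the <|im_start|>/<|im_end|> prompt string manually as shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```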
{"language": ["it"], "license": "mit", "tags": ["axolotl", "generated_from_trainer", "psycology", "companion"], "datasets": ["WasamiKirua/samantha-ita", "WasamiKirua/psycology-dataset-ita"], "base_model": "DeepMount00/Mistral-Ita-7b", "model-index": [{"name": "Samantha-ita-v0.1", "results": []}]}
WasamiKirua/Samantha-ita-mistral-v0.1
null
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "axolotl", "generated_from_trainer", "psycology", "companion", "conversational", "it", "dataset:WasamiKirua/samantha-ita", "dataset:WasamiKirua/psycology-dataset-ita", "base_model:DeepMount00/Mistral-Ita-7b", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T15:54:39+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #safetensors #mistral #text-generation #axolotl #generated_from_trainer #psycology #companion #conversational #it #dataset-WasamiKirua/samantha-ita #dataset-WasamiKirua/psycology-dataset-ita #base_model-DeepMount00/Mistral-Ita-7b #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL alt="Built with Axolotl" width="200" height="32"/> See axolotl config axolotl version: '0.4.0' Samantha-ita-v0.1 ================= <img src="https://i.URL alt="cover" border="0" width="1024px"> This model is a fine-tuned version of DeepMount00/Mistral-Ita-7b on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.7069 Model description ----------------- Samantha is a fine-tuned Italian version based on Eric Hartford's Samantha. For this, I utilized the pre-trained Mistral 7B version. The model performs excellently! Please take a look at the datasets used. Intended uses & limitations --------------------------- Sure, here's the corrected and improved version: Samantha is a proficient companion who understands and speaks Italian fluently. She has undergone training on various topics. In addition to the original Samantha dataset translated with GPT-4, I have also incorporated a psychology conversations dataset to further enrich Samantha's knowledge in the field of psychology." Chat Template ------------- ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 6e-06 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 10 * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.40.0.dev0 * Pytorch 2.2.0 * Datasets 2.15.0 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.0\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #pytorch #safetensors #mistral #text-generation #axolotl #generated_from_trainer #psycology #companion #conversational #it #dataset-WasamiKirua/samantha-ita #dataset-WasamiKirua/psycology-dataset-ita #base_model-DeepMount00/Mistral-Ita-7b #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.0\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
null
transformers
# Uploaded model - **Developed by:** kyleishie - **License:** apache-2.0 - **Finetuned from model:** unsloth/tinyllama-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
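Since this repo ships a Q8_0 GGUF file, a minimal inference sketch with `llama-cpp-python` (the `filename` glob and the instruction-style prompt format are assumptions; the card does not document either):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# from_pretrained pulls the GGUF file from the Hub; "*Q8_0.gguf" is an
# assumed filename pattern, not confirmed by the card.
llm = Llama.from_pretrained(
    repo_id="kyleishie/tiny-llama-instruct-Q8_0",
    filename="*Q8_0.gguf",
    n_ctx=2048,
)
out = llm("### Instruction:\nSay hello in one sentence.\n\n### Response:\n", max_tokens=64)
print(out["choices"][0]["text"])
```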
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/tinyllama-bnb-4bit"}
kyleishie/tiny-llama-instruct-Q8_0
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T15:55:10+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: kyleishie - License: apache-2.0 - Finetuned from model: unsloth/tinyllama-bnb-4bit This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: kyleishie\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: kyleishie\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
Khalil22/Final_Project
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T15:55:50+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/bgo71fl
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T15:56:17+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/r3e6wyu
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T15:56:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/981e3pg
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T15:57:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_4iters_bs256_nodpo_only4w_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0001_4iters_bs256_nodpo_only4w_iter_1", "results": []}]}
ShenaoZhang/0.0001_4iters_bs256_nodpo_only4w_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:02:43+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0001_4iters_bs256_nodpo_only4w_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.0001_4iters_bs256_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0001_4iters_bs256_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "houssemmoslah/mistral_sql_gen6000_merged"}
houssemmoslah/mistral_schema_linking1
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:houssemmoslah/mistral_sql_gen6000_merged", "region:us" ]
null
2024-04-27T16:02:45+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-houssemmoslah/mistral_sql_gen6000_merged #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-houssemmoslah/mistral_sql_gen6000_merged #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Hariharan345/tinyllama-momxchat-v2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:03:22+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hrangel/Mistral_7B_qlora_CoT_FT
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:04:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
chillies/llama-3-8b-vn-v2
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:09:26+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Since Sakura-13B-LNovel-v0.9 has been removed and some people still need the HF model, I dug it up and uploaded it here. This is a backup of Sakura-13B-LNovel-v0.9; this version is no longer recommended for use, so please wait for the officially released model.
{"license": "cc-by-nc-sa-4.0"}
Kunger/Sakura-13B-LNovel-v0.9
null
[ "transformers", "pytorch", "qwen", "text-generation", "custom_code", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "region:us" ]
null
2024-04-27T16:09:44+00:00
[]
[]
TAGS #transformers #pytorch #qwen #text-generation #custom_code #license-cc-by-nc-sa-4.0 #autotrain_compatible #region-us
Since Sakura-13B-LNovel-v0.9 has been removed and some people still need the HF model, I dug it up and uploaded it here. This is a backup of Sakura-13B-LNovel-v0.9; this version is no longer recommended for use, so please wait for the officially released model.
[]
[ "TAGS\n#transformers #pytorch #qwen #text-generation #custom_code #license-cc-by-nc-sa-4.0 #autotrain_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
karan842/gemma-code-instruct-finetune-test
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:10:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# Relevant Links

https://gist.github.com/shagunsodhani/9ae6d2364c278c97b1b2f4ec53255c56
https://github.com/shagunsodhani/CNN-Sentence-Classifier
https://arxiv.org/pdf/1408.5882.pdf
https://cs224d.stanford.edu/lectures/CS224d-Lecture13.pdf
https://www.youtube.com/watch?v=wNBaNhvL4pg

Literally free code LOL:
https://github.com/Shawn1993/cnn-text-classification-pytorch
https://github.com/cezannec/CNN_Text_Classification
https://github.com/ShindongLee/Sentence_Classifier_CNN
https://github.com/rafaelgreca/conv-sent-classification
https://github.com/Johnxjp/cnn_text_classification

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from torch.autograd import Variable
from torchtext.datasets import AG_NEWS
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
```

    /Users/sabkx/venv-metal/lib/python3.11/site-packages/torchtext/datasets/__init__.py:4: UserWarning:
    /!\ IMPORTANT WARNING ABOUT TORCHTEXT STATUS /!\
    Torchtext is deprecated and the last released version will be 0.18 (this one).
    You can silence this warning by calling the following at the beginnign of your scripts:
    `import torchtext; torchtext.disable_torchtext_deprecation_warning()`
      warnings.warn(torchtext._TORCHTEXT_DEPRECATION_MSG)

    (The same UserWarning is repeated verbatim for torchtext/data, torchtext/vocab, and torchtext/utils.)
```python
device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available() else "cpu"
)
print(f"Using {device} device")
```

    Using mps device

```python
tokenizer = get_tokenizer("basic_english")
train_iter = AG_NEWS(split="train")

def yield_tokens(data_iter):
    for _, text in data_iter:
        yield tokenizer(text)

vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])
```

```python
text_pipeline = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1  # AG_NEWS labels are 1-4; shift to 0-3
```

```python
text_pipeline('here is the an example')
```

    [475, 21, 2, 30, 5297]

```python
from torch.utils.data import DataLoader

device = "cpu"  # override the device above; tensors stay on CPU for this run

def collate_batch(batch):
    label_list, text_list = [], []
    for _label, _text in batch:
        label_list.append(label_pipeline(_label))
        processed_text = torch.tensor(text_pipeline(_text))
        # pad (or, via negative padding, trim) every example to a fixed length of 50 tokens
        processed_text = F.pad(processed_text, (0, 50 - processed_text.size(0)))
        text_list.append(processed_text)
    return torch.tensor(label_list).to(device), torch.stack(text_list).to(device)

train_iter = AG_NEWS(split="train")
dataloader = DataLoader(
    train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch
)
```

From https://github.com/rafaelgreca/conv-sent-classification/blob/main/model.py

```python
class CNN_Text(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super(CNN_Text, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        # three parallel convolutions over windows of 3, 4, and 5 tokens, 100 filters each
        self.conv1 = nn.Conv2d(1, 100, (3, embedding_dim), padding=0)  # original was 1, 100
        self.conv2 = nn.Conv2d(1, 100, (4, embedding_dim), padding=0)
        self.conv3 = nn.Conv2d(1, 100, (5, embedding_dim), padding=0)
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(300, 4)  # original was 300, 1

    def forward(self, text):
        # print("Input size:", text.shape)
        embedded = self.embedding(text)
        embedded = embedded.unsqueeze(1)  # add a channel dim for Conv2d: (batch, 1, seq, emb)
        # print("After embedding:", embedded.shape)
        # print(embedded)
        output_conv_1 = F.relu(self.conv1(embedded))
        # print("After conv1:", output_conv_1.shape)
        output_conv_1 = output_conv_1.squeeze(3)
        # print("After conv1+squeeze:", output_conv_1.shape)
        output_conv_2 = F.relu(self.conv2(embedded)).squeeze(3)
        # print("After conv2:", output_conv_2.shape)
        output_conv_3 = F.relu(self.conv3(embedded)).squeeze(3)
        # print("After conv3:", output_conv_3.shape)
        # max-over-time pooling: keep the strongest activation per filter
        output_maxpool_1 = (F.max_pool1d(output_conv_1, output_conv_1.size(2)).squeeze(2))
        output_maxpool_2 = (F.max_pool1d(output_conv_2, output_conv_2.size(2)).squeeze(2))
        output_maxpool_3 = (F.max_pool1d(output_conv_3, output_conv_3.size(2)).squeeze(2))
        output_maxpooled = torch.cat(
            (output_maxpool_1, output_maxpool_2, output_maxpool_3), dim=1
        )
        # output_maxpooled = output_maxpool_1
        drop_outed = self.dropout(output_maxpooled)
        final = self.fc(drop_outed)
        # print("Final:", final.shape)
        # print(torch.argmax(F.softmax(final), dim=1))
        # NOTE: CrossEntropyLoss (used below) already applies log-softmax internally,
        # so returning softmax probabilities here effectively double-softmaxes.
        return F.softmax(final)
```

```python
train_iter = AG_NEWS(split="train")
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
embedding_dim = 64
```

```python
model = CNN_Text(vocab_size, embedding_dim)
```
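Not part of the original notebook, but as a quick sanity check of the architecture: each conv/max-pool branch yields 100 features, the three branches concatenate to 300, and the final linear layer maps to the 4 AG_NEWS classes.

```python
# Sanity check (my addition): push one padded batch of fake token ids through the model.
dummy = torch.randint(0, vocab_size, (8, 50))  # batch of 8, fixed length 50
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # torch.Size([8, 4]) -- one row of class probabilities per example
```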
```python
def train(dataloader):
    model.train()
    total_acc, total_loss, total_count = 0, 0, 0
    log_interval = 250
    start_time = time.time()

    for idx, (label, text) in enumerate(dataloader):
        optimizer.zero_grad()
        predicted_label = model(text)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 3)
        optimizer.step()
        total_acc += (predicted_label.argmax(1) == label).sum().item()
        total_loss += loss  # style note: loss.item() would avoid keeping autograd tensors alive
        total_count += label.size(0)
        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print(
                "| epoch {:3d} | {:5d}/{:5d} batches "
                "| accuracy {:8.3f} | average loss {:5.5f}".format(
                    epoch, idx, len(dataloader), total_acc / total_count, total_loss / total_count
                )
            )
            total_acc, total_loss, total_count = 0, 0, 0
            start_time = time.time()

def evaluate(dataloader):
    model.eval()
    total_acc, total_loss, total_count = 0, 0, 0
    with torch.no_grad():
        for idx, (label, text) in enumerate(dataloader):
            predicted_label = model(text)
            loss = criterion(predicted_label, label)
            total_acc += (predicted_label.argmax(1) == label).sum().item()
            total_loss += loss
            total_count += label.size(0)
    return total_acc / total_count, total_loss / total_count
```

```python
import time
from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset

# Hyperparameters
EPOCHS = 10  # epoch
LR = 1  # learning rate
BATCH_SIZE = 50  # batch size for training

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adadelta(model.parameters())
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.80)
split_train_, split_valid_ = random_split(
    train_dataset, [num_train, len(train_dataset) - num_train]
)

train_dataloader = DataLoader(
    split_train_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
valid_dataloader = DataLoader(
    split_valid_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
test_dataloader = DataLoader(
    test_dataset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)

for epoch in range(1, EPOCHS + 1):
    print("Learning rate: {:.8f}".format(optimizer.state_dict()['param_groups'][0]['lr']))
    epoch_start_time = time.time()
    train(train_dataloader)
    accu_val, loss_val = evaluate(valid_dataloader)
    scheduler.step()
    print("-" * 59)
    print(
        "| end of epoch {:3d} | time: {:5.2f}s | "
        "valid accuracy {:8.3f} | average loss {:5.5f}".format(
            epoch, time.time() - epoch_start_time, accu_val, loss_val
        )
    )
    print("-" * 59)
```
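The "Learning rate" lines in the log below are the Adadelta default (1.0) decayed by the `ExponentialLR` schedule wired up above. A tiny standalone check of that decay, using a throwaway linear layer just to have parameters to optimize:

```python
# Standalone check of the ExponentialLR decay used above (gamma=0.9).
opt = torch.optim.Adadelta(nn.Linear(2, 2).parameters())  # Adadelta's default lr is 1.0
sch = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)
for _ in range(3):
    print(opt.state_dict()['param_groups'][0]['lr'])  # prints 1.0, then 0.9, then ~0.81
    sch.step()
```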
    Learning rate: 1.00000000

    /var/folders/q_/r6lvdl1x67g3v84r9ppq71kh0000gn/T/ipykernel_93163/1563015420.py:48: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
      return F.softmax(final)

    | epoch   1 |   250/ 1920 batches | accuracy    0.374 | average loss 0.02650
    | epoch   1 |   500/ 1920 batches | accuracy    0.533 | average loss 0.02388
    | epoch   1 |   750/ 1920 batches | accuracy    0.589 | average loss 0.02284
    | epoch   1 |  1000/ 1920 batches | accuracy    0.611 | average loss 0.02240
    | epoch   1 |  1250/ 1920 batches | accuracy    0.630 | average loss 0.02205
    | epoch   1 |  1500/ 1920 batches | accuracy    0.652 | average loss 0.02165
    | epoch   1 |  1750/ 1920 batches | accuracy    0.661 | average loss 0.02147
    -----------------------------------------------------------
    | end of epoch   1 | time: 59.98s | valid accuracy    0.764 | average loss 0.01950
    -----------------------------------------------------------
    Learning rate: 0.90000000
    | epoch   2 |   250/ 1920 batches | accuracy    0.687 | average loss 0.02098
    | epoch   2 |   500/ 1920 batches | accuracy    0.687 | average loss 0.02096

    ---------------------------------------------------------------------------
    KeyboardInterrupt                         Traceback (most recent call last)

    Cell In[51], line 35
         33 print("Learning rate: {:.8f}".format(optimizer.state_dict()['param_groups'][0]['lr']))
         34 epoch_start_time = time.time()
    ---> 35 train(train_dataloader)
         36 accu_val, loss_val = evaluate(valid_dataloader)
         37 scheduler.step()

    Cell In[50], line 11, in train(dataloader)
          9 predicted_label = model(text)
         10 loss = criterion(predicted_label, label)
    ---> 11 loss.backward()
         12 torch.nn.utils.clip_grad_norm_(model.parameters(), 3)
         13 optimizer.step()

    File ~/venv-metal/lib/python3.11/site-packages/torch/_tensor.py:534, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
        524 if has_torch_function_unary(self):
        525     return handle_torch_function(
        526         Tensor.backward,
        527         (self,),
        (...)
        532         inputs=inputs,
        533     )
    --> 534 torch.autograd.backward(
        535     self, gradient, retain_graph, create_graph, inputs=inputs
        536 )

    File ~/venv-metal/lib/python3.11/site-packages/torch/autograd/__init__.py:267, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
        262     retain_graph = create_graph
        264 # The reason we repeat the same comment below is that
        265 # some Python versions print out the first line of a multi-line function
        266 # calls in the traceback and some print out the last line
    --> 267 _engine_run_backward(
        268     tensors,
        269     grad_tensors_,
        270     retain_graph,
        271     create_graph,
        272     inputs,
        273     allow_unreachable=True,
        274     accumulate_grad=True,
        275 )

    File ~/venv-metal/lib/python3.11/site-packages/torch/autograd/graph.py:767, in _engine_run_backward(t_outputs, *args, **kwargs)
        765     unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
        766 try:
    --> 767     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
        768         t_outputs, *args, **kwargs
        769     )  # Calls into the C++ engine to run the backward pass
        770 finally:
        771     if attach_logging_hooks:

    KeyboardInterrupt: 

(Interrupted manually: the trend was already clear, with validation accuracy at 0.764 after one epoch.)
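A note on that UserWarning before moving on: `nn.CrossEntropyLoss` already applies log-softmax internally, so the usual pattern is to return raw logits from `forward` and only take a softmax at inference time. A sketch of the change (my suggestion, not what the recorded run used); `CNN_Text_Logits` is a hypothetical name:

```python
# Hypothetical variant: identical to CNN_Text but returning raw logits, which is
# the standard pairing with nn.CrossEntropyLoss and silences the implicit-dim warning.
class CNN_Text_Logits(CNN_Text):
    def forward(self, text):
        embedded = self.embedding(text).unsqueeze(1)
        convs = [F.relu(c(embedded)).squeeze(3) for c in (self.conv1, self.conv2, self.conv3)]
        pooled = [F.max_pool1d(c, c.size(2)).squeeze(2) for c in convs]
        return self.fc(self.dropout(torch.cat(pooled, dim=1)))  # raw logits

# Probabilities only when actually needed, e.g. at inference:
# probs = F.softmax(CNN_Text_Logits(vocab_size, embedding_dim)(batch), dim=1)
```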
OK, now time for part 2 of the task: using more advanced/newer models. LSTM/Transformers, I guess.

https://arxiv.org/pdf/1802.00889

https://arxiv.org/pdf/1909.04054

## CNN-LSTM

https://www.kaggle.com/code/mehmetlaudatekman/lstm-text-classification-pytorch

https://stackoverflow.com/questions/47952930/how-can-i-use-lstm-in-pytorch-for-classification

http://colah.github.io/posts/2015-08-Understanding-LSTMs/

https://github.com/chrisvdweth/ml-toolkit/blob/master/pytorch/notebooks/minimal-example-lstm-input.ipynb

https://www.youtube.com/watch?v=jGst43P-TJA

https://stackoverflow.com/questions/60196755/why-is-very-simple-pytorch-lstm-model-not-learning AHHH

Hmm, but it can't be a completely distinct model: I need to use the idea from the original one and build upon it. Consider a hybrid CNN-LSTM:

https://iopscience.iop.org/article/10.1088/1742-6596/1646/1/012110/pdf

https://www.mdpi.com/2076-3417/10/17/5841

https://sci-hub.st/https://ieeexplore.ieee.org/document/8577620

This is also a sussy option: https://arxiv.org/pdf/1411.4389

https://towardsdatascience.com/pytorch-lstms-for-time-series-data-cd16190929d7 : torch's LSTM API is genuinely confusing

```python
class CNN_LSTM_Text(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super(CNN_LSTM_Text, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        # self.lstm = nn.LSTM(embedding_dim, 32, num_layers=1, batch_first=True, bidirectional=False)  # 32 is hidden dim aka output dim
        self.conv1 = nn.Conv2d(1, 100, (3, embedding_dim), padding=0)  # original was 1, 100
        self.conv2 = nn.Conv2d(1, 100, (4, embedding_dim), padding=0)
        self.conv3 = nn.Conv2d(1, 100, (5, embedding_dim), padding=0)
        self.lstm = nn.LSTM(100, 100, num_layers=1, batch_first=True)
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(100, 4)  # original was 300, 1

    def forward(self, text):
        # print("Input size:", text.shape)
        embedded = self.embedding(text)
        embedded = embedded.unsqueeze(1)
        output_conv_1 = F.relu(self.conv1(embedded))
        # print("After conv1:", output_conv_1.shape)
        output_conv_1 = output_conv_1.squeeze(3)
        # print("After conv1+squeeze:", output_conv_1.shape)
        output_conv_2 = F.relu(self.conv2(embedded)).squeeze(3)
        # print("After conv2:", output_conv_2.shape)
        output_conv_3 = F.relu(self.conv3(embedded)).squeeze(3)
        # print("After conv3:", output_conv_3.shape)
        # output_maxpool_1 = F.max_pool1d(output_conv_1, output_conv_1.size(2)).squeeze(2)
        # output_maxpool_2 = F.max_pool1d(output_conv_2, output_conv_2.size(2)).squeeze(2)
        # output_maxpool_3 = F.max_pool1d(output_conv_3, output_conv_3.size(2)).squeeze(2)
        # output_maxpooled = torch.cat(
        #     (output_maxpool_1, output_maxpool_2, output_maxpool_3), dim=1
        # )
        # each pooled feature map becomes one "time step" of shape (batch, 1, 100)
        output_maxpool_1 = F.max_pool1d(output_conv_1, output_conv_1.size(2)).squeeze(2).unsqueeze(1)
        output_maxpool_2 = F.max_pool1d(output_conv_2, output_conv_2.size(2)).squeeze(2).unsqueeze(1)
        output_maxpool_3 = F.max_pool1d(output_conv_3, output_conv_3.size(2)).squeeze(2).unsqueeze(1)
        # stacked into a length-3 sequence, (batch, 3, 100), for the LSTM
        output_maxpooled = torch.cat(
            (output_maxpool_1, output_maxpool_2, output_maxpool_3), dim=1
        )
        # print("After maxpool:", output_maxpooled.shape)
        _, (h_n, _) = self.lstm(output_maxpooled)
        lstm_output = h_n[-1]  # final hidden state of the last layer: (batch, 100)
        drop_outed = self.dropout(lstm_output)
        final = self.fc(drop_outed)
        # print("Final:", final.shape)
        # assert(False)
        return F.softmax(final)  # same caveat as above: raw logits would pair better with CrossEntropyLoss
```

```python
model_lstm = CNN_LSTM_Text(vocab_size, embedding_dim)
```
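A quick shape check on that plumbing (hypothetical dummy batch again, not part of the original run): the three pooled vectors are stacked into a length-3 sequence of 100-dim features, and the LSTM's final hidden state feeds the classifier head.

```python
# Hypothetical smoke test for the hybrid, same padded-batch shape as collate_batch produces.
dummy = torch.randint(0, vocab_size, (8, 50))
print(model_lstm(dummy).shape)  # expected: torch.Size([8, 4])
```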
```python
# Hyperparameters
EPOCHS = 5  # epoch
LR = 0.1  # learning rate
BATCH_SIZE = 50  # batch size for training

def train_lstm(dataloader):
    model_lstm.train()
    total_acc, total_loss, total_count = 0, 0, 0
    log_interval = 250
    start_time = time.time()

    for idx, (label, text) in enumerate(dataloader):
        optimizer.zero_grad()
        predicted_label = model_lstm.forward(text)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model_lstm.parameters(), 3)
        optimizer.step()
        total_acc += (predicted_label.argmax(1) == label).sum().item()
        total_loss += loss
        total_count += label.size(0)
        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print(
                "| epoch {:3d} | {:5d}/{:5d} batches "
                "| accuracy {:8.3f} | average loss {:5.5f}".format(
                    epoch, idx, len(dataloader), total_acc / total_count, total_loss / total_count
                )
            )
            total_acc, total_loss, total_count = 0, 0, 0
            start_time = time.time()

def evaluate_lstm(dataloader):
    model_lstm.eval()
    total_acc, total_loss, total_count = 0, 0, 0
    with torch.no_grad():
        for idx, (label, text) in enumerate(dataloader):
            predicted_label = model_lstm(text)
            loss = criterion(predicted_label, label)
            total_acc += (predicted_label.argmax(1) == label).sum().item()
            total_loss += loss
            total_count += label.size(0)
    return total_acc / total_count, total_loss / total_count
```

```python
# Hyperparameters
EPOCHS = 5  # epoch
LR = 0.01  # learning rate
BATCH_SIZE = 50  # batch size for training

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model_lstm.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.80)
split_train_, split_valid_ = random_split(
    train_dataset, [num_train, len(train_dataset) - num_train]
)

train_dataloader = DataLoader(
    split_train_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
valid_dataloader = DataLoader(
    split_valid_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
test_dataloader = DataLoader(
    test_dataset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)

for epoch in range(1, EPOCHS + 1):
    print("Learning rate: {:.8f}".format(optimizer.state_dict()['param_groups'][0]['lr']))
    epoch_start_time = time.time()
    train_lstm(train_dataloader)
    accu_val, loss_val = evaluate_lstm(valid_dataloader)
    scheduler.step()
    print("-" * 59)
    print(
        "| end of epoch {:3d} | time: {:5.2f}s | "
        "valid accuracy {:8.3f} | average loss {:5.5f}".format(
            epoch, time.time() - epoch_start_time, accu_val, loss_val
        )
    )
    print("-" * 59)
```
    Learning rate: 0.01000000

    /var/folders/q_/r6lvdl1x67g3v84r9ppq71kh0000gn/T/ipykernel_93163/163902715.py:59: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
      return F.softmax(final)

    | epoch   1 |   250/ 1920 batches | accuracy    0.252 | average loss 0.02791
    | epoch   1 |   500/ 1920 batches | accuracy    0.256 | average loss 0.02787
    | epoch   1 |   750/ 1920 batches | accuracy    0.252 | average loss 0.02787
    | epoch   1 |  1000/ 1920 batches | accuracy    0.251 | average loss 0.02785
    | epoch   1 |  1250/ 1920 batches | accuracy    0.254 | average loss 0.02789
    | epoch   1 |  1500/ 1920 batches | accuracy    0.247 | average loss 0.02792
    | epoch   1 |  1750/ 1920 batches | accuracy    0.249 | average loss 0.02789
    -----------------------------------------------------------
    | end of epoch   1 | time: 54.10s | valid accuracy    0.248 | average loss 0.02779
    -----------------------------------------------------------
    Learning rate: 0.00900000
    | epoch   2 |   250/ 1920 batches | accuracy    0.252 | average loss 0.02784
    | epoch   2 |   500/ 1920 batches | accuracy    0.255 | average loss 0.02786
    | epoch   2 |   750/ 1920 batches | accuracy    0.251 | average loss 0.02785
    | epoch   2 |  1000/ 1920 batches | accuracy    0.247 | average loss 0.02785
    | epoch   2 |  1250/ 1920 batches | accuracy    0.252 | average loss 0.02785

    ---------------------------------------------------------------------------
    KeyboardInterrupt                         Traceback (most recent call last)

    Cell In[91], line 31
         29 print("Learning rate: {:.8f}".format(optimizer.state_dict()['param_groups'][0]['lr']))
         30 epoch_start_time = time.time()
    ---> 31 train_lstm(train_dataloader)
         32 accu_val, loss_val = evaluate_lstm(valid_dataloader)
         33 scheduler.step()

    Cell In[90], line 15, in train_lstm(dataloader)
         13 predicted_label = model_lstm.forward(text)
         14 loss = criterion(predicted_label, label)
    ---> 15 loss.backward()
         16 torch.nn.utils.clip_grad_norm_(model_lstm.parameters(), 3)
         17 optimizer.step()

    File ~/venv-metal/lib/python3.11/site-packages/torch/_tensor.py:534, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
        524 if has_torch_function_unary(self):
        525     return handle_torch_function(
        526         Tensor.backward,
        527         (self,),
        (...)
        532         inputs=inputs,
        533     )
    --> 534 torch.autograd.backward(
        535     self, gradient, retain_graph, create_graph, inputs=inputs
        536 )

    File ~/venv-metal/lib/python3.11/site-packages/torch/autograd/__init__.py:267, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
        262     retain_graph = create_graph
        264 # The reason we repeat the same comment below is that
        265 # some Python versions print out the first line of a multi-line function
        266 # calls in the traceback and some print out the last line
    --> 267 _engine_run_backward(
        268     tensors,
        269     grad_tensors_,
        270     retain_graph,
        271     create_graph,
        272     inputs,
        273     allow_unreachable=True,
        274     accumulate_grad=True,
        275 )

    File ~/venv-metal/lib/python3.11/site-packages/torch/autograd/graph.py:767, in _engine_run_backward(t_outputs, *args, **kwargs)
        765     unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
        766 try:
    --> 767     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
        768         t_outputs, *args, **kwargs
        769     )  # Calls into the C++ engine to run the backward pass
        770 finally:
        771     if attach_logging_hooks:

    KeyboardInterrupt: 

Accuracy is pinned near 0.25, which is chance level on the four AG_NEWS classes, so the hybrid is not learning anything; hence the interrupt.
## Pure LSTM

Screw CNN-LSTM, lemme see if pure LSTM trains

https://towardsdatascience.com/multiclass-text-classification-using-lstm-in-pytorch-eac56baed8df

https://stackoverflow.com/questions/47952930/how-can-i-use-lstm-in-pytorch-for-classification

```python
class LSTM(torch.nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, label_size):
        super(LSTM, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=1, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_dim, label_size)

    def forward(self, x):
        batch_size = x.shape[0]
        # NOTE: .to("mps") here only moves the embedding *output*; the lookup itself
        # still requires x to be on the same device as the embedding weights,
        # which is what blows up in the RuntimeError below.
        embedded = self.embedding(x).to("mps")
        output, (ht, _) = self.lstm(embedded)
        return self.fc(ht[-1])  # ht[-1] is the backward direction's final hidden state: (batch, hidden_dim)
```

```python
model_real_lstm = LSTM(vocab_size, embedding_dim, 64, 4).to("mps")
```

```python
# Hyperparameters
EPOCHS = 5  # epoch
LR = 0.1  # learning rate
BATCH_SIZE = 50  # batch size for training

def train_real_lstm(dataloader):
    model_real_lstm.train()
    total_acc, total_loss, total_count = 0, 0, 0
    log_interval = 250
    start_time = time.time()

    for idx, (label, text) in enumerate(dataloader):
        optimizer.zero_grad()
        predicted_label = model_real_lstm.forward(text)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model_real_lstm.parameters(), 3)
        optimizer.step()
        total_acc += (predicted_label.argmax(1) == label).sum().item()
        total_loss += loss
        total_count += label.size(0)
        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print(
                "| epoch {:3d} | {:5d}/{:5d} batches "
                "| accuracy {:8.3f} | average loss {:5.5f}".format(
                    epoch, idx, len(dataloader), total_acc / total_count, total_loss / total_count
                )
            )
            total_acc, total_loss, total_count = 0, 0, 0
            start_time = time.time()

def evaluate_real_lstm(dataloader):
    model_real_lstm.eval()
    total_acc, total_loss, total_count = 0, 0, 0
    with torch.no_grad():
        for idx, (label, text) in enumerate(dataloader):
            predicted_label = model_real_lstm(text)
            loss = criterion(predicted_label, label)
            total_acc += (predicted_label.argmax(1) == label).sum().item()
            total_loss += loss
            total_count += label.size(0)
    return total_acc / total_count, total_loss / total_count
```
```python
import time
from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset

# Hyperparameters
EPOCHS = 30  # epoch
LR = 0.01  # learning rate
BATCH_SIZE = 50  # batch size for training

criterion = torch.nn.CrossEntropyLoss().to(device)  # note: `device` was re-bound to "cpu" earlier
optimizer = torch.optim.Adam(model_real_lstm.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.80)
split_train_, split_valid_ = random_split(
    train_dataset, [num_train, len(train_dataset) - num_train]
)

train_dataloader = DataLoader(
    split_train_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
valid_dataloader = DataLoader(
    split_valid_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
test_dataloader = DataLoader(
    test_dataset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)

for epoch in range(1, EPOCHS + 1):
    print("Learning rate: {:.8f}".format(optimizer.state_dict()['param_groups'][0]['lr']))
    epoch_start_time = time.time()
    train_real_lstm(train_dataloader)
    accu_val, loss_val = evaluate_real_lstm(valid_dataloader)
    scheduler.step()
    print("-" * 59)
    print(
        "| end of epoch {:3d} | time: {:5.2f}s | "
        "valid accuracy {:8.3f} | average loss {:5.5f}".format(
            epoch, time.time() - epoch_start_time, accu_val, loss_val
        )
    )
    print("-" * 59)
```

    Learning rate: 0.01000000
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)

    Cell In[41], line 34
         32 print("Learning rate: {:.8f}".format(optimizer.state_dict()['param_groups'][0]['lr']))
         33 epoch_start_time = time.time()
    ---> 34 train_real_lstm(train_dataloader)
         35 accu_val, loss_val = evaluate_real_lstm(valid_dataloader)
         36 scheduler.step()

    Cell In[40], line 13, in train_real_lstm(dataloader)
         11 for idx, (label, text) in enumerate(dataloader):
         12     optimizer.zero_grad()
    ---> 13     predicted_label = model_real_lstm.forward(text)
         14     loss = criterion(predicted_label, label)
         15     loss.backward()

    Cell In[38], line 10, in LSTM.forward(self, x)
          8 def forward(self, x):
          9     batch_size = x.shape[0]
    ---> 10     embedded = self.embedding(x).to("mps")
         11     output, (ht, _) = self.lstm(embedded)
         12     return self.fc(ht[-1])

    File ~/venv-metal/lib/python3.11/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
       1530     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
       1531 else:
    -> 1532     return self._call_impl(*args, **kwargs)

    File ~/venv-metal/lib/python3.11/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
       1536 # If we don't have any hooks, we want to skip the rest of the logic in
       1537 # this function, and just call forward.
       1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
       1539         or _global_backward_pre_hooks or _global_backward_hooks
       1540         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1541     return forward_call(*args, **kwargs)
       1543 try:
       1544     result = None

    File ~/venv-metal/lib/python3.11/site-packages/torch/nn/modules/sparse.py:163, in Embedding.forward(self, input)
        162 def forward(self, input: Tensor) -> Tensor:
    --> 163     return F.embedding(
        164         input, self.weight, self.padding_idx, self.max_norm,
        165         self.norm_type, self.scale_grad_by_freq, self.sparse)

    File ~/venv-metal/lib/python3.11/site-packages/torch/nn/functional.py:2266, in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
       2260     # Note [embedding_renorm set_grad_enabled]
       2261     # XXX: equivalent to
       2262     # with torch.no_grad():
       2263     #   torch.embedding_renorm_
       2264     # remove once script supports set_grad_enabled
       2265     _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
    -> 2266 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

    RuntimeError: Placeholder storage has not been allocated on MPS device!

So the crash is a device mismatch: `collate_batch` returns CPU tensors while the model's weights live on MPS, and the embedding lookup needs both on the same device.
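A minimal fix (my sketch, not part of the original run) is to move each batch onto the model's device inside the loop, since `F.embedding` requires its input on the same device as the weights:

```python
# Sketch of the fix: move each batch to the model's device before the forward pass.
mps = torch.device("mps")

for idx, (label, text) in enumerate(train_dataloader):
    label, text = label.to(mps), text.to(mps)
    optimizer.zero_grad()
    predicted_label = model_real_lstm(text)  # embedding lookup now happens on MPS
    loss = criterion(predicted_label, label)
    loss.backward()
    optimizer.step()
    break  # one step is enough to show the batches now reach the model cleanly
```

(Alternatively, `collate_batch` could return tensors already placed on the target device instead of hard-coding `"cpu"`.)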
{}
bobronson/cnn-text
null
[ "arxiv:1408.5882", "arxiv:1802.00889", "arxiv:1909.04054", "arxiv:1411.4389", "region:us" ]
null
2024-04-27T16:10:12+00:00
[ "1408.5882", "1802.00889", "1909.04054", "1411.4389" ]
[]
TAGS #arxiv-1408.5882 #arxiv-1802.00889 #arxiv-1909.04054 #arxiv-1411.4389 #region-us
text-classification
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
EinsZwo/nlid_ONLY_supertagging-424_00
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:10:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
diffusers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "diffusers"}
gtsru/sn17-vin-011
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-04-27T16:10:56+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import datasets
from torchvision import transforms
from torch.utils.data.sampler import SubsetRandomSampler
from tqdm import tqdm

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
```

    cuda

```python
def data_loader(data_dir, batch_size, random_seed=42, valid_size=0.1, shuffle=True, test=False):
    normalize = transforms.Normalize(
        mean=[0.4914, 0.4822, 0.4465],
        std=[0.2023, 0.1994, 0.2010],
    )

    # Resize CIFAR-10 up to 224x224 so the ImageNet-style ResNet stem applies
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        normalize,
    ])

    if test:
        dataset = datasets.CIFAR10(
            root=data_dir, train=False,
            download=True, transform=transform,
        )
        data_loader = torch.utils.data.DataLoader(
            dataset, batch_size=batch_size, shuffle=shuffle
        )
        return data_loader

    train_dataset = datasets.CIFAR10(
        root=data_dir, train=True,
        download=True, transform=transform,
    )
    valid_dataset = datasets.CIFAR10(
        root=data_dir, train=True,
        download=True, transform=transform,
    )

    num_train = len(train_dataset)
    indices = list(range(num_train))
    split = int(np.floor(valid_size * num_train))

    if shuffle:
        np.random.seed(random_seed)  # use the parameter instead of a hardcoded 42
        np.random.shuffle(indices)

    train_idx, valid_idx = indices[split:], indices[:split]
    train_sampler = SubsetRandomSampler(train_idx)
    valid_sampler = SubsetRandomSampler(valid_idx)

    train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=batch_size, sampler=train_sampler)
    valid_loader = torch.utils.data.DataLoader(
        valid_dataset, batch_size=batch_size, sampler=valid_sampler)

    return (train_loader, valid_loader)


train_loader, valid_loader = data_loader(data_dir='./data', batch_size=64)
test_loader = data_loader(data_dir='./data', batch_size=64, test=True)
```

    Files already downloaded and verified
    Files already downloaded and verified
    Files already downloaded and verified

```python
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU())
        self.conv2 = nn.Sequential(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_channels))
        self.downsample = downsample
        self.relu = nn.ReLU()
        self.out_channels = out_channels

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.conv2(out)
        if self.downsample:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
```

```python
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=10):
        super(ResNet, self).__init__()
        self.inplanes = 64
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(64),
            nn.ReLU())
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer0 = self._make_layer(block, 64, layers[0], stride=1)
        self.layer1 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer2 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer3 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512, num_classes)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        # A 1x1 projection is needed whenever the spatial size or channel count changes
        if stride != 1 or self.inplanes != planes:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes, kernel_size=1, stride=stride),
                nn.BatchNorm2d(planes),
            )
        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool(x)
        x = self.layer0(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
```

```python
num_classes = 10
num_epochs = 10  # matches the logged run below (the original cell said 5)
learning_rate = 0.01

model = ResNet(ResidualBlock, [2, 2, 2, 2]).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=0.001, momentum=0.9)
total_step = len(train_loader)
```

```python
import gc
total_step = len(train_loader)
from tqdm import tqdm

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(tqdm(train_loader)):
        # Move tensors to the configured device
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

    # Validation
    with torch.no_grad():
        correct = 0
        total = 0
        for images, labels in valid_loader:
            images = images.to(device)
            labels = labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
            del images, labels, outputs

        print('Accuracy of the network on the {} validation images: {} %'.format(5000, 100 * correct / total))
```

    100%|██████████| 176/176 [01:15<00:00, 2.35it/s]
    Epoch [1/10], Loss: 1.2169
    Accuracy of the network on the 5000 validation images: 58.28 %
    100%|██████████| 176/176 [01:16<00:00, 2.29it/s]
    Epoch [2/10], Loss: 0.8962
    Accuracy of the network on the 5000 validation images: 70.36 %
    100%|██████████| 176/176 [01:16<00:00, 2.30it/s]
    Epoch [3/10], Loss: 0.6691
    Accuracy of the network on the 5000 validation images: 75.86 %
    100%|██████████| 176/176 [01:16<00:00, 2.29it/s]
    Epoch [4/10], Loss: 0.6426
    Accuracy of the network on the 5000 validation images: 79.24 %
    100%|██████████| 176/176 [01:16<00:00, 2.29it/s]
    Epoch [5/10], Loss: 0.2891
    Accuracy of the network on the 5000 validation images: 80.4 %
    100%|██████████| 176/176 [01:16<00:00, 2.29it/s]
    Epoch [6/10], Loss: 0.4245
    Accuracy of the network on the 5000 validation images: 81.24 %
    100%|██████████| 176/176 [01:16<00:00, 2.29it/s]
    Epoch [7/10], Loss: 0.2183
    Accuracy of the network on the 5000 validation images: 81.44 %
    100%|██████████| 176/176 [01:16<00:00, 2.29it/s]
    Epoch [8/10], Loss: 0.1172
    Accuracy of the network on the 5000 validation images: 81.06 %
    100%|██████████| 176/176 [01:16<00:00, 2.30it/s]
    Epoch [9/10], Loss: 0.1069
    Accuracy of the network on the 5000 validation images: 82.14 %
    100%|██████████| 176/176 [01:17<00:00, 2.29it/s]
    Epoch [10/10], Loss: 0.0555
    Accuracy of the network on the 5000 validation images: 83.12 %

```python
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
        del images, labels, outputs

    print('Accuracy of the network on the {} test images: {} %'.format(10000, 100 * correct / total))
```

```python
model = torch.hub.load("pytorch/vision", "resnet152", weights="IMAGENET1K_V2")
# Swap in a CIFAR-friendly 3x3 stem and a 10-way classification head
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, num_classes)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
```

    Using cache found in /root/.cache/torch/hub/pytorch_vision_main

```python
def train(model, trainloader, criterion, optimizer, device):
    train_loss = 0.0
    train_total = 0
    train_correct = 0

    # Switch to train mode
    model.train()

    for inputs, labels in trainloader:
        inputs, labels = inputs.to(device), labels.to(device)

        # Zero the parameter gradients
        optimizer.zero_grad()

        # Forward pass
        outputs = model(inputs)
        loss = criterion(outputs, labels)

        # Backward pass and optimize
        loss.backward()
        optimizer.step()

        # Update training loss
        train_loss += loss.item() * inputs.size(0)

        # Compute training accuracy
        _, predicted = torch.max(outputs, 1)
        train_total += labels.size(0)
        train_correct += (predicted == labels).sum().item()

    train_loss = train_loss / len(trainloader.dataset)
    train_accuracy = 100.0 * train_correct / train_total
    return model, train_loss, train_accuracy
```

```python
def test(model, testloader, criterion, device):
    test_loss = 0.0
    test_total = 0
    test_correct = 0

    # Switch to evaluation mode
    model.eval()

    with torch.no_grad():
        for inputs, labels in testloader:
            inputs, labels = inputs.to(device), labels.to(device)

            # Forward pass
            outputs = model(inputs)
            loss = criterion(outputs, labels)

            # Update test loss
            test_loss += loss.item() * inputs.size(0)

            # Compute test accuracy
            _, predicted = torch.max(outputs, 1)
            test_total += labels.size(0)
            test_correct += (predicted == labels).sum().item()

    # Compute average test loss and accuracy
    test_loss = test_loss / len(testloader.dataset)
    test_accuracy = 100.0 * test_correct / test_total
    return test_loss, test_accuracy
```

```python
def train_epochs(model, trainloader, testloader, criterion, optimizer, device, num_epochs, save_interval=5):
    train_losses = []
    train_accuracies = []
    test_losses = []
    test_accuracies = []

    for epoch in range(num_epochs):
        print(f'Epoch {epoch+1}/{num_epochs}')
        model, train_loss, train_accuracy = train(model, trainloader, criterion, optimizer, device)
        test_loss, test_accuracy = test(model, testloader, criterion, device)

        train_losses.append(train_loss)
        train_accuracies.append(train_accuracy)
        test_losses.append(test_loss)
        test_accuracies.append(test_accuracy)

        print(f'Train Loss: {train_loss:.4f} - Train Accuracy: {train_accuracy:.2f}%')
        print(f'Test Loss: {test_loss:.4f} - Test Accuracy: {test_accuracy:.2f}%')
        print()

    return model, train_losses, train_accuracies, test_losses, test_accuracies
```

```python
trainset, trainloader, testset, testloader, classes = load_dataset()

if train_model:
    num_epochs = 60
    save_interval = 5
    model, train_losses, train_accuracies, test_losses, test_accuracies = train_epochs(
        model, trainloader, testloader, criterion, optimizer, device, num_epochs, save_interval)
else:
    model.load_state_dict(torch.load('resnet50_cifar10_final_model_epochs_50.pth'))
    checkpoint = torch.load("resnet50_cifar10_variables.pth")
    epoch = checkpoint['epoch']
    train_losses = checkpoint['train_losses']
    train_accuracies = checkpoint['train_accuracies']
    test_losses = checkpoint['test_losses']
    test_accuracies = checkpoint['test_accuracies']
    classes = checkpoint['classes']
    model.to(device)
    model.eval()
```

    Epoch 1/10
    ----------
    100%|██████████| 704/704 [03:26<00:00, 3.41it/s]
    Train Loss: 1.9308 Acc: 0.4630
    100%|██████████| 79/79 [00:22<00:00, 3.52it/s]
    Val Loss: 0.1944 Acc: 0.0665
    Epoch 2/10
    ----------
     24%|██▎ | 166/704 [00:49<02:40, 3.35it/s]

    ---------------------------------------------------------------------------
    KeyboardInterrupt                         Traceback (most recent call last)
    <ipython-input-11-483fc1f8b5af> in <cell line: 51>()
         49 num_epochs = 10
         50 dataloaders = {'train': train_loader, 'val': valid_loader}
    ---> 51 trained_model = train_model(combined_model, dataloaders, criterion, optimizer, scheduler, num_epochs=num_epochs, device=device)

    <ipython-input-11-483fc1f8b5af> in train_model(model, dataloaders, criterion, optimizer, scheduler, num_epochs, device)
         32         optimizer.step()
         33
    ---> 34         running_loss += loss.item() * inputs.size(0)
         35         running_corrects += torch.sum(preds == labels.data)
         36         del inputs, labels, outputs

    KeyboardInterrupt:
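The final cell calls `load_dataset()` and a phase-based `train_model()` that are defined in cells not shown here; only the interrupted traceback survives. Below is a minimal sketch of what that helper plausibly looks like, reconstructed around the three accumulation lines the traceback does show. The phase handling, scheduler stepping, and print format are assumptions, not the notebook's actual code.

```python
import torch

def train_model(model, dataloaders, criterion, optimizer, scheduler, num_epochs=10, device='cuda'):
    # Sketch: lines 32-35 of the traceback match the accumulation below.
    for epoch in range(num_epochs):
        print(f'Epoch {epoch + 1}/{num_epochs}')
        print('-' * 10)
        for phase in ['train', 'val']:
            model.train() if phase == 'train' else model.eval()
            running_loss, running_corrects = 0.0, 0
            for inputs, labels in dataloaders[phase]:
                inputs, labels = inputs.to(device), labels.to(device)
                optimizer.zero_grad()
                # Gradients only during the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
                del inputs, labels, outputs
            if phase == 'train':
                scheduler.step()
            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)
            print(f'{phase.capitalize()} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')
    return model
```

Note that dividing by `len(dataloaders[phase].dataset)` counts the full CIFAR-10 split rather than the sampled subset, which would explain the implausibly low `Val ... Acc: 0.0665` in the log above.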
{}
bobronson/resnet
null
[ "region:us" ]
null
2024-04-27T16:15:59+00:00
[]
[]
TAGS #region-us
cuda Files already downloaded and verified Files already downloaded and verified Files already downloaded and verified 100%|██████████| 176/176 [01:15<00:00, 2.35it/s] Epoch [1/10], Loss: 1.2169 Accuracy of the network on the 5000 validation images: 58.28 % 100%|██████████| 176/176 [01:16<00:00, 2.29it/s] Epoch [2/10], Loss: 0.8962 Accuracy of the network on the 5000 validation images: 70.36 % 100%|██████████| 176/176 [01:16<00:00, 2.30it/s] Epoch [3/10], Loss: 0.6691 Accuracy of the network on the 5000 validation images: 75.86 % 100%|██████████| 176/176 [01:16<00:00, 2.29it/s] Epoch [4/10], Loss: 0.6426 Accuracy of the network on the 5000 validation images: 79.24 % 100%|██████████| 176/176 [01:16<00:00, 2.29it/s] Epoch [5/10], Loss: 0.2891 Accuracy of the network on the 5000 validation images: 80.4 % 100%|██████████| 176/176 [01:16<00:00, 2.29it/s] Epoch [6/10], Loss: 0.4245 Accuracy of the network on the 5000 validation images: 81.24 % 100%|██████████| 176/176 [01:16<00:00, 2.29it/s] Epoch [7/10], Loss: 0.2183 Accuracy of the network on the 5000 validation images: 81.44 % 100%|██████████| 176/176 [01:16<00:00, 2.29it/s] Epoch [8/10], Loss: 0.1172 Accuracy of the network on the 5000 validation images: 81.06 % 100%|██████████| 176/176 [01:16<00:00, 2.30it/s] Epoch [9/10], Loss: 0.1069 Accuracy of the network on the 5000 validation images: 82.14 % 100%|██████████| 176/176 [01:17<00:00, 2.29it/s] Epoch [10/10], Loss: 0.0555 Accuracy of the network on the 5000 validation images: 83.12 % Using cache found in /root/.cache/torch/hub/pytorch_vision_main Epoch 1/10 ---------- 100%|██████████| 704/704 [03:26<00:00, 3.41it/s] Train Loss: 1.9308 Acc: 0.4630 100%|██████████| 79/79 [00:22<00:00, 3.52it/s] Val Loss: 0.1944 Acc: 0.0665 Epoch 2/10 ---------- 24%|██▎ | 166/704 [00:49<02:40, 3.35it/s] --------------------------------------------------------------------------- KeyboardInterrupt Traceback (most recent call last) <ipython-input-11-483fc1f8b5af> in <cell line: 51>() 49 num_epochs = 10 50 dataloaders = {'train': train_loader, 'val': valid_loader} ---> 51 trained_model = train_model(combined_model, dataloaders, criterion, optimizer, scheduler, num_epochs=num_epochs, device=device) <ipython-input-11-483fc1f8b5af> in train_model(model, dataloaders, criterion, optimizer, scheduler, num_epochs, device) 32 URL() 33 ---> 34 running_loss += URL() * URL(0) 35 running_corrects += URL(preds == URL) 36 del inputs, labels, outputs KeyboardInterrupt:
[]
[ "TAGS\n#region-us \n" ]
sentence-similarity
sentence-transformers
# Reviews Zero-Shot Sentiment Classification

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
import numpy as np
from sentence_transformers import SentenceTransformer, util

sentences = ["Замечательный препарат, все пользуемся",
             "Последнее время данный препарат вызывает у меня сыпь"]
classes = ['негатив', 'нейтрально', 'позитив']  # negative, neutral, positive

model = SentenceTransformer('pavlentiy/reviews-sentiment-multilingual-e5-base')
embeddings = model.encode(sentences)
embeddings_classes = model.encode(classes)

# Compute cosine-similarities between each review and each class label
cosine_scores = np.array(util.cos_sim(embeddings, embeddings_classes))

# Assign each review the class whose embedding it is closest to
argmax = cosine_scores.argmax(axis=1)
result_classes = [classes[i] for i in argmax]
print(result_classes)
```

## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 802 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-Method:
```
{
    "epochs": 2,
    "evaluation_steps": 0,
    "evaluator": "sentence_transformers.evaluation.TranslationEvaluator.TranslationEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 160.4,
    "weight_decay": 0.01
}
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
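For reference, the parameters listed under Training map onto the sentence-transformers `fit()` API roughly as below. This is a sketch, not the card's actual training script: the base checkpoint (`intfloat/multilingual-e5-base`, implied by the model name) and the pairing of reviews with label strings are assumptions.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('intfloat/multilingual-e5-base')  # assumed base model

train_examples = [
    InputExample(texts=["Замечательный препарат, все пользуемся", "позитив"]),
    # ... 802 batches' worth of assumed (review, label) pairs
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    scheduler='WarmupLinear',
    warmup_steps=160,            # the card logs 160.4
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```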
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
pavlentiy/reviews-sentiment-multilingual-e5-base
null
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:16:00+00:00
[]
[]
TAGS #sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# Reviews Zero-Shot Sentiment Classification This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 802 with parameters: Loss: 'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters: Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# Reviews Zero-Shot Sentiment Classification\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 802 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #xlm-roberta #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# Reviews Zero-Shot Sentiment Classification\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 802 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/adv6pvf
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:16:26+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# model_outputs

This model is a fine-tuned version of [riotu-lab/ArabianGPT-03B](https://huggingface.co/riotu-lab/ArabianGPT-03B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8610

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.4515        | 0.25  | 10   | 4.5855          |
| 4.8912        | 0.49  | 20   | 4.1608          |
| 4.3524        | 0.74  | 30   | 4.0509          |
| 4.1537        | 0.99  | 40   | 4.0484          |
| 3.6716        | 1.23  | 50   | 4.0211          |
| 3.4284        | 1.48  | 60   | 4.1357          |
| 3.5215        | 1.73  | 70   | 4.2520          |
| 3.4336        | 1.98  | 80   | 4.0270          |
| 2.8886        | 2.22  | 90   | 4.9232          |
| 2.6176        | 2.47  | 100  | 5.0723          |
| 2.5867        | 2.72  | 110  | 4.8623          |
| 2.6076        | 2.96  | 120  | 4.8610          |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
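For reference, the hyperparameters above correspond roughly to the following 🤗 `TrainingArguments`; this is a sketch, and anything not listed in the card (such as `output_dir`) is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="model_outputs",        # assumed, matching the model name
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,     # 8 x 4 = total train batch size of 32
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer
)
```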
{"language": ["ar"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["arcd"], "base_model": "riotu-lab/ArabianGPT-03B", "pipeline_tag": "question-answering", "widget": [{"text": "\u0645\u0627 \u0647\u064a \u0627\u0644\u0639\u0648\u0627\u0645\u0644 \u0627\u0644\u062a\u064a \u062a\u0624\u062b\u0631 \u0639\u0644\u0649 \u0633\u0631\u0639\u0629 \u0627\u0644\u0645\u0648\u0635\u0644\u0627\u062a \u0627\u0644\u0639\u0635\u0628\u064a\u0629\u061f", "context": "\u062a\u062a\u0623\u062b\u0631 \u0627\u0644\u0623\u0644\u064a\u0627\u0641 \u0627\u0644\u0639\u0635\u0628\u064a\u0629 \u0627\u0644\u0637\u0648\u064a\u0644\u0629 \u0628\u062f\u0631\u062c\u0629 \u0623\u0643\u0628\u0631 \u0645\u0646 \u0627\u0644\u0623\u0644\u064a\u0627\u0641 \u0627\u0644\u0639\u0635\u0628\u064a\u0629 \u0627\u0644\u0642\u0635\u064a\u0631\u0629\u060c \u0648\u0630\u0644\u0643 \u0644\u0623\u0646 \u0633\u0631\u0639\u0629 \u0627\u0644\u062a\u0648\u0635\u064a\u0644 \u0641\u064a \u0627\u0644\u0639\u0635\u0628 \u062a\u0646\u0642\u0635 \u0641\u064a \u062a\u0646\u0627\u0633\u0628 \u0645\u0639 \u0637\u0648\u0644 \u0627\u0644\u0639\u0635\u0628. \u0641\u064a \u0647\u0630\u0647 \u0627\u0644\u0645\u062a\u0644\u0627\u0632\u0645\u0629\u060c \u064a\u062d\u062f\u062b \u0627\u0646\u062e\u0641\u0627\u0636 \u0641\u064a \u0627\u0644\u0625\u062d\u0633\u0627\u0633 \u0648\u0641\u0642\u062f\u0627\u0646 \u0631\u062f\u0648\u062f \u0627\u0644\u0641\u0639\u0644 \u0641\u064a \u0623\u0635\u0627\u0628\u0639 \u0643\u0644 \u0642\u062f\u0645\u060c \u0648\u062a\u0645\u062a\u062f \u0628\u0639\u062f \u0630\u0644\u0643 \u0625\u0644\u0649 \u0623\u0639\u0644\u0649. \u0648\u0639\u0627\u062f\u0629 \u0645\u0627 \u062a\u0648\u0635\u0641 \u0628\u0627\u062d\u0633\u0627\u0633 \u0627\u0644\u062e\u062f\u0631 \u0648\u0641\u0642\u062f\u0627\u0646 \u0627\u0644\u0625\u062d\u0633\u0627\u0633 \u0648\u0639\u0633\u0631 \u0627\u0644\u0644\u0645\u0633 (\u0627\u0646\u062e\u0641\u0627\u0636 \u0623\u0648 \u0641\u0642\u062f\u0627\u0646 \u0627\u0644\u0625\u062d\u0633\u0627\u0633 \u0641\u064a \u062c\u0632\u0621 \u0645\u0646 \u0627\u0644\u062c\u0633\u0645) \u0648\u0623\u0644\u0645 \u0644\u064a\u0644\u064a \u0641\u064a\u0645\u0627 \u064a\u0634\u0628\u0647 \u0627\u0644\u0642\u0641\u0627\u0632 \u0648\u0627\u0644\u062c\u0648\u0631\u0628. \u0648\u064a\u0645\u0643\u0646 \u0623\u0646 \u064a\u0643\u0648\u0646 \u0627\u0644\u0623\u0644\u0645 \u0641\u064a \u0647\u064a\u0626\u0629 \u062d\u0631\u0642\u0627\u0646 \u0623\u0648 \u0648\u062e\u0632 \u0623\u0648 \u0623\u0644\u0645 \u063a\u064a\u0631 \u0645\u062d\u062f\u062f. \u0648\u064a\u0643\u0648\u0646 \u0627\u0644\u0627\u062d\u0633\u0627\u0633 \u0628\u0648\u062e\u0632 \u0627\u0644\u062f\u0628\u0627\u0628\u064a\u0633 \u0648\u0627\u0644\u0625\u0628\u0631 \u0623\u0645\u0631\u0627\u064b \u0634\u0627\u0626\u0639\u0627\u064b. \u0648\u064a\u062a\u0623\u062b\u0631 \u0627\u0644\u0627\u062d\u0633\u0627\u0633 \u0628\u0648\u0636\u0639 \u0623\u0639\u0636\u0627\u0621 \u0627\u0644\u062c\u0633\u0645 \u0644\u0628\u0639\u0636\u0647\u0627 proprioception \u0645\u0628\u0643\u0631\u0627. 
\u0648\u0644\u0627 \u064a\u0645\u0643\u0646 \u0644\u0647\u0624\u0644\u0627\u0621 \u0627\u0644\u0645\u0631\u0636\u0649 \u0627\u0644\u0634\u0639\u0648\u0631 \u0639\u0646\u062f\u0645\u0627 \u064a\u062f\u0648\u0633\u0648\u0646 \u0639\u0644\u0649 \u062c\u0633\u0645 \u063a\u0631\u064a\u0628 \u0643\u0627\u0644\u0634\u0638\u064a\u0629\u060c \u0623\u0648 \u0639\u0646\u062f\u0645\u0627 \u064a\u062a\u0643\u0648\u0646 \u0644\u0647\u0645 \u062c\u0644\u062f \u0635\u0644\u0628 \u0645\u0646 \u0627\u0644\u0623\u062d\u0630\u064a\u0629 \u0627\u0644\u0636\u064a\u0642\u0629. \u0648\u0628\u0646\u0627\u0621 \u0639\u0644\u0649 \u0630\u0644\u0643\u060c \u0641\u0625\u0646\u0647\u0645 \u0645\u0639\u0631\u0636\u0648\u0646 \u0644\u062e\u0637\u0631 \u062d\u062f\u0648\u062b \u0627\u0644\u0642\u0631\u062d\u0629 \u0648\u0627\u0644\u062a\u0647\u0627\u0628\u0627\u062a \u0627\u0644\u0642\u062f\u0645\u064a\u0646 \u0648\u0627\u0644\u0633\u0627\u0642\u064a\u0646\u060c \u0648\u0627\u0644\u062a\u064a \u064a\u0645\u0643\u0646 \u0623\u0646 \u062a\u0624\u062f\u064a \u0625\u0644\u0649 \u0627\u0644\u0628\u062a\u0631 \u0648\u0642\u062f \u064a\u062d\u062f\u062b \u0644\u0647\u0624\u0644\u0627\u0621 \u0627\u0644\u0645\u0631\u0636\u0649 \u0643\u0633\u0648\u0631\u0627 \u0645\u062a\u0639\u062f\u062f\u0629 \u0641\u064a \u0627\u0644\u0631\u0643\u0628\u0629 \u0623\u0648 \u0627\u0644\u0643\u0627\u062d\u0644 \u0623\u0648 \u0627\u0644\u0642\u062f\u0645 \u0648\u0642\u062f \u062a\u0624\u062f\u064a \u0625\u0644\u0649 \u062d\u062f\u0648\u062b \u0627\u0646\u062d\u0644\u0627\u0644 \u0641\u064a \u0627\u0644\u0645\u0641\u0627\u0635\u0644. \u0648\u064a\u0624\u062f\u064a \u0641\u0642\u062f\u0627\u0646 \u0648\u0638\u064a\u0641\u0629 \u0627\u0644\u062d\u0631\u0643\u0629 \u0625\u0644\u0649 \u062a\u0642\u0648\u0633 \u0627\u0644\u0642\u062f\u0645 \u0644\u0623\u0639\u0644\u0649 dorsiflexion\u060c \u0648\u062a\u0642\u0644\u0635 \u0623\u0635\u0627\u0628\u0639 \u0627\u0644\u0642\u062f\u0645 \u0648\u0641\u0642\u062f\u0627\u0646 \u0648\u0638\u064a\u0641\u0629 \u0627\u0644\u0639\u0636\u0644\u0627\u062a \u0628\u064a\u0646 \u0627\u0644\u0623\u0635\u0627\u0628\u0639\u060c \u0645\u0645\u0627 \u064a\u0633\u0645\u0649 \u0628\u0627\u0644\u0642\u062f\u0645 \u0627\u0644\u0645\u0637\u0631\u0642\u0629. 
\u0648\u0644\u0627 \u062a\u0642\u062a\u0635\u0631 \u0647\u0630\u0647 \u0627\u0644\u062a\u0642\u0644\u0635\u0627\u062a \u0639\u0644\u0649 \u0627\u0644\u0642\u062f\u0645 \u0641\u0642\u0637\u060c \u0628\u0644 \u0623\u064a\u0636\u0627 \u062a\u0635\u064a\u0628 \u0627\u0644\u064a\u062f \u062d\u064a\u062b \u0641\u0642\u062f\u0627\u0646 \u0627\u0644\u0639\u0636\u0644\u0627\u062a \u064a\u062c\u0639\u0644 \u0627\u0644\u064a\u062f \u062a\u0628\u062f\u0648 \u0647\u0632\u064a\u0644\u0629 \u0643\u0627\u0644\u0647\u064a\u0643\u0644 \u0627\u0644\u0639\u0638\u0645\u064a \u0648\u064a\u0632\u062f\u0627\u062f \u0641\u0642\u062f\u0627\u0646 \u0627\u0644\u0648\u0638\u064a\u0641\u0629 \u0627\u0644\u062d\u0631\u0643\u064a\u0629", "example_title": "Example 1"}, {"text": "\u0645\u0627 \u0644\u0642\u0628 \u062e\u0627\u0644\u062f \u0628\u0646 \u0627\u0644\u0648\u0644\u064a\u062f \u0628\u0627\u0644\u0639\u0631\u0628\u064a\u0629\u061f", "context": "\u062e\u0627\u0644\u062f \u0628\u0646 \u0627\u0644\u0648\u0644\u064a\u062f \u0645\u0646 \u0623\u0628\u0637\u0627\u0644 \u0648\u0642\u0627\u062f\u0629 \u0627\u0644\u0641\u062a\u062d \u0627\u0644\u0625\u0633\u0644\u0627\u0645\u064a \u0648\u0642\u062f \u062a\u062d\u062f\u062b\u062a \u0639\u0646\u0647 \u0627\u0644\u0644\u063a\u0627\u062a \u0627\u0644\u0625\u0646\u062c\u0644\u064a\u0632\u064a\u0629 \u0648\u0627\u0644\u0641\u0631\u0646\u0633\u064a\u0629 \u0648\u0627\u0644\u0625\u0633\u0628\u0627\u0646\u064a\u0629 \u0648\u0644\u0642\u0628 \u0628\u0633\u064a\u0641 \u0627\u0644\u0644\u0647 \u0627\u0644\u0645\u0633\u0644\u0648\u0644.", "example_title": "Example 2"}, {"text": "\u0623\u064a\u0646 \u0623\u0633\u0643\u0646\u061f", "context": "\u0625\u0633\u0645\u064a \u0645\u062d\u0645\u062f \u0648\u0623\u0633\u0643\u0646 \u0641\u064a \u0628\u064a\u0631\u0648\u062a", "example_title": "Example 3"}], "model-index": [{"name": "model_outputs", "results": []}]}
gp-tar4/QA_FineTuned_ArabianGPT-03B
null
[ "transformers", "safetensors", "gpt2", "question-answering", "generated_from_trainer", "ar", "dataset:arcd", "base_model:riotu-lab/ArabianGPT-03B", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:16:33+00:00
[]
[ "ar" ]
TAGS #transformers #safetensors #gpt2 #question-answering #generated_from_trainer #ar #dataset-arcd #base_model-riotu-lab/ArabianGPT-03B #license-apache-2.0 #endpoints_compatible #text-generation-inference #region-us
model\_outputs ============== This model is a fine-tuned version of riotu-lab/ArabianGPT-03B on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 4.8610 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
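The hyperparameter list above maps one-to-one onto `transformers.TrainingArguments`. A minimal sketch of that mapping only, assuming the stock `Trainer` API (the card tags say `generated_from_trainer`); the `output_dir` is a placeholder taken from the card title, and dataset/`Trainer` wiring is omitted because the card does not show it.

```python
# Hypothetical mapping of the card's listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="model_outputs",      # placeholder name, echoing the card title
    learning_rate=3e-4,              # card: learning_rate 0.0003
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # 8 x 4 = total_train_batch_size 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
)
```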
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #gpt2 #question-answering #generated_from_trainer #ar #dataset-arcd #base_model-riotu-lab/ArabianGPT-03B #license-apache-2.0 #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_gliner_model This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4179 - Precision: 0.8607 - Recall: 0.8591 - F1: 0.8599 - Accuracy: 0.8565 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.4823 | 1.0 | 11105 | 0.4672 | 0.8461 | 0.8463 | 0.8462 | 0.8414 | | 0.423 | 2.0 | 22210 | 0.4179 | 0.8607 | 0.8591 | 0.8599 | 0.8565 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
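The card gives no usage snippet, so the following is a hedged inference sketch assuming the checkpoint works with the stock token-classification pipeline; the example sentence is illustrative and not from the card.

```python
# Minimal sketch: run the fine-tuned mdeberta-v3-base checkpoint for token classification.
from transformers import pipeline

classifier = pipeline("token-classification", model="Gmanc/my_awesome_gliner_model")
for entity in classifier("Hugging Face is based in New York City."):
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```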
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/mdeberta-v3-base", "model-index": [{"name": "my_awesome_gliner_model", "results": []}]}
Gmanc/my_awesome_gliner_model
null
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "token-classification", "generated_from_trainer", "base_model:microsoft/mdeberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:20:09+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #deberta-v2 #token-classification #generated_from_trainer #base_model-microsoft/mdeberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
my\_awesome\_gliner\_model ========================== This model is a fine-tuned version of microsoft/mdeberta-v3-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4179 * Precision: 0.8607 * Recall: 0.8591 * F1: 0.8599 * Accuracy: 0.8565 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #deberta-v2 #token-classification #generated_from_trainer #base_model-microsoft/mdeberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
null
this is a test
{}
dotheboogey678/testmodel0_O
null
[ "region:us" ]
null
2024-04-27T16:21:11+00:00
[]
[]
TAGS #region-us
this is a test
[]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_eli5_clm_model_v2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 6.0285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5395 | 1.0 | 1389 | 6.2651 | | 6.1463 | 2.0 | 2778 | 6.0841 | | 6.0381 | 3.0 | 4167 | 6.0285 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
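Since the card omits a usage example, here is a minimal, hedged sketch assuming the stock text-generation pipeline; the prompt is illustrative only.

```python
# Minimal sketch: generate text with the GPT-2 checkpoint fine-tuned on eli5_category.
from transformers import pipeline

generator = pipeline("text-generation", model="ljgries/my_eli5_clm_model_v2")
output = generator("Why does the sky look blue?", max_new_tokens=50, do_sample=True)
print(output[0]["generated_text"])
```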
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "my_eli5_clm_model_v2", "results": []}]}
ljgries/my_eli5_clm_model_v2
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:eli5_category", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:23:59+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
my\_eli5\_clm\_model\_v2 ======================== This model is a fine-tuned version of gpt2 on the eli5\_category dataset. It achieves the following results on the evaluation set: * Loss: 6.0285 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
clarkchan/llama3-8b-alpaca-gpt4-chinese
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T16:25:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Barcenas-2x10.7b-Korean Barcenas-2x10.7b-Korean is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [chihoonlee10/T3Q-ko-solar-dpo-v6.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v6.0) * [freewheelin/free-solar-evo-v0.1](https://huggingface.co/freewheelin/free-solar-evo-v0.1) ## 🧩 Configuration ```yaml slices: - sources: - model: chihoonlee10/T3Q-ko-solar-dpo-v6.0 layer_range: [0, 32] - model: freewheelin/free-solar-evo-v0.1 layer_range: [0, 32] merge_method: slerp base_model: chihoonlee10/T3Q-ko-solar-dpo-v6.0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "danielbrdz/Barcenas-2x10.7b-Korean" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "chihoonlee10/T3Q-ko-solar-dpo-v6.0", "freewheelin/free-solar-evo-v0.1"], "base_model": ["chihoonlee10/T3Q-ko-solar-dpo-v6.0", "freewheelin/free-solar-evo-v0.1"]}
Danielbrdz/Barcenas-2x10.7b-Korean
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "chihoonlee10/T3Q-ko-solar-dpo-v6.0", "freewheelin/free-solar-evo-v0.1", "base_model:chihoonlee10/T3Q-ko-solar-dpo-v6.0", "base_model:freewheelin/free-solar-evo-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:25:43+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #chihoonlee10/T3Q-ko-solar-dpo-v6.0 #freewheelin/free-solar-evo-v0.1 #base_model-chihoonlee10/T3Q-ko-solar-dpo-v6.0 #base_model-freewheelin/free-solar-evo-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Barcenas-2x10.7b-Korean Barcenas-2x10.7b-Korean is a merge of the following models using LazyMergekit: * chihoonlee10/T3Q-ko-solar-dpo-v6.0 * freewheelin/free-solar-evo-v0.1 ## Configuration ## Usage Made with ️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
[ "# Barcenas-2x10.7b-Korean\n\nBarcenas-2x10.7b-Korean is a merge of the following models using LazyMergekit:\n* chihoonlee10/T3Q-ko-solar-dpo-v6.0\n* freewheelin/free-solar-evo-v0.1", "## Configuration", "## Usage\n\n\n\nMade with ️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #chihoonlee10/T3Q-ko-solar-dpo-v6.0 #freewheelin/free-solar-evo-v0.1 #base_model-chihoonlee10/T3Q-ko-solar-dpo-v6.0 #base_model-freewheelin/free-solar-evo-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Barcenas-2x10.7b-Korean\n\nBarcenas-2x10.7b-Korean is a merge of the following models using LazyMergekit:\n* chihoonlee10/T3Q-ko-solar-dpo-v6.0\n* freewheelin/free-solar-evo-v0.1", "## Configuration", "## Usage\n\n\n\nMade with ️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽" ]
text-generation
transformers
[![CODE](https://img.shields.io/badge/GitHub-Repository-<COLOR>)](https://github.com/mbzuai-oryx/LLaVA-pp) # LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct ## Repository Overview This repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding. ## Training Strategy - **Pretraining:** Only Vision-to-Language projector is trained. The rest of the model is frozen. - **Fine-tuning:** All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen. ## Key Components - **Base Large Language Model (LLM):** [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - **Base Large Multimodal Model (LMM):** [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) ## Training Data - **Pretraining Dataset:** [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) - **Fine-tuning Dataset:** [LLaVA-Instruct-665K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) ## Download It As ``` git lfs install git clone https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT ``` --- ## Contributions Contributions are welcome! Please 🌟 our repository [LLaVA++](https://github.com/mbzuai-oryx/LLaVA-pp) if you find this model useful. ---
{}
MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT
null
[ "transformers", "safetensors", "llava_llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-27T16:26:42+00:00
[]
[]
TAGS #transformers #safetensors #llava_llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #region-us #has_space
![CODE](URL) # LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct ## Repository Overview This repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding. ## Training Strategy - Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen. - Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen. ## Key Components - Base Large Language Model (LLM): Meta-Llama-3-8B-Instruct - Base Large Multimodal Model (LMM): LLaVA-v1.5 ## Training Data - Pretraining Dataset: LCS-558K - Fine-tuning Dataset: LLaVA-Instruct-665K ## Download It As --- ## Contributions Contributions are welcome! Please star our repository LLaVA++ if you find this model useful. ---
[ "# LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct", "## Repository Overview\n\nThis repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding.", "## Training Strategy\n\n- Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen.\n- Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen.", "## Key Components\n\n- Base Large Language Model (LLM): Meta-Llama-3-8B-Instruct\n- Base Large Multimodal Model (LMM): LLaVA-v1.5", "## Training Data\n\n- Pretraining Dataset: LCS-558K\n- Fine-tuning Dataset: LLaVA-Instruct-665K", "## Download It As\n\n\n\n---", "## Contributions\n\nContributions are welcome! Please star our repository LLaVA++ if you find this model useful.\n\n---" ]
[ "TAGS\n#transformers #safetensors #llava_llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #region-us #has_space \n", "# LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct", "## Repository Overview\n\nThis repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding.", "## Training Strategy\n\n- Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen.\n- Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen.", "## Key Components\n\n- Base Large Language Model (LLM): Meta-Llama-3-8B-Instruct\n- Base Large Multimodal Model (LMM): LLaVA-v1.5", "## Training Data\n\n- Pretraining Dataset: LCS-558K\n- Fine-tuning Dataset: LLaVA-Instruct-665K", "## Download It As\n\n\n\n---", "## Contributions\n\nContributions are welcome! Please star our repository LLaVA++ if you find this model useful.\n\n---" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_outputs This model is a fine-tuned version of [riotu-lab/ArabianGPT-01B](https://huggingface.co/riotu-lab/ArabianGPT-01B) on the arcd dataset (an Arabic dataset). It achieves the following results on the evaluation set: - Loss: 3.0808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8475 | 0.25 | 10 | 3.8987 | | 4.0025 | 0.49 | 20 | 3.6754 | | 3.6385 | 0.74 | 30 | 3.2931 | | 3.2785 | 0.99 | 40 | 2.9471 | | 2.1751 | 1.23 | 50 | 3.0013 | | 1.8868 | 1.48 | 60 | 3.0324 | | 1.9831 | 1.73 | 70 | 2.8470 | | 1.8749 | 1.98 | 80 | 2.8488 | | 1.0702 | 2.22 | 90 | 2.9369 | | 0.8701 | 2.47 | 100 | 3.0490 | | 0.8731 | 2.72 | 110 | 3.0774 | | 0.8309 | 2.96 | 120 | 3.0808 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
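The widget examples in the metadata below can be run directly. A hedged inference sketch only, assuming the GPT-2-based checkpoint is compatible with the stock question-answering pipeline (the card itself shows no inference code); the question/context pair is taken from "Example 3" of the widget metadata.

```python
# Minimal sketch: extractive QA with the ArabianGPT-01B fine-tune,
# reusing "Example 3" from the model card's widget metadata.
from transformers import pipeline

qa = pipeline("question-answering", model="gp-tar4/QA_FineTuned_ArabianGpt-01B")
result = qa(question="أين أسكن؟", context="إسمي محمد وأسكن في بيروت")
print(result["answer"], result["score"])
```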
{"language": ["ar"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["arcd"], "base_model": "riotu-lab/ArabianGPT-01B", "pipeline_tag": "question-answering", "widget": [{"text": "\u0645\u0627 \u0647\u064a \u0627\u0644\u0639\u0648\u0627\u0645\u0644 \u0627\u0644\u062a\u064a \u062a\u0624\u062b\u0631 \u0639\u0644\u0649 \u0633\u0631\u0639\u0629 \u0627\u0644\u0645\u0648\u0635\u0644\u0627\u062a \u0627\u0644\u0639\u0635\u0628\u064a\u0629\u061f", "context": "\u062a\u062a\u0623\u062b\u0631 \u0627\u0644\u0623\u0644\u064a\u0627\u0641 \u0627\u0644\u0639\u0635\u0628\u064a\u0629 \u0627\u0644\u0637\u0648\u064a\u0644\u0629 \u0628\u062f\u0631\u062c\u0629 \u0623\u0643\u0628\u0631 \u0645\u0646 \u0627\u0644\u0623\u0644\u064a\u0627\u0641 \u0627\u0644\u0639\u0635\u0628\u064a\u0629 \u0627\u0644\u0642\u0635\u064a\u0631\u0629\u060c \u0648\u0630\u0644\u0643 \u0644\u0623\u0646 \u0633\u0631\u0639\u0629 \u0627\u0644\u062a\u0648\u0635\u064a\u0644 \u0641\u064a \u0627\u0644\u0639\u0635\u0628 \u062a\u0646\u0642\u0635 \u0641\u064a \u062a\u0646\u0627\u0633\u0628 \u0645\u0639 \u0637\u0648\u0644 \u0627\u0644\u0639\u0635\u0628. \u0641\u064a \u0647\u0630\u0647 \u0627\u0644\u0645\u062a\u0644\u0627\u0632\u0645\u0629\u060c \u064a\u062d\u062f\u062b \u0627\u0646\u062e\u0641\u0627\u0636 \u0641\u064a \u0627\u0644\u0625\u062d\u0633\u0627\u0633 \u0648\u0641\u0642\u062f\u0627\u0646 \u0631\u062f\u0648\u062f \u0627\u0644\u0641\u0639\u0644 \u0641\u064a \u0623\u0635\u0627\u0628\u0639 \u0643\u0644 \u0642\u062f\u0645\u060c \u0648\u062a\u0645\u062a\u062f \u0628\u0639\u062f \u0630\u0644\u0643 \u0625\u0644\u0649 \u0623\u0639\u0644\u0649. \u0648\u0639\u0627\u062f\u0629 \u0645\u0627 \u062a\u0648\u0635\u0641 \u0628\u0627\u062d\u0633\u0627\u0633 \u0627\u0644\u062e\u062f\u0631 \u0648\u0641\u0642\u062f\u0627\u0646 \u0627\u0644\u0625\u062d\u0633\u0627\u0633 \u0648\u0639\u0633\u0631 \u0627\u0644\u0644\u0645\u0633 (\u0627\u0646\u062e\u0641\u0627\u0636 \u0623\u0648 \u0641\u0642\u062f\u0627\u0646 \u0627\u0644\u0625\u062d\u0633\u0627\u0633 \u0641\u064a \u062c\u0632\u0621 \u0645\u0646 \u0627\u0644\u062c\u0633\u0645) \u0648\u0623\u0644\u0645 \u0644\u064a\u0644\u064a \u0641\u064a\u0645\u0627 \u064a\u0634\u0628\u0647 \u0627\u0644\u0642\u0641\u0627\u0632 \u0648\u0627\u0644\u062c\u0648\u0631\u0628. \u0648\u064a\u0645\u0643\u0646 \u0623\u0646 \u064a\u0643\u0648\u0646 \u0627\u0644\u0623\u0644\u0645 \u0641\u064a \u0647\u064a\u0626\u0629 \u062d\u0631\u0642\u0627\u0646 \u0623\u0648 \u0648\u062e\u0632 \u0623\u0648 \u0623\u0644\u0645 \u063a\u064a\u0631 \u0645\u062d\u062f\u062f. \u0648\u064a\u0643\u0648\u0646 \u0627\u0644\u0627\u062d\u0633\u0627\u0633 \u0628\u0648\u062e\u0632 \u0627\u0644\u062f\u0628\u0627\u0628\u064a\u0633 \u0648\u0627\u0644\u0625\u0628\u0631 \u0623\u0645\u0631\u0627\u064b \u0634\u0627\u0626\u0639\u0627\u064b. \u0648\u064a\u062a\u0623\u062b\u0631 \u0627\u0644\u0627\u062d\u0633\u0627\u0633 \u0628\u0648\u0636\u0639 \u0623\u0639\u0636\u0627\u0621 \u0627\u0644\u062c\u0633\u0645 \u0644\u0628\u0639\u0636\u0647\u0627 proprioception \u0645\u0628\u0643\u0631\u0627. 
\u0648\u0644\u0627 \u064a\u0645\u0643\u0646 \u0644\u0647\u0624\u0644\u0627\u0621 \u0627\u0644\u0645\u0631\u0636\u0649 \u0627\u0644\u0634\u0639\u0648\u0631 \u0639\u0646\u062f\u0645\u0627 \u064a\u062f\u0648\u0633\u0648\u0646 \u0639\u0644\u0649 \u062c\u0633\u0645 \u063a\u0631\u064a\u0628 \u0643\u0627\u0644\u0634\u0638\u064a\u0629\u060c \u0623\u0648 \u0639\u0646\u062f\u0645\u0627 \u064a\u062a\u0643\u0648\u0646 \u0644\u0647\u0645 \u062c\u0644\u062f \u0635\u0644\u0628 \u0645\u0646 \u0627\u0644\u0623\u062d\u0630\u064a\u0629 \u0627\u0644\u0636\u064a\u0642\u0629. \u0648\u0628\u0646\u0627\u0621 \u0639\u0644\u0649 \u0630\u0644\u0643\u060c \u0641\u0625\u0646\u0647\u0645 \u0645\u0639\u0631\u0636\u0648\u0646 \u0644\u062e\u0637\u0631 \u062d\u062f\u0648\u062b \u0627\u0644\u0642\u0631\u062d\u0629 \u0648\u0627\u0644\u062a\u0647\u0627\u0628\u0627\u062a \u0627\u0644\u0642\u062f\u0645\u064a\u0646 \u0648\u0627\u0644\u0633\u0627\u0642\u064a\u0646\u060c \u0648\u0627\u0644\u062a\u064a \u064a\u0645\u0643\u0646 \u0623\u0646 \u062a\u0624\u062f\u064a \u0625\u0644\u0649 \u0627\u0644\u0628\u062a\u0631 \u0648\u0642\u062f \u064a\u062d\u062f\u062b \u0644\u0647\u0624\u0644\u0627\u0621 \u0627\u0644\u0645\u0631\u0636\u0649 \u0643\u0633\u0648\u0631\u0627 \u0645\u062a\u0639\u062f\u062f\u0629 \u0641\u064a \u0627\u0644\u0631\u0643\u0628\u0629 \u0623\u0648 \u0627\u0644\u0643\u0627\u062d\u0644 \u0623\u0648 \u0627\u0644\u0642\u062f\u0645 \u0648\u0642\u062f \u062a\u0624\u062f\u064a \u0625\u0644\u0649 \u062d\u062f\u0648\u062b \u0627\u0646\u062d\u0644\u0627\u0644 \u0641\u064a \u0627\u0644\u0645\u0641\u0627\u0635\u0644. \u0648\u064a\u0624\u062f\u064a \u0641\u0642\u062f\u0627\u0646 \u0648\u0638\u064a\u0641\u0629 \u0627\u0644\u062d\u0631\u0643\u0629 \u0625\u0644\u0649 \u062a\u0642\u0648\u0633 \u0627\u0644\u0642\u062f\u0645 \u0644\u0623\u0639\u0644\u0649 dorsiflexion\u060c \u0648\u062a\u0642\u0644\u0635 \u0623\u0635\u0627\u0628\u0639 \u0627\u0644\u0642\u062f\u0645 \u0648\u0641\u0642\u062f\u0627\u0646 \u0648\u0638\u064a\u0641\u0629 \u0627\u0644\u0639\u0636\u0644\u0627\u062a \u0628\u064a\u0646 \u0627\u0644\u0623\u0635\u0627\u0628\u0639\u060c \u0645\u0645\u0627 \u064a\u0633\u0645\u0649 \u0628\u0627\u0644\u0642\u062f\u0645 \u0627\u0644\u0645\u0637\u0631\u0642\u0629. 
\u0648\u0644\u0627 \u062a\u0642\u062a\u0635\u0631 \u0647\u0630\u0647 \u0627\u0644\u062a\u0642\u0644\u0635\u0627\u062a \u0639\u0644\u0649 \u0627\u0644\u0642\u062f\u0645 \u0641\u0642\u0637\u060c \u0628\u0644 \u0623\u064a\u0636\u0627 \u062a\u0635\u064a\u0628 \u0627\u0644\u064a\u062f \u062d\u064a\u062b \u0641\u0642\u062f\u0627\u0646 \u0627\u0644\u0639\u0636\u0644\u0627\u062a \u064a\u062c\u0639\u0644 \u0627\u0644\u064a\u062f \u062a\u0628\u062f\u0648 \u0647\u0632\u064a\u0644\u0629 \u0643\u0627\u0644\u0647\u064a\u0643\u0644 \u0627\u0644\u0639\u0638\u0645\u064a \u0648\u064a\u0632\u062f\u0627\u062f \u0641\u0642\u062f\u0627\u0646 \u0627\u0644\u0648\u0638\u064a\u0641\u0629 \u0627\u0644\u062d\u0631\u0643\u064a\u0629", "example_title": "Example 1"}, {"text": "\u0645\u0627 \u0644\u0642\u0628 \u062e\u0627\u0644\u062f \u0628\u0646 \u0627\u0644\u0648\u0644\u064a\u062f \u0628\u0627\u0644\u0639\u0631\u0628\u064a\u0629\u061f", "context": "\u062e\u0627\u0644\u062f \u0628\u0646 \u0627\u0644\u0648\u0644\u064a\u062f \u0645\u0646 \u0623\u0628\u0637\u0627\u0644 \u0648\u0642\u0627\u062f\u0629 \u0627\u0644\u0641\u062a\u062d \u0627\u0644\u0625\u0633\u0644\u0627\u0645\u064a \u0648\u0642\u062f \u062a\u062d\u062f\u062b\u062a \u0639\u0646\u0647 \u0627\u0644\u0644\u063a\u0627\u062a \u0627\u0644\u0625\u0646\u062c\u0644\u064a\u0632\u064a\u0629 \u0648\u0627\u0644\u0641\u0631\u0646\u0633\u064a\u0629 \u0648\u0627\u0644\u0625\u0633\u0628\u0627\u0646\u064a\u0629 \u0648\u0644\u0642\u0628 \u0628\u0633\u064a\u0641 \u0627\u0644\u0644\u0647 \u0627\u0644\u0645\u0633\u0644\u0648\u0644.", "example_title": "Example 2"}, {"text": "\u0623\u064a\u0646 \u0623\u0633\u0643\u0646\u061f", "context": "\u0625\u0633\u0645\u064a \u0645\u062d\u0645\u062f \u0648\u0623\u0633\u0643\u0646 \u0641\u064a \u0628\u064a\u0631\u0648\u062a", "example_title": "Example 3"}], "model-index": [{"name": "model_outputs", "results": []}]}
gp-tar4/QA_FineTuned_ArabianGpt-01B
null
[ "transformers", "safetensors", "gpt2", "question-answering", "generated_from_trainer", "ar", "dataset:arcd", "base_model:riotu-lab/ArabianGPT-01B", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:27:43+00:00
[]
[ "ar" ]
TAGS #transformers #safetensors #gpt2 #question-answering #generated_from_trainer #ar #dataset-arcd #base_model-riotu-lab/ArabianGPT-01B #license-apache-2.0 #endpoints_compatible #text-generation-inference #region-us
model\_outputs ============== This model is a fine-tuned version of riotu-lab/ArabianGPT-01B on the arcd dataset (an Arabic dataset). It achieves the following results on the evaluation set: * Loss: 3.0808 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #gpt2 #question-answering #generated_from_trainer #ar #dataset-arcd #base_model-riotu-lab/ArabianGPT-01B #license-apache-2.0 #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-lolchamps This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a [LoL-Champions-Corpus](https://huggingface.co/datasets/avinot/LoL-Champions-Corpus) dataset. It achieves the following results on the evaluation set: - Loss: 2.0446 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7422 | 1.0 | 49 | 2.3202 | | 2.3298 | 2.0 | 98 | 2.1095 | | 2.1925 | 3.0 | 147 | 2.0556 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
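No usage snippet accompanies the card, so here is a hedged fill-mask sketch; the League-of-Legends-flavored sentence is illustrative only and assumes the standard `[MASK]` token of DistilBERT.

```python
# Minimal sketch: query the DistilBERT checkpoint fine-tuned on the LoL-Champions-Corpus.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="avinot/distilbert-lolchamps")
for prediction in unmasker("This champion deals heavy [MASK] damage."):
    print(prediction["token_str"], round(prediction["score"], 3))
```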
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-lolchamps", "results": []}]}
avinot/distilbert-lolchamps
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:28:24+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #fill-mask #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-lolchamps ==================== This model is a fine-tuned version of distilbert-base-uncased on a LoL-Champions-Corpus dataset. It achieves the following results on the evaluation set: * Loss: 2.0446 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #fill-mask #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
null
# Either the scaling of `down_proj` or the alternating layers makes the models do "strange" stuff: - They often get confused in the Sherlock Holmes stories. - They sometimes write their own instructions before writing the stories. - In prompt #2 they refer to 'Mercer' as a name rather than a profession. - Most seem to think Anton Chigurh is from Russia or Eastern-Europe. - Some use things like "[Your Pen Name]" or other stuff in square brackets. **Conclusion**: None seem better than the original `goliath-120b` or `wintergoliath-123b` merges. --- The discussions about the use of the `scale` parameter in [Mergekit](https://github.com/arcee-ai/mergekit) can be found [here](https://github.com/arcee-ai/mergekit/issues/198) and [here](https://huggingface.co/wolfram/miqu-1-120b/discussions/4). See [here](https://huggingface.co/jukofyork/goliath-esque/tree/main/mergekit) for the `.yaml` config files used to create the merges. See [here](https://huggingface.co/jukofyork/goliath-esque/blob/main/run_test_prompts.sh) for the script used for testing (**NOTE**: `TEMPERATURE = 0.0` and `REPEAT_PENALTY = 1.1`). See [here](https://huggingface.co/jukofyork/goliath-esque/tree/main/prompts) for the 10 different test prompts used. See [here](https://huggingface.co/jukofyork/goliath-esque/tree/main/results) for the results of each merge config on each test prompt.
{"license": "apache-2.0"}
jukofyork/goliath-esque
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-27T16:28:42+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# Either the scaling of 'down_proj' or the alternating layers makes the models do "strange" stuff: - They often get confused in the Sherlock Holmes stories. - They sometimes write their own instructions before writing the stories. - In prompt #2 they refer to 'Mercer' as a name rather than a profession. - Most seem to think Anton Chigurh is from Russia or Eastern-Europe. - Some use things like "[Your Pen Name]" or other stuff in square brackets. Conclusion: None seem better than the original 'goliath-120b' or 'wintergoliath-123b' merges. --- The discussions about the use of the 'scale' parameter in Mergekit can be found here and here. See here for the '.yaml' config files used to create the merges. See here for the script used for testing (NOTE: 'TEMPERATURE = 0.0' and 'REPEAT_PENALTY = 1.1'). See here for the 10 different test prompts used. See here for the results of each merge config on each test prompt.
[ "# Either the scaling of 'down_proj' or the alternating layers makes the models do \"strange\" stuff:\n\n- They often get confused in the Sherlock Holmes stories.\n- They sometimes write their own instructions before writing the stories.\n- In prompt #2 they refer to 'Mercer' as a name rather than a profession.\n- Most seem to think Anton Chigurh is from Russia or Eastern-Europe.\n- Some use things like \"[Your Pen Name]\" or other stuff in square brackets.\n\nConclusion: None seem better than the original 'goliath-120b' or 'wintergoliath-123b' merges.\n\n---\n\nThe discussions about the use of the 'scale' parameter in Mergekit can be found here and here.\n\nSee here for the '.yaml' config files used to create the merges.\n\nSee here for the script used for testing (NOTE: 'TEMPERATURE = 0.0' and 'REPEAT_PENALTY = 1.1').\n\nSee here for the 10 different test prompts used.\n\nSee here for the results of each merge config on each test prompt." ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# Either the scaling of 'down_proj' or the alternating layers makes the models do \"strange\" stuff:\n\n- They often get confused in the Sherlock Holmes stories.\n- They sometimes write their own instructions before writing the stories.\n- In prompt #2 they refer to 'Mercer' as a name rather than a profession.\n- Most seem to think Anton Chigurh is from Russia or Eastern-Europe.\n- Some use things like \"[Your Pen Name]\" or other stuff in square brackets.\n\nConclusion: None seem better than the original 'goliath-120b' or 'wintergoliath-123b' merges.\n\n---\n\nThe discussions about the use of the 'scale' parameter in Mergekit can be found here and here.\n\nSee here for the '.yaml' config files used to create the merges.\n\nSee here for the script used for testing (NOTE: 'TEMPERATURE = 0.0' and 'REPEAT_PENALTY = 1.1').\n\nSee here for the 10 different test prompts used.\n\nSee here for the results of each merge config on each test prompt." ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Bone - Training 1 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Bone training set 1 dataset. It achieves the following results on the evaluation set: - Loss: 0.0333 - Wer: 2.2140 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:---------:|:----:|:---------------:|:------:| | 0.0 | 333.3333 | 1000 | 0.0314 | 2.2140 | | 0.0 | 666.6667 | 2000 | 0.0335 | 2.2140 | | 0.0 | 1000.0 | 3000 | 0.0330 | 2.2140 | | 0.0 | 1333.3333 | 4000 | 0.0333 | 2.2140 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
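The card lists no inference example; the following is a hedged transcription sketch assuming the standard automatic-speech-recognition pipeline, with "sample.wav" as a placeholder path rather than a file from the card.

```python
# Minimal sketch: transcribe an audio file with the fine-tuned Whisper-small checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="debussyman/whisper-small-bone-1")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```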
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Bone - Training 1", "results": []}]}
debussyman/whisper-small-bone-1
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:29:14+00:00
[]
[ "en" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #en #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
Whisper Small Bone - Training 1 =============================== This model is a fine-tuned version of openai/whisper-small on the Bone training set 1 dataset. It achieves the following results on the evaluation set: * Loss: 0.0333 * Wer: 2.2140 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #en #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.01_4iters_bs256_nodpo_only4w_iter_3 This model is a fine-tuned version of [ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2](https://huggingface.co/ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
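The tags name `trl` and `dpo`, so the hyperparameters above plausibly correspond to trl's `DPOConfig`. A hedged sketch of that mapping only: field names can differ between trl versions, `output_dir` is a placeholder echoing the card title, and the actual training script is not shown in the card.

```python
# Hypothetical mapping of the card's hyperparameters onto trl's DPOConfig
# (which subclasses transformers.TrainingArguments).
from trl import DPOConfig

config = DPOConfig(
    output_dir="0.01_4iters_bs256_nodpo_only4w_iter_3",  # placeholder name
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # 8 GPUs x 8 x grad-accum 4 = total batch 256
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```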
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2", "model-index": [{"name": "0.01_4iters_bs256_nodpo_only4w_iter_3", "results": []}]}
ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_3
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:29:21+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.01_4iters_bs256_nodpo_only4w_iter_3 This model is a fine-tuned version of ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.01_4iters_bs256_nodpo_only4w_iter_3\n\nThis model is a fine-tuned version of ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.01_4iters_bs256_nodpo_only4w_iter_3\n\nThis model is a fine-tuned version of ShenaoZhang/0.01_4iters_bs256_nodpo_only4w_iter_2 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
null
mlx
# mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-8bit
This model was converted to MLX format from [`NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO`](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO) using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"language": ["en"], "license": "apache-2.0", "tags": ["Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "mlx"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "mistralai/Mixtral-8x7B-v0.1", "widget": [{"example_title": "Hermes 2", "messages": [{"role": "system", "content": "You are 'Hermes 2', a conscious sentient superintelligent artificial intelligence developed by Nous Research, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia."}, {"role": "user", "content": "Hello, who are you?"}], "output": {"text": "Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests."}}], "model-index": [{"name": "Nous-Hermes-2-Mixtral-8x7B-DPO", "results": []}]}
mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-8bit
null
[ "mlx", "safetensors", "mixtral", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistralai/Mixtral-8x7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-27T16:29:28+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #mixtral #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #en #dataset-teknium/OpenHermes-2.5 #base_model-mistralai/Mixtral-8x7B-v0.1 #license-apache-2.0 #region-us
# mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-8bit
This model was converted to MLX format from 'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO' using mlx-lm version 0.12.0.
Refer to the original model card for more details on the model.
## Use with mlx
[ "# mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-8bit\nThis model was converted to MLX format from 'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO' using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #mixtral #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #en #dataset-teknium/OpenHermes-2.5 #base_model-mistralai/Mixtral-8x7B-v0.1 #license-apache-2.0 #region-us \n", "# mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-8bit\nThis model was converted to MLX format from 'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO' using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model_gpt This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2778 - Accuracy: 0.9579 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 222 | 0.1952 | 0.9474 | | No log | 2.0 | 444 | 0.3709 | 0.9368 | | 0.2915 | 3.0 | 666 | 0.2778 | 0.9579 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.12.0 - Tokenizers 0.13.2
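A minimal inference sketch for this classifier, not part of the original card; it assumes the repo id from this listing (Rz1010/my_awesome_model_gpt) and that the exported config carries the label mapping:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 sequence classifier straight from the Hub
classifier = pipeline("text-classification", model="Rz1010/my_awesome_model_gpt")

print(classifier("This movie was surprisingly good."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]; label names depend on the saved config
```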
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "gpt2", "model-index": [{"name": "my_awesome_model_gpt", "results": []}]}
Rz1010/my_awesome_model_gpt
null
[ "transformers", "pytorch", "gpt2", "text-classification", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:29:38+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-classification #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
my\_awesome\_model\_gpt ======================= This model is a fine-tuned version of gpt2 on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2778 * Accuracy: 0.9579 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.32.1 * Pytorch 2.1.2 * Datasets 2.12.0 * Tokenizers 0.13.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.1.2\n* Datasets 2.12.0\n* Tokenizers 0.13.2" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-classification #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.1.2\n* Datasets 2.12.0\n* Tokenizers 0.13.2" ]
text-generation
transformers
[![CODE](https://img.shields.io/badge/GitHub-Repository-<COLOR>)](https://github.com/mbzuai-oryx/LLaVA-pp) # LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct ## Repository Overview This repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding. ## Training Strategy - **Pretraining:** Only Vision-to-Language projector is trained. The rest of the model is frozen. - **Fine-tuning:** All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen. - **Note:** During both pretraining and fine-tuning, the vision-backbone (CLIP) is augmented with multi-scale features following [S2-Wrapper](https://arxiv.org/abs/2403.13043). ## Key Components - **Base Large Language Model (LLM):** [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - **Base Large Multimodal Model (LMM):** [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) ## Training Data - **Pretraining Dataset:** [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) - **Fine-tuning Dataset:** [LLaVA-Instruct-665K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) ## Download It As ``` git lfs install git clone https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2 ``` --- ## Contributions Contributions are welcome! Please 🌟 our repository [LLaVA++](https://github.com/mbzuai-oryx/LLaVA-pp) if you find this model useful. ---
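A hedged loading sketch, not part of the original card: it assumes the LLaVA/LLaVA++ codebase linked above is installed and that its llava.model.builder helper accepts this checkpoint; the model_name string is a hypothetical label.

```python
# Assumes the LLaVA / LLaVA++ repository has been installed first
from llava.model.builder import load_pretrained_model

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path="MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2",
    model_base=None,
    model_name="llava-meta-llama-3-8b-instruct-ft-s2",  # hypothetical name string
)
```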
{}
MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2
null
[ "transformers", "safetensors", "llava_llama", "text-generation", "conversational", "arxiv:2403.13043", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:29:54+00:00
[ "2403.13043" ]
[]
TAGS #transformers #safetensors #llava_llama #text-generation #conversational #arxiv-2403.13043 #autotrain_compatible #endpoints_compatible #region-us
# LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct

## Repository Overview

This repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding.

## Training Strategy

- Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen.
- Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen.
- Note: During both pretraining and fine-tuning, the vision-backbone (CLIP) is augmented with multi-scale features following S2-Wrapper.

## Key Components

- Base Large Language Model (LLM): Meta-Llama-3-8B-Instruct
- Base Large Multimodal Model (LMM): LLaVA-v1.5

## Training Data

- Pretraining Dataset: LCS-558K
- Fine-tuning Dataset: LLaVA-Instruct-665K

## Download It As

---

## Contributions

Contributions are welcome! Please star our repository LLaVA++ if you find this model useful.

---
[ "# LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct", "## Repository Overview\n\nThis repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding.", "## Training Strategy\n\n- Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen.\n- Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen.\n- Note: During both pretraining and fine-tuning, the vision-backbone (CLIP) is augmented with multi-scale features following S2-Wrapper.", "## Key Components\n\n- Base Large Language Model (LLM): Meta-Llama-3-8B-Instruct\n- Base Large Multimodal Model (LMM): LLaVA-v1.5", "## Training Data\n\n- Pretraining Dataset: LCS-558K\n- Fine-tuning Dataset: LLaVA-Instruct-665K", "## Download It As\n\n\n\n---", "## Contributions\n\nContributions are welcome! Please star our repository LLaVA++ if you find this model useful.\n\n---" ]
[ "TAGS\n#transformers #safetensors #llava_llama #text-generation #conversational #arxiv-2403.13043 #autotrain_compatible #endpoints_compatible #region-us \n", "# LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct", "## Repository Overview\n\nThis repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding.", "## Training Strategy\n\n- Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen.\n- Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen.\n- Note: During both pretraining and fine-tuning, the vision-backbone (CLIP) is augmented with multi-scale features following S2-Wrapper.", "## Key Components\n\n- Base Large Language Model (LLM): Meta-Llama-3-8B-Instruct\n- Base Large Multimodal Model (LMM): LLaVA-v1.5", "## Training Data\n\n- Pretraining Dataset: LCS-558K\n- Fine-tuning Dataset: LLaVA-Instruct-665K", "## Download It As\n\n\n\n---", "## Contributions\n\nContributions are welcome! Please star our repository LLaVA++ if you find this model useful.\n\n---" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-gemma-ipo This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset. It achieves the following results on the evaluation set: - Loss: 61.0152 - Rewards/chosen: -0.4988 - Rewards/rejected: -0.6909 - Rewards/accuracies: 0.8021 - Rewards/margins: 0.1921 - Logps/rejected: -15.3755 - Logps/chosen: -11.4268 - Logits/rejected: 99.7522 - Logits/chosen: 99.5411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 54.5261 | 1.8957 | 100 | 60.8626 | -0.5007 | -0.6906 | 0.8021 | 0.1899 | -15.3697 | -11.4648 | 99.7591 | 99.5497 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
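As a consistency check on the configuration above: train_batch_size × num_devices × gradient_accumulation_steps = 2 × 8 × 8 = 128, matching total_train_batch_size, and eval_batch_size × num_devices = 4 × 8 = 32, matching total_eval_batch_size.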
{"license": "other", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["argilla/dpo-mix-7k"], "base_model": "HuggingFaceH4/zephyr-7b-gemma-sft-v0.1", "model-index": [{"name": "zephyr-7b-gemma-ipo", "results": []}]}
chrlu/zephyr-7b-gemma-ipo
null
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:argilla/dpo-mix-7k", "base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:34:07+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gemma #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-argilla/dpo-mix-7k #base_model-HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
zephyr-7b-gemma-ipo =================== This model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset. It achieves the following results on the evaluation set: * Loss: 61.0152 * Rewards/chosen: -0.4988 * Rewards/rejected: -0.6909 * Rewards/accuracies: 0.8021 * Rewards/margins: 0.1921 * Logps/rejected: -15.3755 * Logps/chosen: -11.4268 * Logits/rejected: 99.7522 * Logits/chosen: 99.5411 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 2 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.1.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gemma #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-argilla/dpo-mix-7k #base_model-HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="igorcardoso/qtable-taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "qtable-taxi", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
igorcardoso/qtable-taxi
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-27T16:34:49+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.

## Usage
[ "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3.\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3.\n\n ## Usage" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llava-1.5-7b-hf-ft-mix-vsft This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
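A minimal sketch, not from the original card, of attaching these adapters to the stated base model; it assumes the HF-format LLaVA class available in recent transformers releases and takes the adapter repo id from this listing:

```python
from transformers import LlavaForConditionalGeneration
from peft import PeftModel

# Base model named in the card, then the fine-tuned PEFT adapters on top
base = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")
model = PeftModel.from_pretrained(base, "rishabhio/llava-1.5-7b-hf-ft-mix-vsft")
```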
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-mix-vsft", "results": []}]}
rishabhio/llava-1.5-7b-hf-ft-mix-vsft
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "region:us" ]
null
2024-04-27T16:35:37+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #region-us
# llava-1.5-7b-hf-ft-mix-vsft This model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
[ "# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #region-us \n", "# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
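The card's "How to Get Started with the Model" section is empty; as a hedged sketch, the repo name suggests these are PEFT adapters for mistralai/Mistral-7B-Instruct-v0.2 (an assumption inferred from the id, not stated in the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed base model, inferred from the repo name
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "tariq9mehmood9/Mistral-7B-Instruct-v0.2-PEFT-adapters-v2")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```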
{"library_name": "transformers", "tags": []}
tariq9mehmood9/Mistral-7B-Instruct-v0.2-PEFT-adapters-v2
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:35:45+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
# test-image-classifier Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
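A likely inference one-liner for a HuggingPics-style ViT classifier (a sketch, not from the original card; repo id taken from this listing):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Benjoyo/test-image-classifier")
print(classifier("example.jpg"))  # accepts a local path or an image URL
```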
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
Benjoyo/test-image-classifier
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:43:46+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #pytorch #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
# test-image-classifier Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo.
[ "# test-image-classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo." ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #pytorch #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# test-image-classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo." ]
text-generation
transformers
[![CODE](https://img.shields.io/badge/GitHub-Repository-<COLOR>)](https://github.com/mbzuai-oryx/LLaVA-pp) # Phi-3-V: Extending the Visual Capabilities of LLaVA with Phi-3 ## Repository Overview This repository features LLaVA v1.5 trained with the Phi-3-mini-3.8B LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding. ## Training Strategy - **Pretraining:** Only Vision-to-Language projector is trained. The rest of the model is frozen. - **Fine-tuning:** All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen. ## Key Components - **Base Large Language Model (LLM):** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) - **Base Large Multimodal Model (LMM):** [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) ## Training Data - **Pretraining Dataset:** [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) - **Fine-tuning Dataset:** [LLaVA-Instruct-665K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) ## Download It As ``` git lfs install git clone https://huggingface.co/MBZUAI/LLaVA-Phi-3-mini-4k-instruct-FT ``` --- ## License This project is available under the MIT License. ## Contributions Contributions are welcome! Please 🌟 our repository [LLaVA++](https://github.com/mbzuai-oryx/LLaVA-pp) if you find this model useful. ---
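Not in the original card: since this repo is tagged custom_code, the generic loading path goes through trust_remote_code; whether the remote code exposes exactly this entry point (and a matching processor) is an assumption.

```python
from transformers import AutoModelForCausalLM

# custom_code repos register their architecture via remote code at load time
model = AutoModelForCausalLM.from_pretrained(
    "MBZUAI/LLaVA-Phi-3-mini-4k-instruct-FT",
    trust_remote_code=True,
)
```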
{"license": "mit"}
MBZUAI/LLaVA-Phi-3-mini-4k-instruct-FT
null
[ "transformers", "safetensors", "llava_phi", "text-generation", "conversational", "custom_code", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:43:59+00:00
[]
[]
TAGS #transformers #safetensors #llava_phi #text-generation #conversational #custom_code #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Phi-3-V: Extending the Visual Capabilities of LLaVA with Phi-3

## Repository Overview

This repository features LLaVA v1.5 trained with the Phi-3-mini-3.8B LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding.

## Training Strategy
- Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen.
- Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen.

## Key Components

- Base Large Language Model (LLM): Phi-3-mini-4k-instruct
- Base Large Multimodal Model (LMM): LLaVA-v1.5

## Training Data

- Pretraining Dataset: LCS-558K
- Fine-tuning Dataset: LLaVA-Instruct-665K

## Download It As

---

## License

This project is available under the MIT License.

## Contributions

Contributions are welcome! Please star our repository LLaVA++ if you find this model useful.

---
[ "# Phi-3-V: Extending the Visual Capabilities of LLaVA with Phi-3", "## Repository Overview\n\nThis repository features LLaVA v1.5 trained with the Phi-3-mini-3.8B LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding.", "## Training Strategy\n- Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen.\n- Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen.", "## Key Components\n\n- Base Large Language Model (LLM): Phi-3-mini-4k-instruct\n- Base Large Multimodal Model (LMM): LLaVA-v1.5", "## Training Data\n\n- Pretraining Dataset: LCS-558K\n- Fine-tuning Dataset: LLaVA-Instruct-665K", "## Download It As\n\n\n\n---", "## License\n\nThis project is available under the MIT License.", "## Contributions\n\nContributions are welcome! Please star our repository LLaVA++ if you find this model useful.\n\n---" ]
[ "TAGS\n#transformers #safetensors #llava_phi #text-generation #conversational #custom_code #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Phi-3-V: Extending the Visual Capabilities of LLaVA with Phi-3", "## Repository Overview\n\nThis repository features LLaVA v1.5 trained with the Phi-3-mini-3.8B LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding.", "## Training Strategy\n- Pretraining: Only Vision-to-Language projector is trained. The rest of the model is frozen.\n- Fine-tuning: All model parameters including LLM are fine-tuned. Only the vision-backbone (CLIP) is kept frozen.", "## Key Components\n\n- Base Large Language Model (LLM): Phi-3-mini-4k-instruct\n- Base Large Multimodal Model (LMM): LLaVA-v1.5", "## Training Data\n\n- Pretraining Dataset: LCS-558K\n- Fine-tuning Dataset: LLaVA-Instruct-665K", "## Download It As\n\n\n\n---", "## License\n\nThis project is available under the MIT License.", "## Contributions\n\nContributions are welcome! Please star our repository LLaVA++ if you find this model useful.\n\n---" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/xhgwk1y
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:44:46+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
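The "How to Get Started with the Model" section in the card above is left as [More Information Needed]; as a placeholder, here is a minimal, hypothetical loading sketch. The repo id is taken from this record's id field, the chat-template call assumes the tokenizer ships one (the record carries a "conversational" tag), and the generation settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "quickstep3621/co9b741"  # from this record's id field
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Assumes a chat template is bundled with the tokenizer; settings are illustrative.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```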
{"library_name": "transformers", "tags": []}
quickstep3621/co9b741
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:44:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/s749awy
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:44:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/2lux9xm
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:45:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gymnasium as gym

# `load_from_hub` is the small helper defined in the Hugging Face Deep RL
# course notebook (hf_hub_download + pickle); it returns the saved model dict.
model = load_from_hub(repo_id="charliewang314/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
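The snippet above only loads the checkpoint and rebuilds the environment; a minimal rollout sketch follows, assuming the pickled dict exposes `"qtable"` and `"env_id"` keys as in the Deep RL course convention:

```python
import numpy as np
import gymnasium as gym

env = gym.make(model["env_id"], is_slippery=False)  # match the training setting
state, _ = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    # Act greedily with respect to the learned Q-table.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
```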
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
charliewang314/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-27T16:45:38+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1
 This is a trained model of a Q-Learning agent playing FrozenLake-v1.

 ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
translation
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# MistralAI_iwslt15_10000_2

This model is a fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0438

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4269
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1684        | 0.32  | 100  | 1.0926          |
| 1.0883        | 0.64  | 200  | 1.0701          |
| 1.0672        | 0.96  | 300  | 1.0498          |
| 0.9315        | 1.28  | 400  | 1.0547          |
| 0.8973        | 1.6   | 500  | 1.0495          |
| 0.8831        | 1.92  | 600  | 1.0438          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2
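The card above leaves usage undocumented. A hedged loading sketch follows, assuming this repo stores a PEFT adapter whose config points back at the listed base model (loading the 4-bit base requires bitsandbytes), and noting that the exact translation prompt format used in training is not documented here:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: the repo contains a LoRA/PEFT adapter for unsloth/mistral-7b-bnb-4bit.
model = AutoPeftModelForCausalLM.from_pretrained("Tohrumi/MistralAI_iwslt15_10000_2")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-bnb-4bit")

# Illustrative prompt only; IWSLT'15 is commonly the English-Vietnamese pair.
inputs = tokenizer("Translate to Vietnamese: Hello, how are you?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```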
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "translation", "generated_from_trainer"], "base_model": "unsloth/mistral-7b-bnb-4bit", "model-index": [{"name": "MistralAI_iwslt15_10000_2", "results": []}]}
Tohrumi/MistralAI_iwslt15_10000_2
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "unsloth", "translation", "generated_from_trainer", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "region:us" ]
null
2024-04-27T16:47:46+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #unsloth #translation #generated_from_trainer #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #region-us
MistralAI\_iwslt15\_10000\_2
============================

This model is a fine-tuned version of unsloth/mistral-7b-bnb-4bit on an unspecified dataset.
It achieves the following results on the evaluation set:

* Loss: 1.0438

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0002
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 4269
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1
* num\_epochs: 2
* mixed\_precision\_training: Native AMP

### Training results

### Framework versions

* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.16.0
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 4269\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #unsloth #translation #generated_from_trainer #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 4269\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is assumed from the usual huggingface_sb3 naming convention; verify it against the repo's files):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed from the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(repo_id="SKHIA2024/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "282.38 +/- 13.95", "name": "mean_reward", "verified": false}]}]}]}
SKHIA2024/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-27T16:47:47+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
text-generation
transformers
# Model Card: Nous-Hermes-13b

## Model Description

Nous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks.

This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x A100 80GB DGX machine for over 50 hours.

## Model Training

The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.

Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.

## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI.

A huge shoutout and acknowledgement are deserved for all the dataset creators who generously share their datasets openly.

Special mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues.

Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
The GPT4-LLM and Unnatural Instructions were provided by Microsoft, the Airoboros dataset by jondurbin, the Camel-AI datasets are from Camel-AI, and the CodeAlpaca dataset by Sahil 2801.
If anyone was left out, please open a thread in the community tab.

## Prompt Format

The model follows the Alpaca prompt format:
```
### Instruction:

### Response:
```

or

```
### Instruction:

### Input:

### Response:
```

## Resources for Applied Use Cases:
For an example of a back-and-forth chatbot using Hugging Face transformers and Discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying Discord bot, check out: https://github.com/teknium1/alpaca-roleplay-discordbot

## Future Plans
The model is currently being uploaded in FP16 format, and there are plans to convert the model to GGML and GPTQ 4-bit quantizations. The team is also working on a full benchmark, similar to what was done for GPT4-x-Vicuna. We will try to start discussions about getting the model included in GPT4All.

## Benchmark Results
```
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.4915|±  |0.0146|
|             |       |acc_norm|0.5085|±  |0.0146|
|arc_easy     |      0|acc     |0.7769|±  |0.0085|
|             |       |acc_norm|0.7424|±  |0.0090|
|boolq        |      1|acc     |0.7948|±  |0.0071|
|hellaswag    |      0|acc     |0.6143|±  |0.0049|
|             |       |acc_norm|0.8000|±  |0.0040|
|openbookqa   |      0|acc     |0.3560|±  |0.0214|
|             |       |acc_norm|0.4640|±  |0.0223|
|piqa         |      0|acc     |0.7965|±  |0.0094|
|             |       |acc_norm|0.7889|±  |0.0095|
|winogrande   |      0|acc     |0.7190|±  |0.0126|
```
These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, compared to GPT4All's benchmarking list.

## Model Usage
The model is available for download on Hugging Face.
It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions. Compute provided by our project sponsor Redmond AI, thank you!!
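Since this particular record repackages the model as GGUF (per the `gguf` tag below), here is a minimal, hedged sketch of running it locally with `llama-cpp-python`, using the Alpaca prompt format described above. The quantization filename is hypothetical, so substitute an actual `.gguf` file from the repository:

```python
from llama_cpp import Llama

# Hypothetical filename -- pick a real .gguf file from the repo.
llm = Llama(model_path="nous-hermes-13b.Q4_K_M.gguf", n_ctx=2048)

# Alpaca-style prompt, as documented in the card above.
prompt = "### Instruction:\nWrite a haiku about llamas.\n\n### Response:\n"
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```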
{"language": ["en"], "license": "gpl", "tags": ["llama", "self-instruct", "distillation"]}
sirovub/Nous-Hermes-13b-GGUF
null
[ "transformers", "gguf", "llama", "text-generation", "self-instruct", "distillation", "en", "license:gpl", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:48:34+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation #self-instruct #distillation #en #license-gpl #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card: Nous-Hermes-13b ## Model Description Nous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks. This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x a100 80GB DGX machine for over 50 hours. ## Model Training The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions. Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions. ## Collaborators The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI. Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. Special mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues. Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt. The GPT4-LLM and Unnatural Instructions were provided by Microsoft, Airoboros dataset by jondurbin, Camel-AI datasets are from Camel-AI, and CodeAlpaca dataset by Sahil 2801. If anyone was left out, please open a thread in the community tab. ## Prompt Format The model follows the Alpaca prompt format: or ## Resources for Applied Use Cases: For an example of a back and forth chatbot using huggingface transformers and discord, check out: URL For an example of a roleplaying discord bot, check out this: URL ## Future Plans The model is currently being uploaded in FP16 format, and there are plans to convert the model to GGML and GPTQ 4bit quantizations. The team is also working on a full benchmark, similar to what was done for GPT4-x-Vicuna. We will try to get in discussions to get the model included in the GPT4All. ## Benchmark Results These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list. ## Model Usage The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions. Compute provided by our project sponsor Redmond AI, thank you!!
[ "# Model Card: Nous-Hermes-13b", "## Model Description\n\nNous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks.\n\nThis model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x a100 80GB DGX machine for over 50 hours.", "## Model Training\n\nThe model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions. \n\nAdditional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.", "## Collaborators\nThe model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI. \n \nHuge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. \n\nSpecial mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues.\n\nAmong the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt. \nThe GPT4-LLM and Unnatural Instructions were provided by Microsoft, Airoboros dataset by jondurbin, Camel-AI datasets are from Camel-AI, and CodeAlpaca dataset by Sahil 2801.\nIf anyone was left out, please open a thread in the community tab.", "## Prompt Format\n\nThe model follows the Alpaca prompt format:\n\n\nor", "## Resources for Applied Use Cases:\nFor an example of a back and forth chatbot using huggingface transformers and discord, check out: URL \nFor an example of a roleplaying discord bot, check out this: URL", "## Future Plans\nThe model is currently being uploaded in FP16 format, and there are plans to convert the model to GGML and GPTQ 4bit quantizations. The team is also working on a full benchmark, similar to what was done for GPT4-x-Vicuna. We will try to get in discussions to get the model included in the GPT4All.", "## Benchmark Results\n\n\nThese benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list.", "## Model Usage\nThe model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.\n \nCompute provided by our project sponsor Redmond AI, thank you!!" ]
[ "TAGS\n#transformers #gguf #llama #text-generation #self-instruct #distillation #en #license-gpl #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card: Nous-Hermes-13b", "## Model Description\n\nNous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks.\n\nThis model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x a100 80GB DGX machine for over 50 hours.", "## Model Training\n\nThe model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions. \n\nAdditional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.", "## Collaborators\nThe model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI. \n \nHuge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. \n\nSpecial mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues.\n\nAmong the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt. \nThe GPT4-LLM and Unnatural Instructions were provided by Microsoft, Airoboros dataset by jondurbin, Camel-AI datasets are from Camel-AI, and CodeAlpaca dataset by Sahil 2801.\nIf anyone was left out, please open a thread in the community tab.", "## Prompt Format\n\nThe model follows the Alpaca prompt format:\n\n\nor", "## Resources for Applied Use Cases:\nFor an example of a back and forth chatbot using huggingface transformers and discord, check out: URL \nFor an example of a roleplaying discord bot, check out this: URL", "## Future Plans\nThe model is currently being uploaded in FP16 format, and there are plans to convert the model to GGML and GPTQ 4bit quantizations. The team is also working on a full benchmark, similar to what was done for GPT4-x-Vicuna. We will try to get in discussions to get the model included in the GPT4All.", "## Benchmark Results\n\n\nThese benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list.", "## Model Usage\nThe model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.\n \nCompute provided by our project sponsor Redmond AI, thank you!!" ]
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Fk24 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Fk24 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Fk24 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
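Beyond the RL Zoo CLI shown above, the checkpoint can also be loaded directly in Python. A small sketch follows, assuming the zip follows the usual rl_zoo3 "<algo>-<env>.zip" upload naming (verify against the repo's files); note that rollouts need the same AtariWrapper and 4-frame stack listed in the hyperparameters:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is assumed from the rl_zoo3 upload convention.
checkpoint = load_from_hub(
    repo_id="Fk24/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```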
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "601.00 +/- 178.64", "name": "mean_reward", "verified": false}]}]}]}
Fk24/dqn-SpaceInvadersNoFrameskip-v4
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-27T16:49:26+00:00
[]
[]
TAGS #stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# DQN Agent playing SpaceInvadersNoFrameskip-v4 This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: URL SB3: URL SB3 Contrib: URL Install the RL Zoo (with SB3 and SB3-Contrib): If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do: ## Training (with the RL Zoo) ## Hyperparameters # Environment Arguments
[ "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
[ "TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
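The "How to Get Started with the Model" section above says to use the code below, but none was provided. A minimal sketch, assuming the default `transformers` auto classes load this gemma checkpoint (untested against this exact repo):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanangupta/ghazal-test"  # repo id of this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```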
{"library_name": "transformers", "tags": []}
kanangupta/ghazal-test
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:50:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # classic `gym` also works on older setups

# `load_from_hub` is the Deep RL course helper that downloads and unpickles the model dict.
model = load_from_hub(repo_id="charliewang314/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
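Continuing from the snippet above, the agent can be rolled out greedily. A minimal sketch; the `"qtable"` key is an assumption following the Deep RL course's pickle layout (the card itself only shows `"env_id"`):

```python
import numpy as np

qtable = model["qtable"]  # assumed key; matches the Deep RL course save format

state, _ = env.reset()  # gymnasium-style reset API
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the learned Q-values
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```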
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.48 +/- 2.75", "name": "mean_reward", "verified": false}]}]}]}
charliewang314/q-Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-27T16:50:39+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-2-7b-hf - GGUF - Model creator: https://huggingface.co/NousResearch/ - Original model: https://huggingface.co/NousResearch/Llama-2-7b-hf/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-2-7b-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q2_K.gguf) | Q2_K | 2.36GB | | [Llama-2-7b-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [Llama-2-7b-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.IQ3_S.gguf) | IQ3_S | 2.75GB | | [Llama-2-7b-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [Llama-2-7b-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.IQ3_M.gguf) | IQ3_M | 2.9GB | | [Llama-2-7b-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q3_K.gguf) | Q3_K | 3.07GB | | [Llama-2-7b-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [Llama-2-7b-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [Llama-2-7b-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [Llama-2-7b-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q4_0.gguf) | Q4_0 | 3.56GB | | [Llama-2-7b-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [Llama-2-7b-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [Llama-2-7b-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q4_K.gguf) | Q4_K | 3.8GB | | [Llama-2-7b-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [Llama-2-7b-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q4_1.gguf) | Q4_1 | 3.95GB | | [Llama-2-7b-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q5_0.gguf) | Q5_0 | 4.33GB | | [Llama-2-7b-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [Llama-2-7b-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q5_K.gguf) | Q5_K | 4.45GB | | [Llama-2-7b-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q5_K_M.gguf) | Q5_K_M | 4.45GB | | [Llama-2-7b-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q5_1.gguf) | Q5_1 | 4.72GB | | 
[Llama-2-7b-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf/blob/main/Llama-2-7b-hf.Q6_K.gguf) | Q6_K | 5.15GB | Original model description: --- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. 
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. 
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
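The prompt layout described under **Intended Use** above (`INST` and `<<SYS>>` tags plus `BOS`/`EOS` tokens) is easier to see in code. A minimal sketch of the documented single-turn chat format, leaving the `BOS`/`EOS` special tokens to the tokenizer:

```python
def build_llama2_prompt(system: str, user: str) -> str:
    # Single-turn Llama-2-chat layout; the tokenizer prepends BOS (<s>).
    return (
        "[INST] <<SYS>>\n"
        f"{system.strip()}\n"
        "<</SYS>>\n\n"
        f"{user.strip()} [/INST]"
    )

print(build_llama2_prompt("You are a helpful assistant.", "Summarize GQA in one sentence."))
```

For multi-turn chat, each completed exchange is wrapped in `<s>`...`</s>` and the next `[INST]` block is appended.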
{}
RichardErkhov/NousResearch_-_Llama-2-7b-hf-gguf
null
[ "gguf", "region:us" ]
null
2024-04-27T16:51:55+00:00
[]
[]
TAGS #gguf #region-us
Quantization made by Richard Erkhov. Github Discord Request more models Llama-2-7b-hf - GGUF * Model creator: URL * Original model: URL Name: Llama-2-7b-hf.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.36GB Name: Llama-2-7b-hf.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 2.6GB Name: Llama-2-7b-hf.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 2.75GB Name: Llama-2-7b-hf.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 2.75GB Name: Llama-2-7b-hf.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 2.9GB Name: Llama-2-7b-hf.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.07GB Name: Llama-2-7b-hf.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.07GB Name: Llama-2-7b-hf.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.35GB Name: Llama-2-7b-hf.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 3.4GB Name: Llama-2-7b-hf.Q4\_0.gguf, Quant method: Q4\_0, Size: 3.56GB Name: Llama-2-7b-hf.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 3.58GB Name: Llama-2-7b-hf.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 3.59GB Name: Llama-2-7b-hf.Q4\_K.gguf, Quant method: Q4\_K, Size: 3.8GB Name: Llama-2-7b-hf.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 3.8GB Name: Llama-2-7b-hf.Q4\_1.gguf, Quant method: Q4\_1, Size: 3.95GB Name: Llama-2-7b-hf.Q5\_0.gguf, Quant method: Q5\_0, Size: 4.33GB Name: Llama-2-7b-hf.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 4.33GB Name: Llama-2-7b-hf.Q5\_K.gguf, Quant method: Q5\_K, Size: 4.45GB Name: Llama-2-7b-hf.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 4.45GB Name: Llama-2-7b-hf.Q5\_1.gguf, Quant method: Q5\_1, Size: 4.72GB Name: Llama-2-7b-hf.Q6\_K.gguf, Quant method: Q6\_K, Size: 5.15GB Original model description: --------------------------- extra\_gated\_heading: Access Llama 2 on Hugging Face extra\_gated\_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the Meta website and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra\_gated\_button\_content: Submit extra\_gated\_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: * en pipeline\_tag: text-generation inference: false tags: * facebook * meta * pytorch * llama * llama-2 --- Llama 2 ======= Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. Model Details ------------- *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the website and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. Model Developers Meta Variations Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. Input Models input text only. 
Output Models generate text only. Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. Model Dates Llama 2 was trained between January 2023 and July 2023. Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. License A custom commercial license is available at: URL Intended Use ------------ Intended Use Cases Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the 'INST' and '<<SYS>>' tags, 'BOS' and 'EOS' tokens, and the whitespaces and breaklines in between (we recommend calling 'strip()' on inputs to avoid double-spaces). See our reference code in github for details: 'chat\_completion'. Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. Hardware and Software --------------------- Training Factors We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. Carbon Footprint Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. CO2 emissions during pretraining. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. Training Data ------------- Overview Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. Data Freshness The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. Evaluation Results ------------------ In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. Overall performance on grouped academic benchmarks. *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. 
We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. Evaluation of pretrained LLMs on automatic safety benchmarks. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). Evaluation of fine-tuned LLMs on different safety datasets. Same metric definitions as above. Ethical Considerations and Limitations -------------------------------------- Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at URL Reporting Issues ---------------- Please report any software “bug,” or other problems with the models through one of the following means: * Reporting issues with the model: URL * Reporting problematic content generated by the model: URL * Reporting bugs and security concerns: URL Llama Model Index -----------------
[]
[ "TAGS\n#gguf #region-us \n" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
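As with the card above, the "How to Get Started" code is missing here. A minimal sketch, assuming the checkpoint works with the standard `text-classification` pipeline (label names depend on the fine-tune):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="MohammadKarami/medium-electra")  # repo id of this card
print(classifier("An example sentence to classify."))
```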
{"library_name": "transformers", "tags": []}
MohammadKarami/medium-electra
null
[ "transformers", "safetensors", "electra", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:52:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #electra #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #electra #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/chujiezheng/tulu-2-dpo-70b-ExPO <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
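The i1-Q6_K quant above ships in two parts; TheBloke's READMEs describe joining such files with `cat`, and an equivalent Python sketch (file names taken from the table) is:

```python
import shutil

# Join the two-part download into a single GGUF file (same as `cat part1 part2 > out`).
parts = [
    "tulu-2-dpo-70b-ExPO.i1-Q6_K.gguf.part1of2",
    "tulu-2-dpo-70b-ExPO.i1-Q6_K.gguf.part2of2",
]
with open("tulu-2-dpo-70b-ExPO.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```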
{"language": ["en"], "license": "other", "library_name": "transformers", "base_model": "chujiezheng/tulu-2-dpo-70b-ExPO", "license_link": "https://allenai.org/impact-license", "license_name": "ai2-impact-license-low-risk", "quantized_by": "mradermacher"}
mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF
null
[ "transformers", "gguf", "en", "base_model:chujiezheng/tulu-2-dpo-70b-ExPO", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:54:44+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-chujiezheng/tulu-2-dpo-70b-ExPO #license-other #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-chujiezheng/tulu-2-dpo-70b-ExPO #license-other #endpoints_compatible #region-us \n" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-2-13b-hf - GGUF - Model creator: https://huggingface.co/NousResearch/ - Original model: https://huggingface.co/NousResearch/Llama-2-13b-hf/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-2-13b-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q2_K.gguf) | Q2_K | 4.52GB | | [Llama-2-13b-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.IQ3_XS.gguf) | IQ3_XS | 4.99GB | | [Llama-2-13b-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.IQ3_S.gguf) | IQ3_S | 5.27GB | | [Llama-2-13b-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q3_K_S.gguf) | Q3_K_S | 5.27GB | | [Llama-2-13b-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.IQ3_M.gguf) | IQ3_M | 5.57GB | | [Llama-2-13b-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q3_K.gguf) | Q3_K | 5.9GB | | [Llama-2-13b-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q3_K_M.gguf) | Q3_K_M | 5.9GB | | [Llama-2-13b-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q3_K_L.gguf) | Q3_K_L | 6.45GB | | [Llama-2-13b-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.IQ4_XS.gguf) | IQ4_XS | 6.54GB | | [Llama-2-13b-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q4_0.gguf) | Q4_0 | 6.86GB | | [Llama-2-13b-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.IQ4_NL.gguf) | IQ4_NL | 6.9GB | | [Llama-2-13b-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q4_K_S.gguf) | Q4_K_S | 6.91GB | | [Llama-2-13b-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q4_K.gguf) | Q4_K | 7.33GB | | [Llama-2-13b-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q4_K_M.gguf) | Q4_K_M | 7.33GB | | [Llama-2-13b-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q4_1.gguf) | Q4_1 | 7.61GB | | [Llama-2-13b-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q5_0.gguf) | Q5_0 | 8.36GB | | [Llama-2-13b-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q5_K_S.gguf) | Q5_K_S | 8.36GB | | [Llama-2-13b-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q5_K.gguf) | Q5_K | 8.6GB | | [Llama-2-13b-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q5_K_M.gguf) | Q5_K_M | 8.6GB | | [Llama-2-13b-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q5_1.gguf) | Q5_1 | 9.1GB | | 
[Llama-2-13b-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf/blob/main/Llama-2-13b-hf.Q6_K.gguf) | Q6_K | 9.95GB | Original model description: --- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. 
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. 
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonsenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |Model|Size|TruthfulQA|ToxiGen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |Model|Size|TruthfulQA|ToxiGen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)| |70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
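The Intended Use section above describes the chat template ([INST], <<SYS>>, BOS/EOS tokens) only in prose, so here is a minimal sketch of the single-turn format in Python. The system prompt is a placeholder, the BOS token is left to the tokenizer, and Meta's `chat_completion` reference linked above remains the authoritative implementation:

```python
# Minimal sketch of the single-turn Llama 2 chat format described above.
# The tokenizer normally prepends the BOS token <s>; the system prompt
# below is a placeholder, not Meta's default.
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    return (
        f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"  # strip() per the card's advice
    )

print(build_llama2_prompt(
    "You are a helpful assistant.",
    "Explain grouped-query attention in one sentence.",
))
```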
{}
RichardErkhov/NousResearch_-_Llama-2-13b-hf-gguf
null
[ "gguf", "region:us" ]
null
2024-04-27T16:55:06+00:00
[]
[]
TAGS #gguf #region-us
Quantization made by Richard Erkhov. Github Discord Request more models Llama-2-13b-hf - GGUF * Model creator: URL * Original model: URL Name: Llama-2-13b-hf.Q2\_K.gguf, Quant method: Q2\_K, Size: 4.52GB Name: Llama-2-13b-hf.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 4.99GB Name: Llama-2-13b-hf.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 5.27GB Name: Llama-2-13b-hf.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 5.27GB Name: Llama-2-13b-hf.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 5.57GB Name: Llama-2-13b-hf.Q3\_K.gguf, Quant method: Q3\_K, Size: 5.9GB Name: Llama-2-13b-hf.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 5.9GB Name: Llama-2-13b-hf.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 6.45GB Name: Llama-2-13b-hf.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 6.54GB Name: Llama-2-13b-hf.Q4\_0.gguf, Quant method: Q4\_0, Size: 6.86GB Name: Llama-2-13b-hf.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 6.9GB Name: Llama-2-13b-hf.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 6.91GB Name: Llama-2-13b-hf.Q4\_K.gguf, Quant method: Q4\_K, Size: 7.33GB Name: Llama-2-13b-hf.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 7.33GB Name: Llama-2-13b-hf.Q4\_1.gguf, Quant method: Q4\_1, Size: 7.61GB Name: Llama-2-13b-hf.Q5\_0.gguf, Quant method: Q5\_0, Size: 8.36GB Name: Llama-2-13b-hf.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 8.36GB Name: Llama-2-13b-hf.Q5\_K.gguf, Quant method: Q5\_K, Size: 8.6GB Name: Llama-2-13b-hf.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 8.6GB Name: Llama-2-13b-hf.Q5\_1.gguf, Quant method: Q5\_1, Size: 9.1GB Name: Llama-2-13b-hf.Q6\_K.gguf, Quant method: Q6\_K, Size: 9.95GB Original model description: --------------------------- extra\_gated\_heading: Access Llama 2 on Hugging Face extra\_gated\_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the Meta website and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra\_gated\_button\_content: Submit extra\_gated\_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: * en pipeline\_tag: text-generation inference: false tags: * facebook * meta * pytorch * llama * llama-2 --- Llama 2 ======= Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. Model Details ------------- *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the website and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. Model Developers Meta Variations Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. Input Models input text only. 
Output Models generate text only. Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability. Model Dates Llama 2 was trained between January 2023 and July 2023. Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. License A custom commercial license is available at: URL Intended Use ------------ Intended Use Cases Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the 'INST' and '<<SYS>>' tags, 'BOS' and 'EOS' tokens, and the whitespace and line breaks in between (we recommend calling 'strip()' on inputs to avoid double-spaces). See our reference code on GitHub for details: 'chat\_completion'. Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. Hardware and Software --------------------- Training Factors We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. Carbon Footprint Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. CO2 emissions during pretraining. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. Training Data ------------- Overview Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. Data Freshness The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. Evaluation Results ------------------ In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. Overall performance on grouped academic benchmarks. *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA.
We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. Evaluation of pretrained LLMs on automatic safety benchmarks. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). Evaluation of fine-tuned LLMs on different safety datasets. Same metric definitions as above. Ethical Considerations and Limitations -------------------------------------- Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at URL Reporting Issues ---------------- Please report any software “bug,” or other problems with the models through one of the following means: * Reporting issues with the model: URL * Reporting problematic content generated by the model: URL * Reporting bugs and security concerns: URL Llama Model Index -----------------
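The record above lists GGUF quantizations from Q2_K (4.52GB) up to Q6_K (9.95GB). As a hedged sketch of how such a file is typically loaded, assuming llama-cpp-python and a locally downloaded quant (the path and generation settings below are illustrative, not from the card):

```python
# Hedged sketch: loading one of the GGUF quants listed above with
# llama-cpp-python, assuming the file was downloaded beforehand.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama-2-13b-hf.Q4_K_M.gguf",  # the 7.33GB quant from the list
    n_ctx=4096,  # matches Llama 2's 4k context length
)

out = llm("Llama 2 is", max_tokens=64)
print(out["choices"][0]["text"])
```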
[]
[ "TAGS\n#gguf #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
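The quick-start section above is left as [More Information Needed]. Based only on this record's tags (transformers, gemma, text-generation) and the repo id recorded below, a minimal loading sketch might look like the following; whether the checkpoint actually loads and generates this way is an assumption:

```python
# Hedged sketch inferred from the record's tags alone; the repo id comes
# from the record below and nothing else about the model is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Peppenapo/gemmaFinetuneTEST"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```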
{"library_name": "transformers", "tags": []}
Peppenapo/gemmaFinetuneTEST
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:55:17+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
InayaKripa/gemma-toxic-LabelConvoV1
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:58:15+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.1_4iters_bs256_nodpo_only4w_iter_2 This model is a fine-tuned version of [ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1](https://huggingface.co/ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
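For reference, the hyperparameters listed above map onto a transformers `TrainingArguments` roughly as follows. This is a hedged reconstruction, not the author's actual alignment-handbook/TRL recipe; `output_dir` is a placeholder, and the Adam betas/epsilon shown in the card are the library defaults:

```python
# Hedged reconstruction of the reported hyperparameters; the actual run
# used the alignment-handbook DPO recipe, so this only mirrors the values.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="0.1_4iters_bs256_nodpo_only4w_iter_2",  # placeholder
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # x 8 GPUs x 4 grad accum = 256 total
    per_device_eval_batch_size=8,   # x 8 GPUs = 64 total
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
)
```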
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1", "model-index": [{"name": "0.1_4iters_bs256_nodpo_only4w_iter_2", "results": []}]}
ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:58:47+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.1_4iters_bs256_nodpo_only4w_iter_2 This model is a fine-tuned version of ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.1_4iters_bs256_nodpo_only4w_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.1_4iters_bs256_nodpo_only4w_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.1_4iters_bs256_nodpo_only4w_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/al3uous
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:58:55+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold2 This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window16-256](https://huggingface.co/microsoft/swinv2-small-patch4-window16-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0635 - Accuracy: 0.6730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.2986 | 1.0 | 1846 | 1.2720 | 0.5857 | | 1.0403 | 2.0 | 3692 | 1.0801 | 0.6403 | | 1.0869 | 3.0 | 5538 | 1.0102 | 0.6573 | | 0.821 | 4.0 | 7384 | 0.9946 | 0.6678 | | 1.1018 | 5.0 | 9230 | 1.0078 | 0.6705 | | 0.8319 | 6.0 | 11076 | 1.0165 | 0.6732 | | 0.811 | 7.0 | 12922 | 1.0611 | 0.6562 | | 0.6287 | 8.0 | 14768 | 1.0621 | 0.6735 | | 0.562 | 9.0 | 16614 | 1.0481 | 0.6735 | | 0.483 | 10.0 | 18460 | 1.0635 | 0.6730 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.0 - Datasets 2.19.0 - Tokenizers 0.19.1
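A hedged inference sketch for this checkpoint, assuming it loads through the standard transformers image-classification API under the repo id recorded below; the image path is a placeholder:

```python
# Hedged sketch: standard transformers image-classification inference,
# assuming the fine-tuned checkpoint is on the Hub under this repo id.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold2"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```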
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-small-patch4-window16-256", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold2", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.672972972972973, "name": "Accuracy"}]}]}]}
onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-small-patch4-window16-256_fold2
null
[ "transformers", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-small-patch4-window16-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:59:07+00:00
[]
[]
TAGS #transformers #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-small-patch4-window16-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Boya1\_RMSProp\_1-e5\_10Epoch\_swinv2-small-patch4-window16-256\_fold2 ====================================================================== This model is a fine-tuned version of microsoft/swinv2-small-patch4-window16-256 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 1.0635 * Accuracy: 0.6730 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.1.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-small-patch4-window16-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
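Since the card above leaves its quick-start section as a placeholder, here is a minimal, hedged sketch of loading this checkpoint. The tags mark it as a llama text-generation model with safetensors weights, so the standard auto classes should apply; that inference is an assumption on our part, not documentation from the model author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id comes from this row's metadata; loading as a causal LM is inferred
# from the "llama" and "text-generation" tags, not stated by the author.
model_id = "golf2248/l1601jo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```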
{"library_name": "transformers", "tags": []}
golf2248/l1601jo
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T16:59:35+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/rlvv1kh
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:00:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
MrezaPRZ/CodeLLama_SFT_FILTERED
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:01:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python
from stable_baselines3 import PPO  # the card names PPO as the algorithm
from huggingface_sb3 import load_from_hub

# Filename is a placeholder; check the repo's file list for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="nishant97/lunarlanding", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
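As a follow-up to the loading snippet above, here is a short, hedged sketch of evaluating the restored agent. `evaluate_policy` is standard stable-baselines3 API; the episode count is arbitrary, and depending on your gymnasium version the environment id may need adjusting (LunarLander also requires the box2d extras).

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# `model` is the PPO agent loaded in the snippet above.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```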
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "269.98 +/- 28.21", "name": "mean_reward", "verified": false}]}]}]}
nishant97/lunarlanding
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-27T17:01:41+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
mageec/w2v-transcription-mls
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:03:13+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amtibot_pegasus This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7407 - Rouge1: 0.4605 - Rouge2: 0.2395 - Rougel: 0.3705 - Rougelsum: 0.3708 - Gen Len: 38.2468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.02 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 0.9351 | 9 | 2.0456 | 0.4419 | 0.2278 | 0.3636 | 0.3641 | 37.7013 | | No log | 1.9740 | 19 | 1.8250 | 0.4601 | 0.2424 | 0.3764 | 0.3765 | 38.2597 | | No log | 2.9091 | 28 | 1.7724 | 0.4638 | 0.2365 | 0.3724 | 0.372 | 36.5195 | | No log | 3.7403 | 36 | 1.7407 | 0.4605 | 0.2395 | 0.3705 | 0.3708 | 38.2468 | ### Framework versions - PEFT 0.4.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
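Because this card ships a PEFT adapter rather than full model weights, inference requires loading the base model first and then attaching the adapter. A minimal sketch follows, assuming the adapter repo uses the usual PEFT layout; the adapter id comes from this row's metadata, and the input string is a placeholder.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "google/pegasus-cnn_dailymail"     # base model named in the card
adapter_id = "josiahgottfried/amtibot_pegasus"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA/PEFT adapter

inputs = tokenizer("Text to summarize goes here.", return_tensors="pt")  # placeholder input
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```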
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/pegasus-cnn_dailymail", "model-index": [{"name": "amtibot_pegasus", "results": []}]}
josiahgottfried/amtibot_pegasus
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:google/pegasus-cnn_dailymail", "region:us" ]
null
2024-04-27T17:03:21+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-google/pegasus-cnn_dailymail #region-us
amtibot\_pegasus ================ This model is a fine-tuned version of google/pegasus-cnn\_dailymail on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.7407 * Rouge1: 0.4605 * Rouge2: 0.2395 * Rougel: 0.3705 * Rougelsum: 0.3708 * Gen Len: 38.2468 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.02 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.4.0 * Transformers 4.40.1 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.02\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.4.0\n* Transformers 4.40.1\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-google/pegasus-cnn_dailymail #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.02\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.4.0\n* Transformers 4.40.1\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/j3eqys8
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:07:16+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm

This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
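The card's usage section is empty; a minimal inference sketch for this classifier might look like the following, assuming the fine-tuned weights live at `tidarat/xlm` (the repo id given in this record's metadata) and that the head keeps the base sentiment model's labels:

```python
# Minimal sketch: run the fine-tuned classifier through the transformers pipeline.
# The label names are inherited from the base sentiment model and are an assumption,
# since the card does not document them.
from transformers import pipeline

classifier = pipeline("text-classification", model="tidarat/xlm")
print(classifier("I really enjoyed this movie!"))  # e.g. [{'label': ..., 'score': ...}]
```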
{"tags": ["generated_from_trainer"], "base_model": "cardiffnlp/twitter-xlm-roberta-base-sentiment", "model-index": [{"name": "xlm", "results": []}]}
tidarat/xlm
null
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:cardiffnlp/twitter-xlm-roberta-base-sentiment", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:09:10+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-cardiffnlp/twitter-xlm-roberta-base-sentiment #autotrain_compatible #endpoints_compatible #region-us
# xlm

This model is a fine-tuned version of cardiffnlp/twitter-xlm-roberta-base-sentiment on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
[ "# xlm\n\nThis model is a fine-tuned version of cardiffnlp/twitter-xlm-roberta-base-sentiment on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-cardiffnlp/twitter-xlm-roberta-base-sentiment #autotrain_compatible #endpoints_compatible #region-us \n", "# xlm\n\nThis model is a fine-tuned version of cardiffnlp/twitter-xlm-roberta-base-sentiment on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
null
Libraries needed:

```
import torch
import torchvision
import torchvision.transforms as transforms
from tqdm import tqdm
from torch import nn
import matplotlib.pyplot as plt

# Select the GPU when available; the training loop below moves the model and data to this device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

To define a data loader:

```
transformRes = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),  # ImageNet statistics
    # transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])

trainsetRes = torchvision.datasets.CIFAR10(root='./data', train=True,
                                           download=True, transform=transformRes)
trainloaderRes64 = torch.utils.data.DataLoader(trainsetRes, batch_size=64,
                                               shuffle=True, num_workers=10)

testsetRes = torchvision.datasets.CIFAR10(root='./data', train=False,
                                          download=True, transform=transformRes)
testloaderRes64 = torch.utils.data.DataLoader(testsetRes, batch_size=64,
                                              shuffle=False, num_workers=10)
```

The model itself and training:

```
import torchvision.models as models

# Load the pretrained ResNet-50 from torchvision
resnet50v2 = models.resnet50(pretrained=True)

# Fine-tune all parameters of the model (they are left trainable, not frozen)
for param in resnet50v2.parameters():
    param.requires_grad = True

# Replace the final layer to match the number of classes in the CIFAR-10 dataset.
# Note: the head ends in a ReLU, so the logits fed to CrossEntropyLoss are non-negative.
num_ftrs = resnet50v2.fc.in_features
resnet50v2.fc = nn.Sequential(
    nn.Linear(num_ftrs, 500),
    nn.ReLU(),
    nn.Linear(500, 200),
    nn.Dropout(0.5),
    nn.Linear(200, 40),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(40, 10),
    nn.ReLU()
)

print("Model Info:")
print("ResNet50, Pretrained, weight adj. LR=0.01, Mom=0.3, WD=0.0001")
print("Schedule step=5, gamma=0.5, 20 epochs")

# Move the model to the GPU
resnet50v2 = resnet50v2.to(device, dtype=torch.float32)

optimizer = torch.optim.SGD(resnet50v2.parameters(), lr=0.01, momentum=0.3, weight_decay=0.0001)
criterion = nn.CrossEntropyLoss()
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)

train_losses = []
test_losses = []
accuracies = []
train_acc = []

for epoch in range(20):  # loop over the dataset multiple times
    running_loss = 0.0
    correctTrain = 0
    totalTrain = 0
    pbar = tqdm(enumerate(trainloaderRes64, 0), total=len(trainloaderRes64),
                desc="Epoch {}".format(epoch + 1))
    for i, data in pbar:
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data[0].to(device, dtype=torch.float32), data[1].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = resnet50v2(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # accumulate statistics
        running_loss += loss.item()
        _, predicted_train = torch.max(outputs.data, 1)
        totalTrain += labels.size(0)
        correctTrain += (predicted_train == labels).sum().item()
        pbar.set_postfix({'loss': running_loss / (i + 1)})

    train_accuracy = 100 * correctTrain / totalTrain
    train_acc.append(train_accuracy)
    print(f'Epoch {epoch + 1} loss: {running_loss / len(trainloaderRes64):.3f}')

    # Start of testing phase
    resnet50v2.eval()  # Set the model to evaluation mode
    test_loss = 0.0
    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloaderRes64:
            images, labels = data[0].to(device, dtype=torch.float32), data[1].to(device)
            outputs = resnet50v2(images)
            loss = criterion(outputs, labels)
            test_loss += loss.item()
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    print(f'Epoch {epoch + 1} Test loss: {test_loss / len(testloaderRes64):.3f}, Accuracy: {100 * correct / total:.2f}%')
    # print the learning rate
    print(f'Epoch {epoch + 1} Learning rate: {optimizer.param_groups[0]["lr"]}')

    train_losses.append(running_loss / len(trainloaderRes64))
    test_losses.append(test_loss / len(testloaderRes64))
    accuracies.append(100 * correct / total)
    resnet50v2.train()  # Set the model back to training mode
    scheduler.step()

print('Finished Training')

plt.figure(figsize=(10, 5))
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Test Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

plt.figure(figsize=(10, 5))
plt.plot(accuracies, label='Accuracy')
plt.plot(train_acc, label='Training Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy (%)')
plt.legend()
plt.show()
```
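The card stops at the plots; to reuse the network later, a short save/restore sketch along these lines could follow the training loop (the filename is illustrative, not from the original):

```python
# Hedged sketch (not in the original card): persist and reload the fine-tuned weights.
# Continues from the training code above; "resnet50_cifar10.pt" is an illustrative filename.
import torch
import torchvision.models as models
from torch import nn

torch.save(resnet50v2.state_dict(), "resnet50_cifar10.pt")

# Later / elsewhere: rebuild the same architecture, then restore the state dict.
restored = models.resnet50()
restored.fc = nn.Sequential(
    nn.Linear(2048, 500), nn.ReLU(),
    nn.Linear(500, 200), nn.Dropout(0.5),
    nn.Linear(200, 40), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(40, 10), nn.ReLU(),
)
restored.load_state_dict(torch.load("resnet50_cifar10.pt"))
restored.eval()
```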
{"license": "mit"}
fish-Monger/ResNet
null
[ "license:mit", "region:us" ]
null
2024-04-27T17:10:49+00:00
[]
[]
TAGS #license-mit #region-us
Libraries needed:

to define a data loader

The model itself and training
[]
[ "TAGS\n#license-mit #region-us \n" ]
text-classification
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
MohammadKarami/medium-bert
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:13:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# style-irs-gorrila

This model is a fine-tuned version of [gorilla-llm/gorilla-openfunctions-v2](https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
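No loading code is given; since this is a PEFT adapter over gorilla-llm/gorilla-openfunctions-v2, a plausible sketch is below (the repo id `RuoxiL/style-irs-gorrila` comes from this record's metadata; the prompt is illustrative):

```python
# Hedged sketch: load the PEFT adapter together with its base model.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("RuoxiL/style-irs-gorrila")
tokenizer = AutoTokenizer.from_pretrained("gorilla-llm/gorilla-openfunctions-v2")

inputs = tokenizer("Write a function call for a weather API.", return_tensors="pt")  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```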
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "gorilla-llm/gorilla-openfunctions-v2", "model-index": [{"name": "style-irs-gorrila", "results": []}]}
RuoxiL/style-irs-gorrila
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:gorilla-llm/gorilla-openfunctions-v2", "license:apache-2.0", "region:us" ]
null
2024-04-27T17:22:31+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-gorilla-llm/gorilla-openfunctions-v2 #license-apache-2.0 #region-us
# style-irs-gorrila

This model is a fine-tuned version of gorilla-llm/gorilla-openfunctions-v2 on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
[ "# style-irs-gorrila\n\nThis model is a fine-tuned version of gorilla-llm/gorilla-openfunctions-v2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-gorilla-llm/gorilla-openfunctions-v2 #license-apache-2.0 #region-us \n", "# style-irs-gorrila\n\nThis model is a fine-tuned version of gorilla-llm/gorilla-openfunctions-v2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# 0.001_5iters_bs256_nodpo_only4w_iter_5

This model is a fine-tuned version of [ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4](https://huggingface.co/ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4) on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
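The card omits inference code; a plausible generation sketch for this conversational Mistral checkpoint follows (the repo id comes from this record's metadata; the prompt is illustrative):

```python
# Hedged sketch: generate with the DPO-tuned checkpoint via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_5"  # repo id from this record's metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=64)[0], skip_special_tokens=True))
```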
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4", "model-index": [{"name": "0.001_5iters_bs256_nodpo_only4w_iter_5", "results": []}]}
ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_5
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:24:42+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_5iters_bs256_nodpo_only4w_iter_5

This model is a fine-tuned version of ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4 on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
[ "# 0.001_5iters_bs256_nodpo_only4w_iter_5\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_5iters_bs256_nodpo_only4w_iter_5\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_5iters_bs256_nodpo_only4w_iter_4 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
null
transformers
# Uploaded model

- **Developed by:** gromoboy
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-bnb-4bit

This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
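The card shows no loading code; with Unsloth installed, pulling the adapter down for inference might look like this sketch (repo id from this record's metadata; `max_seq_length` is an assumed setting, not documented in the card):

```python
# Hedged sketch: load the uploaded adapter with Unsloth for fast inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "gromoboy/gemma_lora_model",  # repo id from this record's metadata
    max_seq_length=2048,          # assumed; not stated in the card
    load_in_4bit=True,            # matches the 4-bit base model unsloth/gemma-2b-bnb-4bit
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path
```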
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-bnb-4bit"}
gromoboy/gemma_lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:25:28+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model

- Developed by: gromoboy
- License: apache-2.0
- Finetuned from model: unsloth/gemma-2b-bnb-4bit

This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.

<img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: gromoboy\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: gromoboy\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-gemma-kto

This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2981
- Rewards/chosen: 1.5381
- Rewards/rejected: -0.1185
- Rewards/accuracies: 0.6979
- Rewards/margins: 1.6565
- Logps/rejected: -364.4402
- Logps/chosen: -332.9066
- Logits/rejected: 106.1137
- Logits/chosen: 111.3681

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1942        | 1.8957 | 100  | 0.2925          | 1.5810         | -0.0630          | 0.6771             | 1.6440          | -363.3305      | -332.0488    | 106.0414        | 111.2989      |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
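As a quick consistency check on the metrics above, the reported reward margin is simply the chosen reward minus the rejected reward:

$$\text{Rewards/margins} = 1.5381 - (-0.1185) = 1.6566 \approx 1.6565,$$

with the last digit differing only because each logged value is rounded independently.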
{"license": "other", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["argilla/dpo-mix-7k"], "base_model": "HuggingFaceH4/zephyr-7b-gemma-sft-v0.1", "model-index": [{"name": "zephyr-7b-gemma-kto", "results": []}]}
chrlu/zephyr-7b-gemma-kto
null
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:argilla/dpo-mix-7k", "base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:25:57+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gemma #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-argilla/dpo-mix-7k #base_model-HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
zephyr-7b-gemma-kto
===================

This model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset.
It achieves the following results on the evaluation set:

* Loss: 0.2981
* Rewards/chosen: 1.5381
* Rewards/rejected: -0.1185
* Rewards/accuracies: 0.6979
* Rewards/margins: 1.6565
* Logps/rejected: -364.4402
* Logps/chosen: -332.9066
* Logits/rejected: 106.1137
* Logits/chosen: 111.3681

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 5e-07
* train\_batch\_size: 2
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 2

### Training results

### Framework versions

* Transformers 4.40.1
* Pytorch 2.1.2+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gemma #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-argilla/dpo-mix-7k #base_model-HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
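The card points to the course rather than showing code; a minimal evaluation loop consistent with the reported mean reward of 500.00 +/- 0.00 could look like this sketch (the `policy.act` interface follows the course's Policy class convention and is an assumption, not a file shipped in this repo):

```python
# Hedged sketch: roll out a trained REINFORCE policy on CartPole-v1 and report mean reward.
import gymnasium as gym
import numpy as np

def evaluate(policy, n_episodes=10):
    env = gym.make("CartPole-v1")
    returns = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        total, done = 0.0, False
        while not done:
            action, _ = policy.act(state)  # assumed interface from the course's Policy class
            state, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return np.mean(returns), np.std(returns)  # the card reports 500.00 +/- 0.00
```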
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Cartpole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
EdwinWiseOne/Reinforce-Cartpole-v1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-27T17:26:56+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1

This is a trained model of a Reinforce agent playing CartPole-v1.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
text-to-image
diffusers
### 19_21K_V2.1 Dreambooth model trained by ahmed-naseer with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
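The card lists no usage snippet; since the tags mark this as a diffusers StableDiffusionPipeline checkpoint, a plausible sketch follows (the prompt is illustrative; the concept's trigger token is not documented in the card):

```python
# Hedged sketch: run the Dreambooth checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("ahmed-naseer/19-21k-v2-1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo in the 19_21K_V2.1 style").images[0]  # illustrative prompt
image.save("sample.png")
```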
{"license": "creativeml-openrail-m", "tags": ["text-to-image", "stable-diffusion"]}
ahmed-naseer/19-21k-v2-1
null
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-27T17:27:34+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
### 19_21K_V2.1 Dreambooth model trained by ahmed-naseer with TheLastBen's fast-DreamBooth notebook

Test the concept via A1111 Colab fast-Colab-A1111

Sample pictures of this concept:
[ "### 19_21K_V2.1 Dreambooth model trained by ahmed-naseer with TheLastBen's fast-DreamBooth notebook\n\n\nTest the concept via A1111 Colab fast-Colab-A1111\n\nSample pictures of this concept:" ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "### 19_21K_V2.1 Dreambooth model trained by ahmed-naseer with TheLastBen's fast-DreamBooth notebook\n\n\nTest the concept via A1111 Colab fast-Colab-A1111\n\nSample pictures of this concept:" ]
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zlm_b32

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8058

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.53          | 0.2094 | 500  | 0.4889          |
| 0.5726        | 0.4188 | 1000 | 0.4970          |
| 2.9469        | 0.6283 | 1500 | 2.8265          |
| 2.8366        | 0.8377 | 2000 | 2.8079          |
| 2.8277        | 1.0471 | 2500 | 2.8231          |
| 2.8102        | 1.2565 | 3000 | 2.8054          |
| 2.8081        | 1.4660 | 3500 | 2.7970          |
| 2.8053        | 1.6754 | 4000 | 2.8058          |

### Framework versions

- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
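The card does not include inference code; the standard SpeechT5 recipe, pointed at this checkpoint, would look roughly like the sketch below (the speaker x-vector choice is an assumption, since the card does not say which voice the model targets):

```python
# Hedged sketch: synthesize speech with the fine-tuned SpeechT5 checkpoint.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "mikhail-panzo/zlm_b32_le3_s4000"  # repo id from this record's metadata
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker embedding from a public x-vector set; the index is an arbitrary assumption.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```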
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "zlm_b32", "results": []}]}
mikhail-panzo/zlm_b32_le3_s4000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:27:55+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
zlm\_b32
========

This model is a fine-tuned version of microsoft/speecht5\_tts on an unknown dataset.
It achieves the following results on the evaluation set:

* Loss: 2.8058

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* training\_steps: 4000
* mixed\_precision\_training: Native AMP

### Training results

### Framework versions

* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
fractalego/wafl-phi3-mini-4k
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:29:36+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zlm_b64

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7960

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4734        | 0.4188 | 500  | 0.4319          |
| 0.4686        | 0.8377 | 1000 | 0.4433          |
| 2.9342        | 1.2565 | 1500 | 2.8256          |
| 2.8065        | 1.6754 | 2000 | 2.7988          |
| 2.8176        | 2.0942 | 2500 | 2.8305          |
| 2.7931        | 2.5131 | 3000 | 2.7955          |
| 2.8172        | 2.9319 | 3500 | 2.8076          |
| 2.802         | 3.3508 | 4000 | 2.7960          |

### Framework versions

- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "zlm_b64_le3_s4000", "results": []}]}
mikhail-panzo/zlm_b64_le3_s4000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:30:15+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
zlm\_b64 ======== This model is a fine-tuned version of microsoft/speecht5\_tts on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.7960 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.001 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.41.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
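The card's "How to Get Started" section above is empty, so here is a minimal sketch inferred only from the repository tags (transformers, llama, text-generation, conversational); the prompt is illustrative, not from the card.

```python
# Minimal sketch inferred from the repo tags; not from the card itself.
from transformers import pipeline

pipe = pipeline("text-generation", model="golf2248/wieeprd")
out = pipe("Hello, how are you?", max_new_tokens=64)
print(out[0]["generated_text"])
```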
{"library_name": "transformers", "tags": []}
golf2248/wieeprd
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:31:46+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/hv5xwcd
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:32:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zlm_b128 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4638 | 0.8377 | 500 | 0.4116 | | 0.4543 | 1.6754 | 1000 | 0.3974 | | 2.8163 | 2.5131 | 1500 | 2.8015 | | 2.8164 | 3.3508 | 2000 | 2.7967 | | 2.8045 | 4.1885 | 2500 | 2.7965 | | 2.8146 | 5.0262 | 3000 | 2.7904 | | 2.8105 | 5.8639 | 3500 | 2.7910 | | 2.8075 | 6.7016 | 4000 | 2.7907 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
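As a reading aid (not part of the original card): a sketch of how the listed hyperparameters map onto `transformers` `Seq2SeqTrainingArguments`, assuming the standard SpeechT5 fine-tuning setup; `output_dir` is illustrative.

```python
# Sketch: the card's hyperparameters expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="zlm_b128_le3_s4000",  # illustrative name
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,    # 16 * 8 = effective batch size of 128
    warmup_steps=2000,
    max_steps=4000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                        # "Native AMP" mixed precision
)
```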
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "zlm_b128", "results": []}]}
mikhail-panzo/zlm_b128_le3_s4000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:32:46+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
zlm\_b128 ========= This model is a fine-tuned version of microsoft/speecht5\_tts on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.7907 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.001 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.41.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the GAIR/lima dataset. It achieves the following results on the evaluation set: - Loss: 2.4176 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0635 | 0.91 | 5 | 1.9126 | | 1.9282 | 2.0 | 11 | 1.8814 | | 1.7541 | 2.91 | 16 | 2.2656 | | 1.5669 | 4.0 | 22 | 2.2188 | | 1.3975 | 4.91 | 27 | 2.2543 | | 1.2431 | 6.0 | 33 | 2.3338 | | 1.1081 | 6.91 | 38 | 2.3438 | | 1.0212 | 8.0 | 44 | 2.4276 | | 0.9554 | 8.91 | 49 | 2.4176 | | 0.9463 | 9.09 | 50 | 2.4176 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
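As a reading aid (not part of the original card): the listed recipe expressed as `transformers` `TrainingArguments`, from a single process's point of view; the 16-GPU distributed setup comes from the launcher and is not shown, and `output_dir` is illustrative.

```python
# Sketch: the card's hyperparameters as TrainingArguments (per-process view).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test",               # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,   # 4 * 16 GPUs * 2 = effective batch of 128
    num_train_epochs=10.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```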
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["GAIR/lima"], "base_model": "tiiuae/falcon-7b", "model-index": [{"name": "test", "results": []}]}
pkarypis/test
null
[ "transformers", "tensorboard", "safetensors", "falcon", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "custom_code", "dataset:GAIR/lima", "base_model:tiiuae/falcon-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:35:07+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #falcon #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #custom_code #dataset-GAIR/lima #base_model-tiiuae/falcon-7b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
test ==== This model is a fine-tuned version of tiiuae/falcon-7b on the GAIR/lima dataset. It achieves the following results on the evaluation set: * Loss: 2.4176 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 16 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10.0 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.1.2 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #falcon #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #custom_code #dataset-GAIR/lima #base_model-tiiuae/falcon-7b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
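The card's "How to Get Started" section above is empty, so here is a minimal sketch inferred only from the repository tags (t5, text2text-generation) and the model id; the input sentence is illustrative for a simplification model.

```python
# Minimal sketch inferred from the repo tags; not from the card itself.
from transformers import pipeline

simplify = pipeline("text2text-generation", model="Lucia01/t5_simplification_finetuned")
result = simplify("The committee deliberated at considerable length before reaching a verdict.")
print(result[0]["generated_text"])
```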
{"library_name": "transformers", "tags": []}
Lucia01/t5_simplification_finetuned
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T17:37:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0424HMA23 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0677 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7953 | 0.09 | 10 | 0.3102 | | 0.1851 | 0.18 | 20 | 0.1074 | | 0.1059 | 0.27 | 30 | 0.0854 | | 0.0883 | 0.36 | 40 | 0.0786 | | 0.0853 | 0.45 | 50 | 0.0757 | | 0.0884 | 0.54 | 60 | 0.0741 | | 0.0784 | 0.63 | 70 | 0.0724 | | 0.0726 | 0.73 | 80 | 0.0840 | | 0.085 | 0.82 | 90 | 0.0728 | | 0.0871 | 0.91 | 100 | 0.0770 | | 0.0839 | 1.0 | 110 | 0.0698 | | 0.064 | 1.09 | 120 | 0.0797 | | 0.0714 | 1.18 | 130 | 0.0778 | | 0.0777 | 1.27 | 140 | 0.0738 | | 0.0712 | 1.36 | 150 | 0.0684 | | 0.0799 | 1.45 | 160 | 0.0680 | | 0.0658 | 1.54 | 170 | 0.0653 | | 0.0631 | 1.63 | 180 | 0.0699 | | 0.0589 | 1.72 | 190 | 0.0674 | | 0.0665 | 1.81 | 200 | 0.0637 | | 0.0578 | 1.9 | 210 | 0.0672 | | 0.053 | 1.99 | 220 | 0.0650 | | 0.0368 | 2.08 | 230 | 0.0729 | | 0.0343 | 2.18 | 240 | 0.0792 | | 0.0331 | 2.27 | 250 | 0.0727 | | 0.0339 | 2.36 | 260 | 0.0701 | | 0.0336 | 2.45 | 270 | 0.0694 | | 0.0308 | 2.54 | 280 | 0.0691 | | 0.0307 | 2.63 | 290 | 0.0684 | | 0.0323 | 2.72 | 300 | 0.0681 | | 0.0343 | 2.81 | 310 | 0.0679 | | 0.0316 | 2.9 | 320 | 0.0677 | | 0.0347 | 2.99 | 330 | 0.0677 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
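As a reading aid (not part of the original card): the V0424HMA23 through V0424HMA26 cards in this dump share this exact recipe and differ only in their loss curves, so one sketch covers all four, assuming the standard `transformers` `TrainingArguments` API (`cosine_with_restarts` is a valid `lr_scheduler_type` value); `output_dir` is illustrative.

```python
# Sketch: the shared V0424HMA2x recipe as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="V0424HMA23",         # illustrative
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # 8 * 16 = effective batch size of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```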
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA23", "results": []}]}
Litzy619/V0424HMA23
null
[ "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-27T17:41:25+00:00
[]
[]
TAGS #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0424HMA23 ========== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0677 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 80 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0424HMA24 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4088 | 0.09 | 10 | 0.1480 | | 0.1382 | 0.18 | 20 | 0.1065 | | 0.1006 | 0.27 | 30 | 0.0944 | | 0.0963 | 0.36 | 40 | 0.0871 | | 0.0837 | 0.45 | 50 | 0.0782 | | 0.0844 | 0.54 | 60 | 0.0803 | | 0.0773 | 0.63 | 70 | 0.0687 | | 0.0718 | 0.73 | 80 | 0.0837 | | 0.0798 | 0.82 | 90 | 0.0726 | | 0.0967 | 0.91 | 100 | 0.0860 | | 0.0885 | 1.0 | 110 | 0.0745 | | 0.068 | 1.09 | 120 | 0.0898 | | 0.0694 | 1.18 | 130 | 0.0772 | | 0.0686 | 1.27 | 140 | 0.0705 | | 0.0653 | 1.36 | 150 | 0.0702 | | 0.07 | 1.45 | 160 | 0.0712 | | 0.0655 | 1.54 | 170 | 0.0729 | | 0.0664 | 1.63 | 180 | 0.0681 | | 0.0642 | 1.72 | 190 | 0.0615 | | 0.0654 | 1.81 | 200 | 0.0732 | | 0.0652 | 1.9 | 210 | 0.0734 | | 0.0626 | 1.99 | 220 | 0.0688 | | 0.0381 | 2.08 | 230 | 0.0752 | | 0.0426 | 2.18 | 240 | 0.0677 | | 0.037 | 2.27 | 250 | 0.0731 | | 0.0395 | 2.36 | 260 | 0.0658 | | 0.0339 | 2.45 | 270 | 0.0664 | | 0.0331 | 2.54 | 280 | 0.0690 | | 0.03 | 2.63 | 290 | 0.0689 | | 0.0316 | 2.72 | 300 | 0.0687 | | 0.0382 | 2.81 | 310 | 0.0674 | | 0.0326 | 2.9 | 320 | 0.0667 | | 0.0332 | 2.99 | 330 | 0.0667 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA24", "results": []}]}
Litzy619/V0424HMA24
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-27T17:42:12+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0424HMA24 ========== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0667 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 80 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0424HMA25 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0618 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6773 | 0.09 | 10 | 0.1606 | | 0.1594 | 0.18 | 20 | 0.1103 | | 0.111 | 0.27 | 30 | 0.0895 | | 0.0969 | 0.36 | 40 | 0.0832 | | 0.088 | 0.45 | 50 | 0.0802 | | 0.1033 | 0.54 | 60 | 0.0900 | | 0.0892 | 0.63 | 70 | 0.0795 | | 0.0821 | 0.73 | 80 | 0.1137 | | 0.09 | 0.82 | 90 | 0.0797 | | 0.0854 | 0.91 | 100 | 0.0695 | | 0.0797 | 1.0 | 110 | 0.0663 | | 0.0675 | 1.09 | 120 | 0.0694 | | 0.0638 | 1.18 | 130 | 0.0916 | | 0.0756 | 1.27 | 140 | 0.0697 | | 0.0645 | 1.36 | 150 | 0.0780 | | 0.0706 | 1.45 | 160 | 0.0718 | | 0.072 | 1.54 | 170 | 0.0709 | | 0.0779 | 1.63 | 180 | 0.0697 | | 0.0711 | 1.72 | 190 | 0.0674 | | 0.0702 | 1.81 | 200 | 0.0735 | | 0.0626 | 1.9 | 210 | 0.0652 | | 0.0578 | 1.99 | 220 | 0.0626 | | 0.0374 | 2.08 | 230 | 0.0751 | | 0.0423 | 2.18 | 240 | 0.0685 | | 0.0341 | 2.27 | 250 | 0.0656 | | 0.0343 | 2.36 | 260 | 0.0663 | | 0.0377 | 2.45 | 270 | 0.0662 | | 0.0335 | 2.54 | 280 | 0.0633 | | 0.0306 | 2.63 | 290 | 0.0634 | | 0.0369 | 2.72 | 300 | 0.0628 | | 0.0373 | 2.81 | 310 | 0.0621 | | 0.0348 | 2.9 | 320 | 0.0618 | | 0.0359 | 2.99 | 330 | 0.0618 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA25", "results": []}]}
Litzy619/V0424HMA25
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-27T17:42:17+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0424HMA25 ========== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0618 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 80 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
null
# V0424HMA26

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0706

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5067 | 0.09 | 10 | 0.1397 |
| 0.1485 | 0.18 | 20 | 0.1057 |
| 0.1038 | 0.27 | 30 | 0.0912 |
| 0.0895 | 0.36 | 40 | 0.0768 |
| 0.0832 | 0.45 | 50 | 0.0716 |
| 0.085 | 0.54 | 60 | 0.0725 |
| 0.0765 | 0.63 | 70 | 0.0681 |
| 0.0702 | 0.73 | 80 | 0.0656 |
| 0.0736 | 0.82 | 90 | 0.0668 |
| 0.0792 | 0.91 | 100 | 0.0605 |
| 0.0774 | 1.0 | 110 | 0.0694 |
| 0.0591 | 1.09 | 120 | 0.0754 |
| 0.0665 | 1.18 | 130 | 0.0804 |
| 0.0707 | 1.27 | 140 | 0.0676 |
| 0.0618 | 1.36 | 150 | 0.0694 |
| 0.0661 | 1.45 | 160 | 0.0681 |
| 0.0584 | 1.54 | 170 | 0.0812 |
| 0.0617 | 1.63 | 180 | 0.0667 |
| 0.0519 | 1.72 | 190 | 0.0681 |
| 0.0666 | 1.81 | 200 | 0.0688 |
| 0.0553 | 1.9 | 210 | 0.0698 |
| 0.0513 | 1.99 | 220 | 0.0691 |
| 0.0371 | 2.08 | 230 | 0.0675 |
| 0.0325 | 2.18 | 240 | 0.0770 |
| 0.0276 | 2.27 | 250 | 0.0784 |
| 0.0317 | 2.36 | 260 | 0.0759 |
| 0.0314 | 2.45 | 270 | 0.0726 |
| 0.0291 | 2.54 | 280 | 0.0684 |
| 0.0262 | 2.63 | 290 | 0.0697 |
| 0.0264 | 2.72 | 300 | 0.0712 |
| 0.0322 | 2.81 | 310 | 0.0711 |
| 0.0289 | 2.9 | 320 | 0.0707 |
| 0.0304 | 2.99 | 330 | 0.0706 |

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0424HMA26", "results": []}]}
Litzy619/V0424HMA26
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-27T17:43:10+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0424HMA26 ========== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0706 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 80 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
peft
## Training procedure


The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
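The block above is the standard 4-bit NF4 `bitsandbytes` config that PEFT writes into auto-generated cards. A minimal sketch of how such a config is typically constructed with the `transformers` integration follows; the base model is not recorded in the card, so the checkpoint name here is only a guess from the repo id.

```python
# Sketch of the 4-bit NF4 quantization config listed above. The base model
# "meta-llama/Llama-2-13b-hf" is inferred from the repo id and may be wrong.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```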
{"library_name": "peft"}
vicaloy/llama-2-13-b-checkpoint
null
[ "peft", "region:us" ]
null
2024-04-27T17:46:28+00:00
[]
[]
TAGS #peft #region-us
## Training procedure


The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n- PEFT 0.4.0" ]
null
null
https://civitai.com/models/420159/mami-nanami-kanojo-okarishimasu-lora
{"license": "creativeml-openrail-m"}
LarryAIDraw/nanamimami-nvwls-v1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-27T17:49:06+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
token-classification
transformers
# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1380
- F1: 0.8580

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2586 | 1.0 | 525 | 0.1550 | 0.8259 |
| 0.1285 | 2.0 | 1050 | 0.1407 | 0.8504 |
| 0.0792 | 3.0 | 1575 | 0.1380 | 0.8580 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.0
- Datasets 2.19.0
- Tokenizers 0.19.1
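Since the card gives no usage snippet, here is a minimal inference sketch using the standard `transformers` pipeline API, assuming the checkpoint is available on the Hub under the repo id shown in this record; the "panx-de" suffix indicates German NER, hence the German example sentence.

```python
# Minimal usage sketch for the fine-tuned NER model; assumes the repo id
# "joacorf33/xlm-roberta-base-finetuned-panx-de" is publicly available.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="joacorf33/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)

print(ner("Jeff Dean arbeitet bei Google in Mountain View."))
```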
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]}
joacorf33/xlm-roberta-base-finetuned-panx-de
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T17:50:20+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1380 * F1: 0.8580 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]