Columns: url (string, lengths 23 to 7.17k), text (string, lengths 0 to 1.65M)
https://huggingface.co/spaces/awacke1/Memory-Shared
This Space is sleeping due to inactivity.
https://huggingface.co/averma26
Abhishek Verma averma26 Research interests NLP; learning about large language models; would like to learn about RLHF and multi-modal applications Organizations
https://huggingface.co/spaces/awacke1/AI.Dashboard.HEDIS
App Files Community
https://huggingface.co/spaces/awacke1/Model-Easy-Button-Generative-Images-runwayml-stable-diffusion-v1-5
runtime error failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to create new parent process: namespace path: lstat /proc/0/ns/ipc: no such file or directory: unknown Container logs:
https://huggingface.co/spaces/awacke1/SaveAndReloadDataset
runtime error
Traceback (most recent call last):
  File "/home/user/.local/bin/streamlit", line 5, in <module>
    from streamlit.cli import main
  File "/home/user/.local/lib/python3.8/site-packages/streamlit/__init__.py", line 70, in <module>
    from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator
  File "/home/user/.local/lib/python3.8/site-packages/streamlit/delta_generator.py", line 73, in <module>
    from streamlit.elements.arrow_altair import ArrowAltairMixin
  File "/home/user/.local/lib/python3.8/site-packages/streamlit/elements/arrow_altair.py", line 25, in <module>
    from altair.vegalite.v4.api import Chart
ModuleNotFoundError: No module named 'altair.vegalite.v4'
Container logs:
https://huggingface.co/spaces/awacke1/Health-Care-AI-and-Datasets
runtime error
2023-09-26 00:32:39.483 INFO matplotlib.font_manager: generated new fontManager
Traceback (most recent call last):
  File "/home/user/.local/bin/streamlit", line 5, in <module>
    from streamlit.web.cli import main
  File "/home/user/.local/lib/python3.8/site-packages/streamlit/__init__.py", line 55, in <module>
    from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator
  File "/home/user/.local/lib/python3.8/site-packages/streamlit/delta_generator.py", line 45, in <module>
    from streamlit.elements.arrow_altair import ArrowAltairMixin
  File "/home/user/.local/lib/python3.8/site-packages/streamlit/elements/arrow_altair.py", line 36, in <module>
    from altair.vegalite.v4.api import Chart
ModuleNotFoundError: No module named 'altair.vegalite.v4'
Container logs:
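This failure mode is a dependency break rather than an app bug: Altair 5.0 removed the altair.vegalite.v4 module, while the Streamlit version installed in these Spaces still imports it at startup. A minimal requirements.txt sketch of the usual fix; the pin is an assumption about what these Spaces need, not taken from their repositories:

streamlit
altair<5  # Altair 5 dropped altair.vegalite.v4, which this Streamlit build still imports

Alternatively, upgrading to a Streamlit release that supports Altair 5 resolves the same error.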
https://huggingface.co/Classroom-workshop/assignment2-francesco
PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library.
Usage (with Stable-baselines3): TODO: Add your code
Evaluation results: mean_reward on LunarLander-v2 (self-reported): 311.40 +/- 10.16
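The usage section above is still a TODO, so here is a minimal sketch of loading and evaluating the checkpoint with stable-baselines3 and the huggingface_sb3 helper. The checkpoint filename "ppo-LunarLander-v2.zip" is an assumption about the repo layout, not confirmed by the card:

import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the SB3 zip checkpoint from the Hub (filename assumed).
checkpoint = load_from_hub(
    repo_id="Classroom-workshop/assignment2-francesco",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent; with gymnasium>=1.0 the env id is "LunarLander-v3".
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")

Note: older SB3 checkpoints were trained against gym rather than gymnasium; if loading fails, PPO.load accepts custom_objects to patch incompatible saved attributes.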
https://huggingface.co/spaces/awacke1/chatgpt-demo
This Space is sleeping due to inactivity.
https://huggingface.co/spaces/awacke1/CSVDatasetAnalyzer
runtime error Scheduling failure: not enough hardware capacity Container logs: Fetching error logs...
https://huggingface.co/Classroom-workshop/assignment2-llama
PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library.
Usage (with Stable-baselines3): TODO: Add your code
Evaluation results: mean_reward on LunarLander-v2 (self-reported): 200.68 +/- 7.11
https://huggingface.co/login?next=%2FClassroom-workshop
https://huggingface.co/Classroom-workshop/assignment2-thom
PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library.
Usage (with Stable-baselines3): TODO: Add your code
Evaluation results: mean_reward on LunarLander-v2 (self-reported): 0.000
https://huggingface.co/Classroom-workshop/assignment1-joane
S2T-SMALL-LIBRISPEECH-ASR

s2t-small-librispeech-asr is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR). The S2T model was proposed in this paper and released in this repository.

Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard autoregressive cross-entropy loss and generates the transcripts autoregressively.

Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR). See the model hub to look for other S2T checkpoints.

How to use
As this is a standard sequence-to-sequence transformer model, you can use the generate method to generate transcripts by passing the speech features to the model.

Note: The Speech2TextProcessor object uses torchaudio to extract the filter bank features, and the tokenizer depends on sentencepiece, so be sure to install those packages before running the examples. You can either install them as extra speech dependencies with pip install "transformers[speech,sentencepiece]" or install the packages separately with pip install torchaudio sentencepiece.

import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

input_features = processor(
    ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt"
).input_features  # batch size 1
generated_ids = model.generate(input_features)
transcription = processor.batch_decode(generated_ids)

Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the LibriSpeech "clean" and "other" test datasets.

from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")  # change "clean" to "other" for the other test set
wer = load_metric("wer")

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)

def map_to_pred(batch):
    # With batched=True, batch["audio"] is a list of {"array", "sampling_rate"} dicts.
    arrays = [audio["array"] for audio in batch["audio"]]
    features = processor(arrays, sampling_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")
    gen_tokens = model.generate(input_features, attention_mask=attention_mask)
    batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))

Result (WER):

Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on the LibriSpeech ASR Corpus, a dataset consisting of approximately 1000 hours of 16 kHz read English speech.

Training procedure
Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel filter bank features automatically from WAV/FLAC audio files via PyKaldi or torchaudio. Utterance-level CMVN (cepstral mean and variance normalization) is then applied to each example. The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 10,000.

Training
The model is trained with standard autoregressive cross-entropy loss and with SpecAugment. The encoder receives speech features, and the decoder generates the transcripts autoregressively.

BibTeX entry and citation info
@inproceedings{wang2020fairseqs2t,
  title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
  author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
  booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
  year = {2020},
}
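The preprocessing described above can be reproduced directly with torchaudio; a minimal sketch, assuming a 16 kHz mono recording (the file path is illustrative, and fairseq's exact CMVN may differ in small details):

import torchaudio
import torchaudio.compliance.kaldi as kaldi

# Load a 16 kHz mono waveform; shape (1, num_samples).
waveform, sample_rate = torchaudio.load("utterance.wav")

# Kaldi-compliant 80-channel log mel filter bank features; shape (num_frames, 80).
features = kaldi.fbank(waveform, num_mel_bins=80, sample_frequency=sample_rate)

# Utterance-level CMVN: zero mean and unit variance per feature dimension.
features = (features - features.mean(dim=0)) / (features.std(dim=0) + 1e-8)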
https://huggingface.co/Classroom-workshop/assignment1-maria
S2T-SMALL-LIBRISPEECH-ASR s2t-small-librispeech-asr is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR). Same model card as Classroom-workshop/assignment1-joane above.
https://huggingface.co/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation/discussions
https://huggingface.co/Classroom-workshop/assignment2-omar
PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library.
Usage (with Stable-baselines3): TODO: Add your code
Evaluation results: mean_reward on LunarLander-v2 (self-reported): 10 +/- 7.11
https://huggingface.co/login?next=%2FIcelandAI
https://huggingface.co/spaces/awacke1/acw-dr-llama-7b-chat/discussions
https://huggingface.co/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation/tree/main
awacke1 Update README.md 7544130 about 2 months ago
https://huggingface.co/spaces/IcelandAI/AnimalsOfIceland
runtime error ModuleNotFoundError: No module named 'altair.vegalite.v4' (same Streamlit traceback as above) Container logs:
https://huggingface.co/login?next=%2FAIPairProgramming
https://huggingface.co/Classroom-workshop/assignment2-julien
PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library.
Usage (with Stable-baselines3): TODO: Add your code
Evaluation results: mean_reward on LunarLander-v2 (self-reported): 0 +/- 10.16
https://huggingface.co/spaces/IcelandAI/Iceland-Top-Ten-Things-To-See
runtime error ModuleNotFoundError: No module named 'altair.vegalite.v4' (same Streamlit traceback as above) Container logs:
https://huggingface.co/spaces/IcelandAI/Foods-and-Drinks-of-Iceland
runtime error ModuleNotFoundError: No module named 'altair.vegalite.v4' (same Streamlit traceback as above) Container logs:
https://huggingface.co/spaces/awacke1/acw-dr-llama-7b-chat/tree/main
1.52 kB initial commit about 1 month ago
272 Bytes Update README.md 20 days ago
32.7 kB Update app.py 1 day ago
30 kB Update backupapp.py 8 days ago
144 Bytes Update requirements.txt about 1 month ago
1.11 kB Create templates.py about 1 month ago
https://huggingface.co/spaces/awacke1/Health.Assessments.Summarizer
runtime error ModuleNotFoundError: No module named 'altair.vegalite.v4' (same Streamlit traceback as above) Container logs:
https://huggingface.co/spaces/AIZero2HeroBootcamp/3DHuman
App Files Community
https://huggingface.co/spaces/awacke1/PromptSuperHeroImageGenerator/discussions
https://huggingface.co/spaces/awacke1/CarePlanQnAWithContext
App Files Community
https://huggingface.co/login?next=%2Fcollections%2Fawacke1%2Fai-favorite-pipelines-650279cfb5d4e364c21adcdf
https://huggingface.co/spaces/awacke1/VoiceGPT15
App Files Community
https://huggingface.co/spaces/AIZero2HeroBootcamp/ChatGPTandLangchain
This Space is sleeping due to inactivity.
https://huggingface.co/spaces/AIZero2HeroBootcamp/StaticHTML5Playcanvas
App Files Community
https://huggingface.co/spaces/AIZero2HeroBootcamp/AnimatedGifGallery
This Space is sleeping due to inactivity.
https://huggingface.co/spaces/awacke1/CloneAnyVoice
App Files Community 1
https://huggingface.co/spaces/awacke1/ChatGPT-Streamlit-2
App Files
https://huggingface.co/Classroom-workshop/assignment1-francesco
S2T-SMALL-LIBRISPEECH-ASR s2t-small-librispeech-asr is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR). Same model card as Classroom-workshop/assignment1-joane above.
https://huggingface.co/spaces/AIZero2HeroBootcamp/MultiPDF-QA-ChatGPT-Langchain
This Space is sleeping due to inactivity.
https://huggingface.co/Classroom-workshop/assignment2-llamas
PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library.
Usage (with Stable-baselines3): TODO: Add your code
Evaluation results: mean_reward on LunarLander-v2 (self-reported): 0.000
https://huggingface.co/spaces/awacke1/Image-to-Text-Salesforce-blip-image-captioning-base
App Files Community
https://huggingface.co/spaces/AIZero2HeroBootcamp/VideoToAnimatedGif
Runtime error App Files Community
https://huggingface.co/Classroom-workshop/assignment1-jack
S2T-SMALL-LIBRISPEECH-ASR s2t-small-librispeech-asr is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR). Same model card as Classroom-workshop/assignment1-joane above.
https://huggingface.co/spaces/awacke1/PromptSuperHeroImageGenerator/tree/main
awacke1 Update README.md ce68d10 about 1 month ago
https://huggingface.co/spaces/AIZero2HeroBootcamp/TranscriptAILearnerFromYoutube
This Space is sleeping due to inactivity.
https://huggingface.co/spaces/AIZero2HeroBootcamp/ClassDescriptionAndExamplesStreamlit
This Space is sleeping due to inactivity.
https://huggingface.co/spaces/AIZero2HeroBootcamp/ClassDescriptionAndExamples
App Files Community
https://huggingface.co/spaces/AIZero2HeroBootcamp/GetAllContentFromWebURL
runtime error Scheduling failure: not enough hardware capacity Container logs: Fetching error logs...
https://huggingface.co/cnguye72
Caitlin Nguyen cnguye72 Research interests None yet Organizations spaces 1 No application file 🔥 SuperSimple2LinerText2Speech models None public yet datasets None public yet
https://huggingface.co/aphn
Andrew Phan aphn Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/Connorsharp
Connor Sharp Connorsharp Research interests None yet Organizations spaces 2 No application file 🔥 SuperSimple2LinerText2Speech Stopped 🐨 AIPairProgramming1 models None public yet datasets None public yet
https://huggingface.co/ocrane230249
Olivia Crane ocrane230249 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/AliyaJ
J AliyaJ Research interests None yet Organizations spaces 1 No application file 🐨 Supersimple2LinerTexttoSpeech models None public yet datasets None public yet
https://huggingface.co/gmutombo
Gustave gmutombo Research interests None yet Organizations spaces 1 No application file 🐢 SSupersimpletext models None public yet datasets None public yet
https://huggingface.co/seanwendlandt
Sean Wendlandt seanwendlandt Research interests None yet Organizations spaces 5 🐢 VideoToAnimatedGif Stopped 👀 AI Pairprogramming Runtime error 🐨 SuperSimple2linerText2Speech 🦀 HTML5Example 🏆 HTML5 Interactivity models None public yet datasets None public yet
https://huggingface.co/mayamercho
Maya Mercho mayamercho Research interests None yet Organizations spaces 1 Stopped Facebook Fastspeech2 En Ljspeech 0512 models None public yet datasets None public yet
https://huggingface.co/spaces/Classroom-workshop/assignments-leaderboard
runtime error Space failed to start. Exit code: 1 Container logs: Fetching error logs...
https://huggingface.co/byungchulhan
Byung-Chul Han byungchulhan Research interests None yet Organizations spaces 4 🐨 HTML5Example No application file 💻 SuperSimple2linerText2Speech 🦀 HTML5interactivity Stopped 👀 AIPairProgramming1 models None public yet datasets None public yet
https://huggingface.co/Hbali
Harshita Bali Hbali Research interests Big Data Analytics, Deep Learning and Neural Networks, Data Mining, Cluster analysis. Organizations spaces 4
https://huggingface.co/matt-westerhaus
Matt Westerhaus matt-westerhaus mwesterh_uhg Research interests None yet Organizations spaces 2 🐢 HTML5Interactivity Runtime error ⚡ AIPairProgramming models None public yet datasets None public yet
https://huggingface.co/reecemiron
Reece Miron reecemiron Research interests None yet Organizations spaces 2 👀 SuperSimple2LinerTextToSpeech No application file 👀 AIPairProgramming1 models None public yet datasets None public yet
https://huggingface.co/willholt
William Holt willholt Research interests None yet Organizations spaces 3 Stopped ⛓️ LangFlow Stopped 😻 JAMA GPT Stopped 📊 AIPairProgramming1 models None public yet datasets None public yet
https://huggingface.co/dantemoonopt
Dante Moon dantemoonopt Research interests None yet Organizations spaces 1 Stopped 🏃 AiPairProgramming1 models None public yet datasets None public yet
https://huggingface.co/nkelly28
Nolan Kelly nkelly28 Research interests None yet Organizations spaces 1 Stopped SuperSimple2linerText2Speech models None public yet datasets None public yet
https://huggingface.co/jzachmann
Jordan Zachmann jzachmann Research interests None yet Organizations spaces 5 No application file 📊 VideoToAnimatedGif 🐠 HTML5Example2 No application file 🏆 HTML5Example Stopped 🏃 SuperSimple2LinerText2Speech 🚀 HTML5Interactivity models None public yet datasets None public yet
https://huggingface.co/guysnovelutumba
Guysnove Lutumba guysnovelutumba Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/misterkait
Kathryn Wright misterkait Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/m-lubinski13
Megan Lubinski m-lubinski13 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/Ccompson
Cameron Compson Ccompson Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1
This Space is sleeping due to inactivity.
https://huggingface.co/madelinekirke
Madeline Kirke madelinekirke Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/ninapd
Nina Perez-Dubson ninapd Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/garrettv
Garrett Vareldzis garrettv Research interests None yet Organizations spaces 3 No application file 📊 HTML5interactivityDemo Runtime error 📉 Facebook Fastspeech2 En Ljspeech 1111 No application file 🐨 Facebook Fastspeech2 En Ljspeech Gv models None public yet datasets None public yet
https://huggingface.co/lainestubbs
Laine Stubbs lainestubbs Research interests None yet Organizations spaces 1 Stopped 🏃 Facebook Fastspeech2 En Ljspeech 0731 models None public yet datasets None public yet
https://huggingface.co/samschock
Samuel Schock samschock Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/lekreitzer
Lauren Kreitzer lekreitzer Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/jesterkq
Keely Jester jesterkq Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/spaces/awacke1/WikipediaProfilerTestforDatasets
App Files Community
https://huggingface.co/hannahross5
Hannah Ross hannahross5 Research interests None yet Organizations spaces 3 🐠 HTML5interactivitydemo Stopped 📚 Memory Runtime error ⚡ Facebook Fastspeech2 En Ljspeech 0731 models None public yet datasets None public yet
https://huggingface.co/adianaleidich
Adiana Leidich adianaleidich Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/jadaiacco
Jada Iacco jadaiacco Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/sydneyharper
Sydney Harper sydneyharper Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/jsamuelson
John Samuelson jsamuelson Research interests None yet Organizations spaces 1 Stopped 📈 Facebook Fastspeech2 En Ljspeech 069710 models None public yet datasets None public yet
https://huggingface.co/kyliefrese
Kylie Frese kyliefrese Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/kaptainkadar
Alex Kadar kaptainkadar Research interests None yet Organizations spaces 2 🏆 HTML5interactivitydemo Stopped 🌍 Facebook Fastspeech2 En Ljspeech Kk0731 models None public yet datasets None public yet
https://huggingface.co/dfinck1
Daniel Casey Finck dfinck1 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/lcunningham22
Alexis Cunningham lcunningham22 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/Nidian
elizondo Nidian Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/spaces/Yntec/ToyWorld
App Files Community 3
https://huggingface.co/bbaker272514
Bridget Baker bbaker272514 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/brettstrandskov
Brett Strandskov brettstrandskov Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/anyarob
Anya Roberts anyarob Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/mlito17
Mario Lito mlito17 Research interests None yet Organizations spaces 2 Stopped 📊 SuperSimpleTwoLineText2Speech Stopped 🐨 AIPairProgramming1 models None public yet datasets None public yet
https://huggingface.co/sofiapostigo
Sofia Postigo sofiapostigo Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/msalcena
Mathew Salcena msalcena Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/sawyersmith1
Sawyer Smith sawyersmith1 Research interests None yet Organizations spaces 2 🐠 HTML5DemoActivity Stopped 🐨 Facebook Fastspeech2 En Ljspeech 0998 models None public yet datasets None public yet
https://huggingface.co/darbysween
Darby Sween darbysween Research interests None yet Organizations spaces 1 Stopped 😻 ChatGPTandLangchain models None public yet datasets None public yet
https://huggingface.co/josiegreener
josephine greener josiegreener Research interests None yet Organizations spaces 1 No application file 📈 HTML5demo models None public yet datasets None public yet
https://huggingface.co/zfrieseke
Zachary Frieseke zfrieseke Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/willdeters2
William Deters willdeters2 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/login?next=%2FAIZero2HeroBootcamp