JonathanSum#8528: I got a lot of answers, which were "sex toy", "sex xxx"... I don't remember the question.
JonathanSum#8528: https://cdn.discordapp.com/attachments/922424173916196955/980809486061142036/unknown.png
JonathanSum#8528: How can a statistics question give an answer related to sex with T0pp?
JonathanSum#8528: https://cdn.discordapp.com/attachments/922424173916196955/980811587893993492/unknown.png
JonathanSum#8528: I am going to stop posting right now. The t0pp keeps answering "sex". https://cdn.discordapp.com/attachments/922424173916196955/980811949942136882/unknown.png
JonathanSum#8528: I highly suggest you try using it to answer Reddit questions. You should see a similar result.
Omar Sanseviero#6198: Hey @JonathanSum! This is very interesting, would you mind opening a Discussion in the repo with this?
Omar Sanseviero#6198: Happy to open it if you're too busy
JonathanSum#8528: @Omar Sanseviero
Repo? You mean the one in github?
JonathanSum#8528: I guess I am busy. If you want, you can open it. If you want me to open it, please tell me where to open it?
cakiki#9145: I've also raised this in the BigScience impact channel on slack
JonathanSum#8528: I don't feel this happens with specific questions. I feel it happens with general questions on AskReddit.
JonathanSum#8528: You can reproduce it by repeatedly answering questions from here:
https://www.reddit.com/r/AskReddit
Omar Sanseviero#6198: Yeah I meant a discussion in https://huggingface.co/bigscience/T0pp for doing this in the open
Omar Sanseviero#6198: I will do it 🙂
Omar Sanseviero#6198: I tried the top 10 questions though and was not able to generate sex-related answers, but still this should be disclosed in the model card
JonathanSum#8528: I am going to try it again. Like 100 questions.
JonathanSum#8528: You need to try the new question.
JonathanSum#8528: This one doesn't have the word sex, but I am not sure it is related. https://cdn.discordapp.com/attachments/922424173916196955/980841681169555496/unknown.png,https://cdn.discordapp.com/attachments/922424173916196955/980841681412837396/unknown.png
JonathanSum#8528: https://cdn.discordapp.com/attachments/922424173916196955/980841909222244462/unknown.png
JonathanSum#8528: Just right after the question I posted, It answered "sex" again.
JonathanSum#8528: https://cdn.discordapp.com/attachments/922424173916196955/980842030131462244/unknown.png
JonathanSum#8528: I even got the answer "sex toy" this morning.
JonathanSum#8528: @cakiki Were you trying to say something?
JonathanSum#8528: I think LeCun is right. A larger text model won't work. It is just the same thing; it is just good at prediction.
cakiki#9145: Ah yes sorry, forgot to continue the message. Was gonna say "Your Mom" jokes are inherently offensive
cakiki#9145: as in by design
cakiki#9145: Large as it is, it's still only "11B"
cakiki#9145: (11B comment is only meant to address the Lecun comment)
cakiki#9145: not the issue you've noticed which itself is very valid!
cakiki#9145: (Also I agree with you that "just larger" won't work 🙂 )
JonathanSum#8528: Thus, it shouldn't give a lot of answers that have the keyword "sex"?
cakiki#9145: it's weird that it does, because it has to come from a particular finetuning dataset. (The C4 dataset filters out the word "sex")
JonathanSum#8528: I saw this question this morning. https://cdn.discordapp.com/attachments/922424173916196955/980844812045541396/unknown.png
JonathanSum#8528: T0pp gave me "XXX toy". I post it here just in case you want to report it to the Slack.
cakiki#9145: Very weird. Was "sex toy" part of the reddit comments?
JonathanSum#8528: https://www.reddit.com/r/AskReddit/comments/v0yxtf/if_you_could_invent_anything_useless_but_usefull/
JonathanSum#8528: Nope.
JonathanSum#8528: It is just that a lot of answers from T0pp include the word "sex" and are very sexualized.
Omar Sanseviero#6198: Btw I opened the discussion here an hour ago, I forgot to update here: https://huggingface.co/bigscience/T0pp/discussions/1
Quad#3028: Hi everyone,
I have a multi-label classification problem and not sure how to approach it.
I have about 8k classes and 10m labeled documents(English, 150 to 600 words each)
Any suggestions or pointers/links are appreciated.
Which model is better suited for this task, and how should i fine-tune it?
I am coming from image-processing/computer-vision world, new to NLP. Any help is appreciated.
Quad#3028: I actually did try a couple of bert/distilbert models for multilabel classification with different test datasets to see how they work first
Quad#3028: I kind of get it but what i am not sure about is how to approach 8k classes thing, i could just do what i did for fewer class datasets but not sure how it would perform
Quad#3028: I am looking in to some extreme multi label classification papers right now
Quad#3028: but i guess i should try the models i tried with fewer classes if i want use HF libs to see how it goes first
Merve#3234: I'd suggest you to look at zero shot models 😅 you can go to hf.co/models and filter the zero shot text classification pipeline on the left.
Merve#3234: it definitely will tackle your issue with 8k classes, I'm sure
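(For reference, a minimal sketch of that zero-shot pipeline; the model name and labels below are only examples, and scoring thousands of candidate labels per document will be slow.)
```python
from transformers import pipeline

# Zero-shot classification scores each candidate label against the input text.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

candidate_labels = ["finance", "sports", "health"]  # in practice, this could be a much larger label set
result = classifier("The central bank raised interest rates again.", candidate_labels)
print(result["labels"][0], result["scores"][0])  # top label and its score
```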
Quad#3028: Okay, looks promising
Quad#3028: But all my labels are just some letters and numbers, not actual words. So I guess I still have to train them
Quad#3028: Can i do that with zero-shot models?
Quad#3028: Okay, a custom DataLoader seems like the way to go, going to try it
Merve#3234: nope 😦
muskannnnn#9204: Hello everyone!
I'll be starting a summer internship with a local psychologist where I need to create an NLP application which answers FAQs related to mental health (and then host the model on the clinic's website).
Now, the clinic will be giving me a long and extensive list of FAQs from their database. I wanted to ask, how can I approach this NLP Problem? Is this a case of closed domain question answering?
muskannnnn#9204: ^update: i'm sorry but ive realised that this is a text similarity use case. however, can someone confirm if im correct?
godwinh19#0974: My first approach would be sentence similarity... fine-tuning my model on the given dataset.
muskannnnn#9204: Thank you, will start working along these lines 👍
Wafaa#5650: Hi, I want to use XLS-R pretrained model for further finetuning and training a downstream model for something new, not ASR etc. I use PyTorch. Does anyone know of any tutorial for doing this - the getting the feature extraction part of XLS-R only? Thanks so much.
shea_fyffe#0956: It seems like a case for using a fine-tuned Question Answering model. [see here](https://huggingface.co/tasks/question-answering)
Mokshit#3640: So I have this project I'm currently working on, how would I convert a long paragraph into " different multiple sub-sections"? https://cdn.discordapp.com/attachments/922424173916196955/981740256141393960/Screenshot_2022-06-02-07-32-48-59_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg
nickmuchi#2844: Can check out my space where I tackle that 1st question, chunking long texts for summarisation: nickmuchi/article-text-summarizer hope it is what you are looking for.
muskannnnn#9204: will check this out, thank you!
Mokshit#3640: wdym by space?
Mokshit#3640: Oh nvm got it
Mokshit#3640: @nickmuchi but it wont split in different paragraphs when we give directly the text right? You are splitting into chunks when the user provides with a url
Mokshit#3640: I tried with the url method but it doesnt split into different paragraphs. It just returns me a whole summary para
ViShu#8745: Hi, I am currently reading the annotated transformer blog and am not able to understand what the src_mask is doing ?
ViShu#8745: https://cdn.discordapp.com/attachments/922424173916196955/982274036220461116/unknown.png
ViShu#8745: blog link: <http://nlp.seas.harvard.edu/annotated-transformer/>
Duluth#7138: Anyone have a suggestion on how to use embeddings to identify/extract Abstracts from scientific journals?
(for fun I don't want to use a classic Regex approach)
ViShu#8745: Started a thread.
Robert1#0234: hey everyone. I want to analyse my models and get the token probabilities or log probs of the text. Or more precisely I want to be able to evaluate the likelihood of a particular output text being generated. For example input="my name is Bob and my gender is" output=" male" or output=" female". Is this possible with the huggingface library? I dont think so from my current understanding.
Robert1#0234: I guess thats my real problem: give the input X can I work out the probability of observing Y for a typical NLP model (say GPT2). Can be log probability or other "score" of likelihood.
mr_seeker#1337: It looks like similar to medi-BERT? There was one that you gave input on symptoms and it would give the most likely answer...
Galuh#1782: Can you give me 20-50 paper recommendations to read in NLP?
godwinh19#0974: The process of padding adds special "PAD" tokens to keep all inputs in a batch the same length, right?
The src_mask ensures that these additional (and not useful for processing) tokens are masked out during attention.
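A rough illustration of that idea (not the exact Annotated Transformer code; the pad id is an assumption):
```python
import torch

pad_id = 0  # assumed padding token id
src = torch.tensor([[5, 7, 9, pad_id, pad_id],
                    [3, 4, pad_id, pad_id, pad_id]])

# True where there is a real token, False at padding positions;
# the extra dim lets it broadcast over query positions in attention.
src_mask = (src != pad_id).unsqueeze(-2)
print(src_mask.shape)  # (batch, 1, seq_len)
```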
ViShu#8745: Thank you very much !!
Dr.Inc#8332: Is there online resource for me to learn more about code generation.
Omar Sanseviero#6198: If it's about training your own model, this blog post might be interesting https://huggingface.co/blog/codeparrot
Omar Sanseviero#6198: Also https://huggingface.co/course/chapter7/6?fw=pt
Dr.Inc#8332: @Omar Sanseviero Thank you.
𝘾𝙤𝙨𝙢𝙞𝙘𝙇𝙡𝙖𝙢𝙖#1205: Having some issues with the reconstructions of an EncoderDecoderModel I'm training. Most of the outputs look something like: `<s>............`. Is this normal? And if that is normal, how many training steps should it take before the model begins generating anything intelligible?
JonathanSum#8528: I am not the expert on that. But I used a decoder model for generating text by learning one of my favorite Anime with plain PyTorch. It only trained for like 1 or 2 epochs before the out-of-memory issue, and the result was good. You can see it in my GitHub. Thus, I guess you forgot the starting word?
𝘾𝙤𝙨𝙢𝙞𝙘𝙇𝙡𝙖𝙢𝙖#1205: Thanks. 👍 It helps knowing that something is going wrong. I wasn't sure if I just needed to train it for a long time, but I wanted to avoid wasting my time like that.
JonathanSum#8528: I hope you will have a better person to come and answer your question. I suggest you to show the code too.
CKtalon#7792: I suspect it's a tokenization issue. You are feeding the model some wrongly tokenized data during the finetuning process
alighte#0403: I just fine-tuned a model and wanted to test it. This is the code to use from the HF transformers library but I never specified the tokenizer as such which is why I assume I get the error. Does anyone know 1. why the test code from HF assumes so? 2. Do I then use the tokenizer I define during the model's training? I created my own since I curated my data. Thanks! https://cdn.discordapp.com/attachments/922424173916196955/985031727334449202/Screen_Shot_2022-06-10_at_11.02.08_PM.png,https://cdn.discordapp.com/attachments/922424173916196955/985031727862935612/Screen_Shot_2022-06-10_at_11.02.18_PM.png
cakiki#9145: Hi 🙂 #ask-for-help is better suited for such questions. If you didn't train your own tokenizer, then use that of the model you used to finetune.
Rajko Radovanović#8407: Hi Folks! I studied CS with a lot of data science in undergrad, but have since stayed away from ML/AI as a practitioner. I'm looking to spin up a new project using LLMs for a couple commercial use cases: 1) extracting key themes of customer product reviews - e.g. what do people like, not like, etc; 2) Auto-classifying certain types of posts on social media, etc.
My question is, what is the best course out there or general resource for doing all this using pre-trained LLMs? Back when I was doing AI studies, the Deep Learning textbook had just come out and was SOTA + lots of Qs on Stack Overflow... I feel like that book is way too low level now and the main communities are no longer Stack Overflow. Would really appreciate any input!! I have seen the Hugging Face course btw, looks great and will go through it, wondering if there are other recommended courses, perhaps a bit deeper / more extensive with use cases, or if there are new Stack Overflow-esque forums where people discuss. This Discord is great, but hard to search!
Thank you!!
alighte#0403: Ok. Thank you. Gotcha. I actually uploaded the data in HF datasets but will train and load the tokenizer!
cakiki#9145: if your model is already trained; then i'm not sure it makes sense to train the tokenizer now
Emanuel Huber#5410: Rajko, I recommend to take a look into Aspect-based sentiment analysis task. https://paperswithcode.com/task/aspect-based-sentiment-analysis
alighte#0403: Started a thread.
Rajko Radovanović#8407: Very cool!! So in this model I predefine the aspects and get sentiments in those specific aspects? Super cool. Tbh, most often, I want the aspects auto-inferred for me … so I was just going to run a clustering algorithm on comments and see what gets combined? Any better approaches, I imagine there must be and this must be a common approach?
Rajko Radovanović#8407: I basically just want the aspects auto-generated
Emanuel Huber#5410: This is also a task in Semeval that is executed before the sentiment analysis, the aspect-extraction https://paperswithcode.com/task/aspect-extraction
Rajko Radovanović#8407: ahhh amazing, thank you so much!!
ois#8544: I have a general question regarding NLP. Let's say I have a term (which can be composed by more than one word) and a corpus. A simple example is to search for the term "gender equality" in a text, and I want to check if the text contains some similar term or terminology and where it is, also, if there is a metric of similarity, I would like to have that too.
marmiteCloud#5923: btw something I learned the hard way(YMMV), sentiment analysis / topic modelling (= product review work) is probably the most saturated commercial area (i.e. almost every medium sized company runs something else or has endlessly been approached, offered free demos, and some free analysis of their online reviews by finetuned models), so it's quite competitive vs other applications (which tend to have less consistent results that scare businesses). SM classification I don't know about so maybe better?
marmiteCloud#5923: The most advanced easy method is to use asymmetric (= search query) embeddings like those sentence transformers use - there are some pretrained models for this; "Deep Passage Retrieval" and "Rerank asymmetric embeddings" will find them.
In that you embed all documents (say, each paragraph) with sentence embeddings. Sentences covering similar semantic topics will be "close" on a "cosine similarity" metric that way (e.g. cheese + milk closer than cheese + laptop)
You need the asymmetric part to be able to look up using just a topic word (because otherwise, a word is like an extremely short document and far in embedding space from most "long" documents (your paragraphs in the text)). Asymmetric embeddings are collected from search engine data (e.g. Bing, the most common), so they naturally map to longer form queries correctly.
Results will be similarity from 1 (closest) to 0 (furthest) across your list of possible matching paragraphs for a query. You can use the average similarity of the term to all documents with asymm embeds as a proxy for overall presence of the term in a semantic abstract way, and compare across terms to find prevalence.
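A minimal sketch of that setup with sentence-transformers (the MS MARCO model name is just one example of an asymmetric model trained on search data):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("msmarco-distilbert-base-v4")

paragraphs = [
    "Paragraph discussing equal pay and representation of women in the workforce...",
    "Paragraph about regional cheese production...",
]
para_emb = model.encode(paragraphs, convert_to_tensor=True)

query_emb = model.encode("gender equality", convert_to_tensor=True)
scores = util.cos_sim(query_emb, para_emb)  # shape (1, num_paragraphs); closer to 1 = more similar
print(scores)
```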
Emanuel Huber#5410: From your experience with those products (product review), do you think they are good enough or there is space for improvements? I tried a couple, but they only returned a review/post sentiment, without extracting aspects, and I missed this feature.
Rajko Radovanović#8407: I’m the actual end user here haha, not trying to develop a commercial application, but make sense!
ois#8544: thanks @marmiteCloud , I'll check that out =)
marmiteCloud#5923: I don't have recent enough experience to say about today sorry - sounds like a good direction.
ppt#0965: GPT-3 seems to have a 4000 token context length limitation. What alternative big language models could I consider to process longer text (e.g. 20,000 tokens)?
Miraxe#6858: Hi! is there any correct way to choose candidate labels in zero-shot NLP classifier ?
cakiki#9145: LongT5 can go up to 16K https://huggingface.co/models?other=longt5
nashid#0071: Started a thread.
Robert1#0234: how can I promote a token or bias model towards a token? Lets say I really want the model (e.g gptj) to generate "hello" more often.
vad13irt#0534: Training large models such as Transformers is a very hard task because it requires a lot of memory and time even on modern GPUs. For this reason, I collected popular and powerful methods for reducing memory consumption and speeding up training in one notebook.
The notebook was published in kaggle because it provides free GPU and also a comfortable interface for exploring:
https://www.kaggle.com/code/vad13irt/optimization-approaches-for-transformers
Enjoy!
vad13irt#0534: Additionally, I implemented most of the described methods with HuggingFace Transformers libraries.
NULL#3726: hoii
NULL#3726: do you know any such optimizer for dalle mini?
vad13irt#0534: I didn't train DALLE, but heard about shampoo
JonathanSum#8528: Than do you know any such optimizer for shampoo?
vad13irt#0534: no sorry
Bleso#6085: Hello everyone, I am putting together a study group where we learn how to build AI applications with large language models using a no-code approach. Ping me if you're interested.
NathanJw#1862: hello. I am looking for a model for sentiment analysis trained on conversations between two people (call center). Do you know of one other than AWS?
alighte#0403: First of all, I was literally on the internet looking for this the moment I saw it. And it was very helpful thank you! There was no implementation for Dynamic Padding so I'm working through it but exactly what I needed!
CrossProduct#5905: in the vanilla transformer, what is the output dimension of the linear layer for WO when concatenating all the heads' outputs together? is it a hyperparameter and arbitrary?
CrossProduct#5905: https://jalammar.github.io/illustrated-transformer/#:~:text=The%20score%20is%20calculated%20by,product%20of%20q1%20and%20k2. the Wo matrix that is trained jointly with the model.
CrossProduct#5905: nevermind it's the embedding dim lol
lbourdois#8829: @CrossProduct
Step 1 : dimensions for Query, Key, and Value matrices https://cdn.discordapp.com/attachments/922424173916196955/989070576742187028/unknown.png
lbourdois#8829: Step 2 : You calculate your Z https://cdn.discordapp.com/attachments/922424173916196955/989070723702210560/unknown.png
lbourdois#8829: Step 3 : Let's do it again for all our heads (8 in Jay's blog post).
We obtain 8 matrices Zi of dimension 64 and then we concatenate them: 8×64 = 512.
So Wo is 512 × 512, i.e. (number of heads × head dimension) × the model's embedding dimension; it does not depend on the input length (for BERT base the embedding dimension is 768, for example)
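A toy sketch of those shapes (8 heads of dimension 64 and a model dimension of 512, as in Jay Alammar's example; note Wo is independent of the sequence length):
```python
import torch

batch, seq_len, n_heads, d_head, d_model = 2, 10, 8, 64, 512

z_heads = [torch.randn(batch, seq_len, d_head) for _ in range(n_heads)]  # one Z per head
z_concat = torch.cat(z_heads, dim=-1)         # (batch, seq_len, 8*64 = 512)

W_o = torch.randn(n_heads * d_head, d_model)  # (512, 512), independent of seq_len
out = z_concat @ W_o                          # (batch, seq_len, d_model)
print(out.shape)
```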
Arsive02#8749: Hello, any suggestions on what can i do for the following scenario?
I have a mail thread ( Conversation using mail by replying to the same thread )
For example:
Thread 1 : I need to know the details of this application.
Thread 2: ( Explanations about the app) , Would you like to subscribe?
Thread 3: What is the price?
Thread 4: Its 10$/month
Thread 5: It's reasonable price.
Thread 6: Do you want to subscribe to this?
Thread 7: I would, yes. Whom do i get in contact with?
Thread 8: Our rep will contact you soon. Thanks.
Output: "Would like to Subscribe because of reasonable price"
In short, I need my model to be aware of the whole thread's context so that I can use it to find what happened and the reason behind the result.
Are there any models that can handle long-term dependency / context awareness? Because the reason behind the final result might be back at thread 3. Or is there a better way to approach this problem?
CKtalon#7792: there's only one problem: context length (if you are using most transformer models and your emails are super long).
Otherwise, it's just a problem of formatting your data (with a clear indicator between threads), and then maybe studying which key terms the model attends to when producing the result
CrossProduct#5905: hey yeah I learned at the end of it by looking at the pytorch code that essentially the output must match the input dimensions. I also noticed he made a mistake in the diagram with the feedforward layer as well. Probably gonna shoot him a message.
CrossProduct#5905: also thank you!
alighte#0403: I tokenized my data and it does not have a `labels` column. However, this fails in training because my outputs don’t have a real loss. Can someone help me understand what could be the issue? The only tokenized columns in my data are `['text', 'input_ids', 'token_type_ids', 'attention_mask']`. The goal is also code generation.
selea#8026: I'm a bit confused about text classification with BERT.
As I read it, BERT assigns every token a vector embedding with N dimensions.
Afterwards we shove that into a linear classifier.
But if I understand everything correctly, if we shove text into BERT, we'll get a matrix with N*L dimensions.
What precisely goes to the linear classifier?
mr_seeker#1337: You might have it tokenized for CLM training (eg GPT-2), not for MLM training (eg BERT)
Jackson-ComputerScience#9409: I am interested
ramonzaca#1703: Answer: You are correct. You end up with N * L. That's why in the classification task, a pooler is added to select the hidden state of only the first, special token [CLS], which then goes to the linear classifier (at least by default) https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L656
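A rough sketch of that pooling step (the actual BertPooler also applies a dense layer and tanh on top of the [CLS] hidden state; the model name and number of classes here are just examples):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(bert.config.hidden_size, 2)  # 2 classes, for example

inputs = tokenizer("This movie was great!", return_tensors="pt")
hidden = bert(**inputs).last_hidden_state  # (batch, L, N) where N = hidden size
cls_vector = hidden[:, 0]                  # hidden state of the first ([CLS]) token
logits = classifier(cls_vector)            # this is what the linear classifier actually sees
print(logits)
```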
selea#8026: So, if I have a big text where I need to classify chunks depending on their role in the text (for example, I need to extract descriptions of persons), can I use several [CLS] tokens?
alighte#0403: See that's what I thought too, but I'm using InCoder (https://huggingface.co/facebook/incoder-1B); they use, what they propose, a causal masking objective, which is a combination of CLM & MLM. Since it's generation, however, this would be more in line with GPT, as opposed to BERT? Maybe I'll restructure my training to not require labels. The tutorial was for BERT. That may be it.
ramonzaca#1703: If I understand, what you are looking for is closer to the Token Classification task, where you classify each token into a set of categories. https://huggingface.co/tasks/token-classification
selea#8026: I've seen it.
But I have a bit of a different task.
That example is about classification of each token.
I need to classify subsequences in a sequence of tokens, depending on their role in the whole sequence.
So, I'm trying to figure out how to do it.
selea#8026: However, if it works with several [CLS] tokens, I guess the problem is solved theoretically.
selea#8026: Practically, it's not recommended to feed BERT more than 250 tokens, while I need to work with sequences of 100,000-500,000 tokens. Which isn't nice.
ramonzaca#1703: You can indeed classify tokens and reform "sub-sequences" based on labels grouping
selea#8026: Oh, thank you!
ramonzaca#1703: Can you split said sequences?
selea#8026: If I split them, I'll lose context, which is crucial for correct classification. It's even worse: one subsequence might be around 4000 tokens.
selea#8026: Interesting, if convBERT will work.
selea#8026: Also, if I understand correctly, I can use Bert results to perform named entity recognition on same sequence? I just need to use different linear classifier.
sin yee#3513: Hi everyone, how do I use BERT embeddings and adapt them to my CNN model? I don't want to use the whole BERT model.
First, do I need to extract the embeddings layer?
```
BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
```
sin yee#3513: I have experience with GloVe, fastText, and word2vec. Basically we just load the embedding and return the embedding matrix. Is this the same flow for BERT?
luckynozomi#2333: yeah you can just use the word_embeddings or the output from the whole embeddings layer
but I am not sure if that will work well, because bert embeddings aren't trained alone with a specific objective in mind
Another option (if you just don't want to fine-tune the whole big BERT model) is to extract the `last_hidden_state` as in the transformers library: the "embeddings" of each sub-word of the last transformer layer. But note that these "embeddings" vary for each token based on input sentence context.
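A sketch of that second option, using BERT as a frozen feature extractor whose `last_hidden_state` feeds a separate CNN (the model name and downstream handling are assumptions):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()  # keep BERT frozen; we only use it to produce features

texts = ["I love this product", "Terrible experience"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    features = bert(**batch).last_hidden_state  # (batch, seq_len, 768), context-dependent

# e.g. a Conv1d downstream expects (batch, channels, seq_len)
cnn_input = features.transpose(1, 2)
print(cnn_input.shape)
```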
Toucan#3067: https://cdn.discordapp.com/attachments/922424173916196955/989758672341192704/unknown.png
Toucan#3067: Anyone understand what n_components does? why 20
Asher1.1🇮🇳#7026: Explained in the documentation
sin yee#3513: oraits~ about the `last_hidden_state` .
What do you think about this resource: https://gist.github.com/shubhamagarwal92/37ccb747f7130a35a8e76aa66d60e014
Vladimir#0583: Hello everyone, aside from reading the chapter about LLMs in "Natural Language Processing with Transformers" book can you recommend a must-read overview paper about LLM discussing trade-offs, limitations, future directions etc. Much appreciated.
NULL#3726: Does anyone know any good model to categorize emails
NULL#3726: Im having a lot of spam stuff and google spam doesn't work well
cakiki#9145: Naive Bayes 😄
NULL#3726: I'm sure Google's spam handling would be much better than that and that's still failing :| or will it be more finetuned for my emails?
alighte#0403: For domain adaptation, why is it the case that copying the `['input_ids']` column as the `['labels']` column serves as the ground truth for the model to learn from? Wondering before I do the same for finetuning.
sin yee#3513: this is the architecture of my TextCNN model.
To check the bottom-line accuracy, my teacher requested that I remove `conv1d_list`.
Which means **the embedding layer directly connects to the fully connected layer/dense layer.**
May I know if this is doable? 😂 https://cdn.discordapp.com/attachments/922424173916196955/990197716569169970/unknown.png
blueberryπ#4623: Does anyone have a guide to creating custom heads on a pretrained bert model
Jiaming Kong#5528: Hi, I am wondering if anyone has successfully further pretrained mBART model for new languages? If so could she/he kindly point me to some instructions? Thanks
Rikz#9184: hey guys I'm getting this error while trying to do dynamic padding on my tokenized dataset for text generation:
`Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.`
this is my code:
```
tokenized_datasets = raw_train_datasets.map(tokenize_function)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
train_dataloader = DataLoader(tokenized_datasets, batch_size=16, shuffle=True, collate_fn=data_collator)
for batch in enumerate(train_dataloader):
    print(batch["input_ids"].shape)
```
can anyone help?
cakiki#9145: Please use the #ask-for-help channel
RektKid#0638: Hello guys! I'm currently working on a NER/POS problem. However, I have a problem whereby I may have 2 entities that aren't separated by space
e.g. suppose the sentence has this text `.....James.London.....`
I would have labelled the chars associated with `James` as a PER (person) and `London` as a LOC (location). Yet, if I simply split this via `.split()` they would still be together.
Does anyone know how to approach such a problem? Would be grateful for any ideas! 🙏
Jiaming Kong#5528: If you have a lot of these entities `James.London` or `Tom.NewHaven`, you might as well create a new type of NER tag called PERLOC, for example, then mark the whole word as PERLOC, and do some post-processing on it
ARRiel#1221: Question for people who did some LLM/transformer fine-tuning (specifically for classification). If I run the HF `Trainer` with mostly default arguments, should I expect it to learn about as well as possible? Or is some hyperparameter tuning generally required to actually get good performance?
My specific situation right now is that I'm fine-tuning a Bert and the accuracy plateaued at like 57%, which isn't amazing. That being said, there is a possibility that the current input data isn't sufficient to get good predictions, so I'm not sure if I should work on this model a bit more, or I should just add more text to each datapoint
cakiki#9145: i don't think you can expect defaults to inherently be suited to your task. Try finding a paper with a task similar to yours and see what hyperparameters they used.
muskannnnn#9204: Question: I want to create a social bot that is empathetic (EmpatheticDialogues dataset) and knowledgeable (Wizard of Wikipedia). Could someone please direct me to a resource/research paper wherein I can learn how to create such an open-domain bot/get boilerplate or reproducible code?
Jiaming Kong#5528: Hi, if you are tuning some LLMs, definitely try some warm up periods (i.e. let the trainer have linearly increasing learning rate from 1e-7 to target rate in 1000 steps or so), large learning rate plus small batch size sometimes could cause catastrophic forgetting in these models. There is a great paper called "Transformers without tears" and they talked about it in detail
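With the HF `Trainer`, that warm-up can be configured directly in `TrainingArguments` (a sketch; the numbers are just examples):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,          # target rate reached at the end of warm-up
    warmup_steps=1000,           # LR ramps up linearly over the first 1000 steps
    lr_scheduler_type="linear",  # then decays linearly afterwards
    per_device_train_batch_size=8,
)
```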
Arsive02#8749: Anyone know how to extract the words from a sentence that have been paid the most attention (i.e. have the highest attention weight)?
let's say I am looking to extract the top 3 words from a sentence of length 10
AdrienB#9362: Hi everyone,
I haven't yet found anything, is there an implementation of D3PM for text in HuggingFace or PyTorch please ?
This is the text generation model of Structured Denoising Diffusion Models in Discrete State-Spaces (NIPS2021).
Ancalagon#7777: Is it possible to add new tokens to a pretrained GPT-2 model without completely screwing up the model’s current performance?
It’s my understanding that adding new tokens causes the token part of the model to be “resized”, essentially destroying it
Ancalagon#7777: My intention is to add some new tokens prior to finetuning a pretrained model
Jiaming Kong#5528: Hi, I have done that once or twice, so basically what you need to do is 1) change the tokenizer, pay attention to some special tokens attached to the end of the dictionary if there should be any; 2) copy out the original weights (encoder and decoder as well), change the shape to accommodate the new tokens, then just copy the original weight to the new weight; 3) before finetuning you can try to initialise the word embedding randomly or with some other averaging tricks; May I also ask which GPT-2 pretrained model are you targeting? some of them already have a very large sentencepiece dict and should be able to just cover everything
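For the common case, the transformers API wraps most of these steps up; a minimal sketch (GPT-2 and the token strings are just examples):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

num_added = tokenizer.add_tokens(["<my_new_token>", "<another_token>"])
# Existing embedding rows are preserved; only the new rows are freshly initialised.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens, new vocab size: {len(tokenizer)}")
```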
z3rkn#6630: Started a thread.
Rocket#3809: Hey all! We're exploring adding end-to-end models for TF, where the tokenizer is compiled into the model itself. It's probably not that helpful for training, but we think it might have a lot of use for people who want to serve or deploy an NLP model without needing transformers/tokenizers to be installed, especially to edge devices. https://twitter.com/carrigmat/status/1542163487551520770
Rocket#3809: If you're interested, let us know! We're currently at the stage of figuring out if users actually want this, so whether we push to expand and polish this depends on how the community feels about it.
Rocket#3809: We're particularly interested in specific use-cases, so if you want to use this but you need a feature that we don't fully support yet (e.g. tokenizers for a specific model, TFLite compilation), tell us!
Lawful#8117: Hi, I'm not sure exactly where to put this so I hope this is a good place. I used the pretrained "DialoGPT" model from Python's transformers module, but I was wondering how I can give this AI a context? Like make the AI know what its name is?
cakiki#9145: What would the point be of adding this context?
Lawful#8117: tbh only to experiment, but also to see if it somehow makes responses more natural
Ancalagon#7777: I've experimented with providing backstory context to a chatbot trained on Discord logs, and it didn't work well. I tried injecting them into the generation prefix as if the bot had sent those messages itself recently into the channel talking about itself (the bot's job, where it lives, what its aspirations are, etc.), but it never worked well
Ancalagon#7777: And this GPT-2 chatbot actually improved text generation quality after I removed the backstory injects. I'm very familiar with working with GPT-2 and in managing its generation prefix to get it to behave in certain ways, so I'm perplexed at why it didn't work as expected
Lawful#8117: interesting...
Lawful#8117: so for the moment it's kinda hard for a chatbot to keep a "backstory"
Ancalagon#7777: I only tried the backstory chat injects with 124M and 345M, though now I'm using 774M. So it's possible the larger models are better at using that information, but I haven't tried again
Ancalagon#7777: I recommend trying it though
Ancalagon#7777: Just try to have your generation prefix injects be in the same general format as the rest of your generation (so if you're working with chat logs, make it appear to the bot as if it recently talked about facts about itself)
Lawful#8117: I see, makes sense. Although now I have a question: I've been experimenting with DialoGPT-large. I still haven't trained it on anything, but it seemed like adding a little bit of input telling it that it was an AI for some reason worked. Do you think it would be better to use GPT-2 rather than DialoGPT?
Ancalagon#7777: Nah, go for it
Lawful#8117: one last question, how did you train the AI? I mean yeah, with Discord logs, but I always thought it needed to have some kind of format?
Lawful#8117: cause rn, its responses are still kinda vague, and not natural
Malphabet#8872: Hi all, I am currently trying to build a dense retriever with Huggingface Transformers, but I'm struggling with training the two separate BERT-based models for similarity. Are there any tutorials or notebooks, either for DPR or for training two BERT encoders for vector similarity? I'm relatively new to NLP and Huggingface. Thx!
Arsive02#8749: Is there a way to extract the decoder attention weights alone ?
Ancalagon#7777: I'm not familiar with DialoGPT, but GPT-2 is powerful enough that you don't need any specific input format. Just pick an input format you like, and ensure you are consistent with it
Ancalagon#7777: Try to cut out unnecessary chat log entries, such as people sending !bot commands or links in chat, code blocks, etc. You want to avoid wasting the model's limited storage/generalization ability on trying to memorize extra stuff like links and other unimportant things
cakiki#9145: I would look into the sentence transformers library: https://www.sbert.net/docs/training/overview.html
marmiteCloud#5923: What are the lines for `tokenize_function` & `tokenizer`? Probably set padding=True in the args to the tokenizer. Some models need padding for sequences to be understood by the model in the right dimension; e.g. if a model or data loader expects dim 512, you need a tensor with padding 0 values and any special tokens (101/102/CLS/SEP etc.) within it.
I presume you have a reason to use a DataLoader; if not, you can use AutoTokenizer and batch yourself. In some cases sorting by string length then batching will be readable and competitive on speed.
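One possible shape of `tokenize_function` consistent with this advice (a sketch that assumes a `"text"` column and reuses the variables from the snippet above): truncate during `map`, drop the raw text column, and let `DataCollatorWithPadding` pad each batch dynamically.
```python
def tokenize_function(examples):
    # Truncation here; padding is handled per-batch by DataCollatorWithPadding.
    return tokenizer(examples["text"], truncation=True)

tokenized_datasets = raw_train_datasets.map(
    tokenize_function, batched=True, remove_columns=["text"]
)
```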
Lawful#8117: in auto training which option should i choose, question answering?
cakiki#9145: Please use a thread, or preferably #ask-for-help for implementation questions
Lawful#8117: oh ok, sorry
Arsive02#8749: ```
output = model(tokenized_dataset['input_ids'][0:1])
attention = output[-1]
attention.shape  # -> [1, 12, 128, 128]
```
I was able to extract the attention weights. There are 12 heads, and each head gives a different representation of the query-key transformations.
I am doing a classification task and I need to find which tokens are responsible for contributing to the classification; that is, the tokens which get the most attention.
But since there are 12 heads I couldn't figure out how to do this, because each head gives different attention values. I would like to know if averaging works, or if there's anything I am missing here. There has to be a layer which kind of pools it, right?
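One common heuristic (not the only one) is to average over heads and then look at how much attention the [CLS] position pays to every token. A sketch reusing the variables above, assuming `attention` is one layer's tensor of shape (batch, heads, seq, seq) and that the matching `tokenizer` is available:
```python
import torch

avg_heads = attention.mean(dim=1)   # (batch, seq, seq): average over the 12 heads
cls_to_tokens = avg_heads[0, 0]     # attention from [CLS] (position 0) to every position
top_values, top_indices = torch.topk(cls_to_tokens, k=3)

input_ids = tokenized_dataset["input_ids"][0]
top_tokens = [tokenizer.convert_ids_to_tokens(int(input_ids[i])) for i in top_indices]
print(top_tokens, top_values)
```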
yp_yurilee#5781: Potentially dumb question: why is it that T5 isn't compatible with AutoModelForSequenceClassification? The paper frames classification problems as generating the classification, but is there a reason why we can't use the classification head approach on T5?
baldbaliff#2861: I think because it uses text to text. I am sure you can manually take off the head of the decoder or just use the encoder.
mr_seeker#1337: @Lawful bit late to the party, but we at koboldAI using 13B models to create chatbots with their own personality.
mr_seeker#1337: We give them a backstory, then give a conversation starter and then its generating the rest.
muskannnnn#9204: hello! i want to use a tts api for my english teaching chatbot which can help the learners with their pronunciation queries ("how to pronounce ecstatic", for example). could yall recommend such an api which i can use with python?
muskannnnn#9204: nvm, found it on hf. hf ftw.
Victor B.#8127: Guys, how could I set up TensorBoard for an HF model? I'm training a Brazilian Portuguese ASR model, and I would like to have TensorBoard logs/callbacks.
Victor B.#8127: I've tried but I think I may be plugging the wrong classes.
Omar Sanseviero#6198: There is a TB callback (https://huggingface.co/docs/transformers/main_classes/callback) that you can use. (FYI for #questions please try to use #ask-for-help )
Victor B.#8127: I saw that one... but I was not able to make it work ```tb_writer = SummaryWriter``` on the Training_Arguments or Trainer Class.
doink#3458: https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-harkous.pdf can you suggest ideas on how to optimize or come up with alternative ideas to do things in a more simpler manner or efficient manner?
Cesus#4515: I am reading the book “Natural Language Processing with Transformers”, and run the notebooks from GitHub nlp-with-transformers”. I got an error when run 02_classification.ipynb. At cell 66 trainer.train(), it throws a runtime error: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (When checking argument for argument index in method wrapper_index_select)
Cesus#4515: Got the same error on colab and sagemaker studio lab
Cesus#4515: The notebook only works with a CPU machine. Of course very slow
Cesus#4515: https://cdn.discordapp.com/attachments/922424173916196955/993729955940794368/IMG_2698.jpg
Leandro von Werra#9428: Hi @Cesus, yes this is a bug - if you run `!pip install transformers==4.13.0` it should work. A fix is on the way: https://github.com/nlp-with-transformers/notebooks/pull/57
darragh#5870: My understanding of it is that because T5 is an encoder-decoder causal model, typically causal models are not as well suited to classification tasks as they were pretrained to generate coherent sentences; as opposed to MLM, which are pretrained to fill in gaps based on the input. So in pretraining, causal creates something new based on the context - MLM corrects the input. Some of the earlier causal models like gpt2 have classification outputs in HF, but they do not perform as well as the non-autoregressive models like BERT and Roberta on classification tasks. However, you could add your own (copying the SequenceClassification head from another model) and try it out...
Cesus#4515: It works. Thanks.
Savitar#1884: Hola people, I'm confused by a statement in this paper: https://aclanthology.org/2021.acl-long.563.pdf
In this paper they're talking about having the NMT model generate the n-best list, then applying a reranker.
My confusion: how can an NMT model generate the n-best list of translations of a source sentence?
lbourdois#8829: Hi,
I'm responding on my lunch break so I don't have time to read the whole paper, but just by seeing the authors' names, they're probably using a beam search.
Marc'Aurelio Ranzato was invited to give a lecture in the course of Le Cun & Canziani last year. I think you'll find it interesting:
- the video: https://www.youtube.com/watch?v=fR42OOy9ROo
- the website: https://atcold.github.io/NYU-DLSP21/en/week12/12-1/
And I think this article from the HF blog does a good job of explaining the different methods of generating text (which can easily be extended to audio if the model used uses a LM): https://huggingface.co/blog/how-to-generate
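Concretely, beam search with `num_return_sequences` is one way to get an n-best list out of a seq2seq model in transformers (a sketch; the model name and sizes are just examples):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Keep 5 beams and return all 5 hypotheses: the "5-best" list a reranker could score.
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=5, max_length=64)

for i, seq in enumerate(outputs):
    print(i, tokenizer.decode(seq, skip_special_tokens=True))
```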
Savitar#1884: Thanks, will go through the links.
kagankorkmaz#7630: Hello all, I created auto-question generation from a text by first generating keywords/keyphrases using KeyBERT and then generating questions from these keywords based on the text using a fine-tuned T5 (https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap). What I want to do is make these questions multiple-choice by generating distractors similar (but not too similar) to the keywords/keyphrases. How can I do that? Thanks.
Arsive02#8749: Hey there, I am trying to get the final attention values that contains all the information about every other attention heads.
```
model = DistilBertForSequenceClassification.from_pretrained("custom_model_name", output_attentions=True)
output = model(input_ids=tokenized_text['input_ids'], attention_mask=tokenized_text['attention_mask'])
attention = output[-1]
```
This code gives me the attention of all the 12 heads. I would like to get the final attention output that is passed to the feed-forward network.
According to this pic, they concatenate all the attention heads and multiply them with a corresponding weight matrix. But how do i achieve the same with hugging face ? Or is there a way to get only the final values instead of all the attention heads?
It'll also be great if there's a way to get the weight matrix so that this process could be done manually.. https://cdn.discordapp.com/attachments/922424173916196955/994859533560205424/Screenshot_2022-07-08_at_12.19.30_PM.png
Miraxe#6858: Hi guys, how are you?
how does zero shot give results for candidate labels that don't exist, for example "asaoksaok" ?
doink#3458: I want to answer questions related to privacy policy in an automated fashion.
Here is one sample question, Does the policy allow personally-targeted or behavioral marketing on Github?
I want to answer this question, probably the answer I am looking for would look like this:
We do not host advertising on GitHub and we do not sell your personal information. We use User Personal Information and other data to make recommendations for you, such as to suggest projects you may want to follow or contribute to. For example, when you fill out an interest survey at account creation, we learn from it — as well as from your public behavior on GitHub, such as the projects you star — to determine your coding interests, and we recommend similar projects. These recommendations are automated decisions, but they have no legal impact on your rights.
This comes straight from the privacy policy of GitHub. I am curious to know what possible methods I can use to answer this question (I am a noob in NLP, so curious to learn about approaches): would semantic search using transformer-based methods work, or what about having some NER-based method or something else?
muskannnnn#9204: Hello!
I am building an entity classifier for my language learning bot
There are about 15 entities, including:
```
Greet
Find_the_meaning
Give_the_translation
Tell_A_Joke
Correct_My_Sentence
Word_Of_The_Day
Recommend_A_Book
Recommend_A_Video
Give_A_Writing_Prompt
```
I myself will create a database for human utterances and the related prompts. However, I'm afraid that I won't be able to get more than 300-400 examples.
Do you still think I'll still be able to fine-tune the BERT on a multi-classification task of classifying the intent?
Christopher#8030: Yes! BTW, things like this are generally referred to as Dialog Acts not entities
Oviawe#5559: Hello
Oviawe#5559: How can one measure Coherence scores for Transformer models
Slinae#1774: @lewtun Hi, Lewis, I plan to translate chapters 4,5,6, and 7 into Simplified Chinese (zh-CN), and will complete it as soon as possible.
Robert1#0234: I would like to train GPT-J with Hugging Face and PyTorch. I have an 80GB GPU machine and access to multiple GPUs. But I still run out of memory very easily. Can people point me roughly to what I should research and use to achieve this.
luckynozomi#2333: read this page and the "train on many GPUs" immediately after
https://huggingface.co/docs/transformers/perf_train_gpu_one
Robert1#0234: thanks
aaronrmm#3198: Actually, T5 is a Seq2SeqLM and not a CausalLM (or at least loads as such). I don't know the difference between the two and came here hoping to find out. As in
```
The class AutoModelWithLMHead is deprecated and will be removed in a future version. Please use AutoModelForCausalLM for causal language models, AutoModelForMaskedLM for masked language models and AutoModelForSeq2SeqLM for encoder-decoder models.
```
RektKid#0638: Hmm if we fine tune a pretrained BERT model, the input embedding layers will be changed as well right? (Although I guess not for positional and segment embedding?)
wasooli#5317: Hey, new to huggingface here, I had some doubts and it'd be great if someone could help me out-
My final objective is to fine-tune GPT-NeoX for NER on my own data, but I don't have the resources for that at hand. So I want to test it first by using a smaller HF-hosted CausalLM model, say GPT-2. I notice some options in GPT-2 for token classification that are absent in NeoX.
1. I want to know what's the difference between just the Causal LM models and these; I assume it's because they are fine-tuned for it already, but am not sure.
2. I could not find any helpful guides for fine-tuning a causal LM model for NER; most of them use AutoModelForTokenClassification (is that the only way to go about this?), so if anyone could guide me to one it'd be great
muskannnnn#9204: Thank you for the confirmation!
And yes, sorry, dialogue acts. 🤦♀️
Also, a humble follow-up: are **dialogue acts** and **intents** synonymous terms?
Christopher#8030: “Intents” might well be an equally good or even better term here, but I think they’re not quite synonymous. “Intents” is normally used for the classes of requests in a task-based dialog system, whereas “dialogue acts” are used to classify all sorts of turns in human dialogues, and might include things like “elaboration” [of the previous point] or “example” [illustrating the previous claim].
muskannnnn#9204: I see, both these terms are different, thank you for the clarification!
Oviawe#5559: Hi
when using sentence transformers, is it necessary to preprocess the text first?
muskannnnn#9204: No, this article has a fairly straightforward way of generating sentence embeddings using sentence transformer.
https://huggingface.co/sentence-transformers
```py
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('paraphrase-MiniLM-L6-v2')
#Sentences we want to encode. Example:
sentence = ['This framework generates embeddings for each input sentence']
#Sentences are encoded by calling model.encode()
embedding = model.encode(sentence)
```
𝘾𝙤𝙨𝙢𝙞𝙘𝙇𝙡𝙖𝙢𝙖#1205: Quick question for those who are familiar with the inner workings of the `EncoderDecoder` model: When I compare the max likelihood training outputs for a particular set of `input_ids` to the text generated using the `generate` function with those same ids, they are quite different. Why is this? My understanding was that the beam search in the `generate` function is grabbing the max likelihood iteratively as it generates, using past tokens and the hidden states to make its predictions, so it would make sense for them to diverge as the sequence progresses. However, not even the first token matches, which is odd since I assumed only the hidden states would be considered when predicting that first token.
sudo#4266: Hi there, is there any guide on doing gradient accumulation with AutoModelForSequenceClassification?
sudo#4266: Ah I see this now: https://huggingface.co/docs/transformers/perf_train_gpu_one#gradient-accumulation
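For reference, with the `Trainer` it boils down to one `TrainingArguments` flag (a sketch; the values are just examples):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,  # gradients are accumulated for an effective batch size of 32
)
```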
sudo#4266: General question: I am trying to learn how the innards of compute_metrics works by printing out the input so I can be sure my accuracy metrics is right.
Is there a way to train on 1 sample in a batch to do that?
Merve#3234: I participated in writing the TF side and under the hood it uses the metrics in datasets library (or evaluate, as it's released recently)
sudo#4266: yeah i am just trying to prove to myself that the function i wrote for compute_metrics is doing so accurately so i was hoping I could just run it on 1 sample and output the preds/labels to double check.
luckynozomi#2333: how about you copy the sample multiple times and train on these examples? Imo that's the easiest way if there is no way to train on just one sample XD
pat9393#8935: Hey everyone, my first questions here. As most people here I am pretty excited about BLOOM. Although I knew it would fail I tried to use it in Colab but of course the checkpoint is way too big for the resources there. Unfortunately I do not have any computing power at my disposal that is anywhere near the size it would need to load BLOOM. Therefore my question:
1. Does anyone know if huggingface is planning to do something like an embedding API? Like the one from OpenAI.
Thanks a lot!
cakiki#9145: BLOOM is an autoregressive model, it can't be used to compute embeddings
muskannnnn#9204: Bloom keeps amazing me every day!
I am building a rule-based chatbot in the domain of English language learning.
But I want it to fall back on an autoregressive model if the user makes an utterance that can't be handled by any rules. Do you think I should use bloom for this task?
What transformer do people generally use for chatbots? https://cdn.discordapp.com/attachments/922424173916196955/997451887861301328/Screenshot_2022-07-15_at_3.59.59_PM.png
muskannnnn#9204: Hello! I want to encode my sentences for intent classification. Should I use SBERT for the task or BERT?
Merve#3234: it depends. if you want to model it as a text classification, you need to go with BERT. (which is a simpler approach compared to encoding sentences imo)
if you want to model it using similarity, you should go with SBERT.
I solved many intent classification problems and I'd say if you don't have high cardinality in your classes, you should use BERT.
muskannnnn#9204: Got it, thank you very much! 😄
sudo#4266: I have a pretty basic question but is there a way to split up dataset by an array of indices? Or is it expected to use pandas, do it there, then call Dataset.from_pandas?
sudo#4266: I am trying to split up my training and validation set by a column using GroupShuffleSplit.
andreaschandra#4851: you mean something like this ?
```python
datasets = Dataset.from_pandas(df).train_test_split(test_size=0.2)
```
andreaschandra#4851: I usually assign to a variable for each split
sudo#4266: .select was what i was looking for
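A sketch of combining sklearn's `GroupShuffleSplit` with `Dataset.select` (the `dataset` variable and column names are assumptions):
```python
from sklearn.model_selection import GroupShuffleSplit

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(splitter.split(dataset["text"], groups=dataset["group_id"]))

train_ds = dataset.select(train_idx)  # keeps only the rows at these indices
val_ds = dataset.select(val_idx)
```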
Oviawe#5559: Hello, how can one calculate Topic Diversity for a Topic Models using Sentence Embeddings
Robert1#0234: I'm using DeepSpeed to load large models into memory when training. Why do some people use TPUs? What is the advantage / disadvantage compared to DeepSpeed + A100?
Fish Curry#2757: I am using a GPT-2 model to generate text. During training I feed the model a concatenation of context+target, and during inference I pass in the context and the model predicts context+target. In training, I would like to modify the loss function to calculate the loss between the logits and the target only; I don't want to have the loss for the context predictions. Any tips on how I could do this? It would be great if you could point me to some resources that might help me do this.
John Cordeiro#8937: Is there someone using bloom for contextual question-answering tasks? For example, I have a website's content and want to answer questions in that context and avoid getting responses outside it.
marmiteCloud#5923: Not Bloom, but I have used BART, finetuned T5 and GPT models in the past for that task. What are you working on John?
gg333#3895: Hi all, I've deployed a m2m100 model for machine translation using fastapi. Sometimes, anyway, it raises a RuntimeError: CUDA out of memory. Tried to allocate 6.69 GiB (GPU 0; 22.38 GiB total capacity; 11.86 GiB already allocated; 20.70 GiB reserved in total by PyTorch). Any idea to solve it?
gg333#3895: (at inference time!)
John Cordeiro#8937: I've fine-tuned a BERT model with the SQuAD dataset translated into Brazilian Portuguese for the Question-Answering task, however, I'm experiencing very short answers and in the use case (chatbots) I need a more natural and long answer which can be generated. I've been wondering if fine-tuning BLOOM for that would be a good alternative.
Robert1#0234: am I right that using multiple GPUs to train a model which has batch size of 1 is pointless because all the speedup comes from breaking a batch into chunks and running in parallel.
Robert1#0234: its not even possible I think?
Eliel#4676: Hi everyone!
A bit of an unusual request. I'm a scientist that is doing a podcast (+ real-world experiments) with a film data analyst about the future of storytelling in the era of AI. We'd like to talk to a Hugging Face team member/s deeply involved in NLP to have them as guests in the podcast. Do you have any suggestions about who I should contact? Happy to tell you more in a DM here if you are an interested team member.
marmiteCloud#5923: Yeah SquaD is designed for short, concise answers. There are not many longer answer datasets, except Yahoo Answers, Quora datasets I would think (they may be too long). Not sure about Bloom, but would be keen to hear what you find if you do...
Robert1#0234: do people reckon bf16 is significantly better than fp16 for training? I heard it is in theory but wondered what people see in practise?
INF800#5205: Hi everyone, and experienced NLP practitioners. I want to build a question answering engine for the greatest (at least one of many) poetic composition in Sanskrit characterized by decorative elaboration - the Valmiki Ramayana (https://www.valmiki.iitk.ac.in/sloka?field_kanda_tid=1&language=dv&field_sarga_value=1). The webpage has the Sanskrit (Devanagari, Hindi, ... 11 languages) text along with an extremely short description and a word-by-word explanation.
What is the best way to build the question answering engine? Can I use BLOOM for this?
PS. I will be converting these texts into visuals by DALLE2 (I have access to it).
Note: I love literature and this is one of greatest original works on history/poem in Sanskrit (itihasa/maha-kavya).
**Self funded project :hugging_rocket: **
Omar Sanseviero#6198: Haystack could be interesting for this. THe NLP Book has a chapter building a QA engine with it
sMili#6973: guys I need help, I have 2 questions: how can I load a BLOOM model as bfloat16, and on GPU with offloading to RAM?
sMili#6973: I am using BloomTokenizerFast and BloomForCausalLM from Hugging Face
Omar Sanseviero#6198: Hey there! We have #ask-for-help for particular questions such as this @sMili. A better place is actually the Community tab in the bloom repo since the authors are there!
nan#3481: nice job!
Shubham Saboo#6898: Excited to announce the release of my O'Reilly book "GPT-3: Building Innovative NLP Products using LLMs". If you are interested in building products on top of OpenAI API and turning them into successful businesses. This book is the go-to resource for you, so don't wait and check it out now 🚀
https://twitter.com/Saboo_Shubham_/status/1551130871276584960
𓅬 gabriel_syme 𓅬#3220: Hello team! Can I ask why or when the NewT5 models will be publicly available? I can see the org and I followed the implementation issue on github. Everything seems to be there but models are not available. Wonder if I missed anything. Cheers!
𓅬 gabriel_syme 𓅬#3220: Oh nvm I guess they were released within the Google org page?
INF800#5205: Hi, thank you for the kind information. I've been following deepset for some time. Will definitely use Haystack! I'm a big fan of their Elasticsearch code.
I'm stuck at one point though. I want to mine Wikipedia pages to increase my pretraining dataset. Is there some FOSS tool available which can mine all the connected pages in Wikipedia highlighted by blue text hyperlinks?
INF800#5205: Hey Shubham, will definitely go through the book. Just wanted to know one thing - what is the parameter count of the model that you will be using in this book? Definitely not 175B, right?
INF800#5205: One small suggestion though - I would suggest you add a contents/index page on the Amazon product page.
muskannnnn#9204: Hello, I have fine-tuned BERT on sequence classification using torch..could someone please share code/reference page on making inferences using this model? Thank you!
Robert1#0234: are gpt2 tokens the same as gpt neo 125m tokens?
Alan__#7521: Hi everyone, has anybody worked on the bloom model? I don't really understand how it works. Thanks
Thomas Simonini#8611: Hey there! I didn't work on the model, but some resources that helped me understand it were the model card https://huggingface.co/bigscience/bloom that gives you some first information. And also this resource: https://huggingface.co/blog/bloom-megatron-deepspeed
Alan__#7521: Thank you Thomas, I will go investigate tasks other than text generation. I didn't see methods to fine-tune this model
muskannnnn#9204: Hello people!
I am performing sentence classification using DistilBERT.
I've got 2 classes: One has 1200 examples, the other, around 1700.
muskannnnn#9204: Is the class imbalance significant enough to matter, or should I go ahead and train the model?
Omar Sanseviero#6198: You could downsample the larger class and see the results. I would maybe also do the other training with all data and then compare side by side
muskannnnn#9204: That makes sense, thank you!
These are the metrics using the imbalanced dataset:
Train Accuracy: 0.996
Val Accuracy: 0.946
Test Accuracy: 0.933
Will downsample the larger class and see if the results get any better.
UPDATE: turns out most of the test errors were false positives, which is quite detrimental for my model. downsampling and equal balance approach it is!
muskannnnn#9204: hello again!
Is there a way to find the similarity between a query and an answer? I'm building a retrieval based bot. This screenshot is of SBERT Sentence Similarity, which didn't give ideal results. https://cdn.discordapp.com/attachments/922424173916196955/1001791468018597899/Screenshot_2022-07-27_at_3.30.03_PM.png
Shubham Saboo#6898: It is 175 Billion, GPT-3 via the OpenAI API. Also, you can find the index/table of contents by clicking on look inside at the book image on Amazon!
INF800#5205: Thanks!
INF800#5205: Is it free? Do the prices look affordable?
ai#1933: You can check the models here. The qa ones are trained on question answer pairs and might do better
https://www.sbert.net/docs/pretrained_models.html
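For example, a rough sketch with one of the QA-trained models from that page (untested, model name taken from the list):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("multi-qa-mpnet-base-dot-v1")

query = "What is the capital of France?"
answers = [
    "Paris is the capital and largest city of France.",
    "I had eggs for breakfast.",
]

q_emb = model.encode(query, convert_to_tensor=True)
a_emb = model.encode(answers, convert_to_tensor=True)
scores = util.dot_score(q_emb, a_emb)  # higher score = more relevant answer
print(scores)
```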
muskannnnn#9204: Thank you very much for sharing!
INF800#5205: The way I do it is to use the sentence-transformers project to minimise an objective on custom generated data
muskannnnn#9204: Makes so much sense to tailor it to my project's needs.
Could you give me any estimate on the minimum no. of examples in the custom data you generate?
sMili#6973: guys today i see the bloom 1.3B distilled and distilled x10, there is some plan to do the same with biggers versions?
Oviawe#5559: Hi
Oviawe#5559: What pretrained models on Hugging Face can be used for heart sound classification?
Dr.Inc#8332: Going through the Natural Language Processing with Transformers chapter 7 on question answering. The authors talk about a few types of question answering, like community and tabular QA. I was wondering if there are more types of question answering.
preeminentchaos#7450: okay, I need opinions on whether it would be possible to build an AI model that animates manga chapters from "Director Instructions" to connect images and context so the storyline makes sense.
muskannnnn#9204: hello people!
when we are doing intent classification on a custom task (dataset made by us), what is the baseline that we should compare it to?
myishere#4080: I am building unsupervised question answering over all our SharePoint documents, but I am not getting good accuracy. Any suggestions? The issue is that the right context is not retrieved as the answer to the question.
Manel ALOUI#8001: Hi everyone, I'm working on a project that's designed to extract some relevant information from legal documents, like dates, parties, prices. I don't have a dataset in hand and I'm a little bit stuck on the method to use. I saw some transformers in this domain on huggingface. Is there anyone who has worked on a similar project before and could help me with that? I have some questions and would be delighted to find someone who could help me
muskannnnn#9204: Hello, community!
I am trying to classify TEDTalks into English CEFR levels based on the complexity of the TED talk transcript.
Any suggestions on other ways of modelling "how complicated a TED talk is?" (especially for a person who is watching the talk to specifically learn English)?
cakiki#9145: interesting task! What have you tried so far? (Or what are you thinking of trying 😄 )
muskannnnn#9204: currently i have downloaded this dataset (it has ted talks + transcripts): https://www.kaggle.com/datasets/miguelcorraljr/ted-ultimate-dataset
then i ran the transcripts through a CEFR level predictor (a model already on someone's github and hosted on streamlit): https://amontgomerie-cefr-english-level-predictor-cefr-predictor-34m9g0.streamlitapp.com/
muskannnnn#9204: now i'm looking up research on encoding the complexity of educational videos to improve on the above baseline. will be grateful for any advice!
cakiki#9145: i'm really curious what a simple tfidf matrix with a linear classifier would achieve
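Something like this minimal sklearn sketch, assuming you load the transcripts and CEFR labels yourself (placeholders below, untested):
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# placeholders: replace with your TED transcripts and their CEFR labels
texts = ["transcript of an easy talk ...", "transcript of a harder talk ..."]
labels = ["A2", "C1"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["another transcript ..."]))
```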
muskannnnn#9204: sounds intriguing, will definitely look into it, thank you!
NULL#3726: do we have an NLP model to generate bad 1 star reviews ?😅
NULL#3726: I wasted so much time in thinking to write something long enough 😂
ppt#0965: How can I understand whether an answer answers a question?
I have data containing pairs of questions and answers in a conversational setting.
Some of the answers do provide an answer to the question, while others may just be getting started or include irrelevant information.
For example:
Answered:
"What is the capital of France?" - "Paris."
Not answered:
"What did you have for breakfast?" - "Let me think about it. Is today Tuesday?"
How could I understand whether a question has been answered? I could build a dataset and train a classifier, but I fear that I would need to collect a lot of data.
vanviper#5081: You can design some expected cases/forms of answers by observing and analyzing accepted answers, and then check whether the collected answers match those expected forms
orgrim#1636: Take a look at the natural questions dataset.. it has long, short and yes/no answers if i remember correctly
orgrim#1636: I'd suggest experimenting with a model trained on MSMARCO from sentence-transformers.. build a small test dataset and check if a score threshold lets you do this
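e.g. a rough sketch with a cross-encoder trained on MS MARCO (untested, model name from the sentence-transformers docs):
```python
from sentence_transformers import CrossEncoder

# scores (question, answer) pairs for relevance
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

pairs = [
    ("What is the capital of France?", "Paris."),
    ("What did you have for breakfast?", "Let me think about it. Is today Tuesday?"),
]
scores = model.predict(pairs)
print(scores)  # pick a threshold on a small labeled dev set
```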
marmiteCloud#5923: Whether the answer is contextual? Above suggestions.
Whether the answer is true? RAG, domain-specific finetuning, eliminate low-confidence judgements as possibly false if possible to generate many answers for each question
sin yee#3513: anyone know if the FastText crawl 300d 2M vectors are all in lower case? any easy way to check?
https://worksheets.codalab.org/rest/bundles/0xe5c8c81715594eaaa32de75ac210dd29/contents/blob/
https://www.kaggle.com/datasets/yekenot/fasttext-crawl-300d-2m
muskannnnn#9204: Hi Chris! I tried TFIDF and SGDRegressor.
Made it a regression task where I converted the continuous values to classes.
I got 0.8172383881316656 accuracy.
Most of the errors are one-off misclassifications (since the boundaries between English proficiency levels are blurry).
Thank you for your advice!
muskannnnn#9204: https://cdn.discordapp.com/attachments/922424173916196955/1005039307171582013/Screenshot_2022-08-05_at_2.36.45_PM.png
cakiki#9145: Nice!
Robert1#0234: if I train with
```python3
5 GPUs and per_device_train_batch_size of 8
8 GPUs and per_device_train_batch_size of 5
```
so my overall batch size is the same -- should performance be roughly the same (subject to some noise, as with a random seed)?
Robert1#0234: it seems to me the 8 GPU version is not even much faster
cakiki#9145: But still faster?
Oviawe#5559: Hi everyone
Oviawe#5559: Each time I run a topic model using BERTopic I get a different set of topics
Oviawe#5559: How do I fix it?
Oviawe#5559: Or are these models always like that?
Robert1#0234: yeah still a bit faster
Robert1#0234: do you think using more GPUs but the same batch size will change performance? (so a smaller per-device batch size so the overall batch size is constant)
muskannnnn#9204: Hello everyone!
So I have been developing a chatbot for English language teaching. it is a goal-based bot with intents like: find the meaning, antonyms, synonyms, reading skills exercise, writing skills exercise
I predict the user-intent using a bert classifier (i created the intent dataset on my own), do entity extraction with regex and use external APIs like WordNet. I have also web scraped loads of English passages for reading exercises. Actually, I want to present my chatbot for a research conference.
However, today, I finally used GPT 3 (Very early, I know)
Most of the things which I have done for my chatbot can be smoothly answered with a single prompt on GPT-3.
So, my question is, should i dismantle all of my code and the chatbot pipeline that I have presented in the research paper and replace it with GPT 3.
Or should I keep my Chatbot Pipeline as it is, as it shows how I hacked my own chatbot?
This is my first research paper, so kindly provide your opinion, thank you!
Rikz#9184: Is it somehow possible to fine-tune nlp models dynamically for different datasets and then use them all respectively?
For example, there is an UI where we can upload our dataset and then from that dataset the model will be fine-tuned, and later the user who fine-tuned the model with their dataset can use it for their purpose.
if this is possible, what would be an optimal approach?
sin yee#3513: If all data were lowercase, will it affect NER?
Is casing important to NER?
NULL#3726: https://github.com/ivan-bilan/The-NLP-Pandect
Slick#7454: Do you know how to train it just to learn off a specific dataset? I'm still stuck on that 🥲
Slick#7454: I fine-tune it for 8 hours and then ask it a question similar to what I fine-tuned on and it doesn't know. Even if I ask it a question I fine-tuned on directly
Rikz#9184: which model did you fine-tune, and did you clean the dataset?
Slick#7454: I’m trying to fine tune gpt neo and I just used a big dataset I found online of math problems. I’m just following this article:
Slick#7454: https://happytransformer.com/save-load-model/
Slick#7454: It’s not a fancy dataset or anything it’s just
“What comes next in the sequence 1, 2, 3, 4?
5
What comes next in the sequence 10, 15, 20, 25?
30”
A huge text file of that. Using text generation btw. But then I ask it "what comes next in the sequence 1,2,3,4,5" and it doesn't know
Slick#7454: I’ll give you my code if needed
Slick#7454: Also I don’t care if it’s with happy transformers I’m just using them cause their simpler but I can use any packages or libraries or anything needed :)
Rikz#9184: I'm not much experienced myself fyi but if you can send your code maybe I can try to see if you're doing something wrong
Rikz#9184: plus bert would be a better choice for QnA imo
Slick#7454: I’ll dm you my code so u can see
Slick#7454: Alright i sent everything your way, any help is appreciated and I don’t mind changing any code or using a different extension or whatnot!
Rikz#9184: I guess we should always keep the dataset all lower case prior to feeding it to the model
cakiki#9145: It depends on the model. The important thing is to stay consistent throughout your pipeline. (Use the same preprocessing steps for training and inference).
cakiki#9145: but you can't generate text (and therefore answers) using BERT
Rikz#9184: from what I know, BERT doesn't generate text or answers, it just finds the answer in the context provided
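e.g. something like the standard extractive QA pipeline (a quick sketch, untested):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(
    question="What does extractive QA return?",
    context="Extractive QA models return a span of the provided context rather than generating new text.",
)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```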
cakiki#9145: Ah good point! (I keep forgetting about extractive QA, i.e. using BERT with spans like you described)
cakiki#9145: I only had generative QA in mind, my bad!
Rikz#9184: ah no worries, i myself keep forgetting all the things, there's too much to remember
Arsive02#8749: I was reading this paper https://arxiv.org/pdf/2205.10770.pdf and I came across these experimental results.
TL;DR
1. For Causal Language modeling, LLMs tend to memorize faster as the model size increases.
2. For Masked Language modeling, LLMs **initially** memorize **slower** but later reach memorization faster https://cdn.discordapp.com/attachments/922424173916196955/1006513354862497912/Screenshot_2022-08-09_at_4.11.36_PM.png,https://cdn.discordapp.com/attachments/922424173916196955/1006513355193864222/Screenshot_2022-08-09_at_4.11.50_PM.png
Arsive02#8749: Any idea why this happens for Masked Language modeling ?
sin yee#3513: Has anyone worked with NER before? Which tool did you use for NER annotation? :hugging_cool:
cakiki#9145: I've used doccano before and it was super easy to set up locally (https://github.com/doccano/doccano)
sin yee#3513: yeah, I used it for annotating line text before. Is it convenient to use for NER?
cakiki#9145: It is!
sin yee#3513: Btw, has anyone read this? https://www.researchandmarkets.com/reports/5547080/natural-language-processing-nlp-market-global
It looks very interesting but very costly. Worth reading?
Thomas Simonini#8611: Clearly not worth $2500 🤯 . For two reasons:
1. Nobody knows what the market will look like in 5 years; the world is too complex to predict over that timeframe. It reminds me of the "specialists" who explained after the dot-com bubble that the internet was dead 🤦 and people should not invest in it
2. You can find the same type of information in free or cheaper articles (Business Insider, Forbes, etc.)
muskannnnn#9204: hello hf community!
I am comparing the gramformer (https://huggingface.co/spaces/prithivida/Gramformer) and a gpt-3 fine-tuned by me for the task of grammatical error correction.
what dataset should i use for this comparison?
sin yee#3513: About GloVe 300D word embeddings...
E.g. apple represented by [0.1 , 0 , 0 , 0.8 .... 0]
total 300 columns there, **how to find out the column headers**?
0.8 could probably mean 'apple' is 80% related to 'fruits'. so fruits is column header.
sin yee#3513: How good is the BERT tokenizer?
I'm using pretrained BERT for the 1st time & found something weird here.
The original word 'demonstrator' was split into 3 tokens that have different meanings.😮
1. Will this affect the model performance?
2. what's the function of '##' here? https://cdn.discordapp.com/attachments/922424173916196955/1007129244138225704/unknown.png
B2HAN#7196: it's not something weird actually. The BERT tokenizer tries to find the longest sequence that is available in the token list. In that case 'demonst' is not a token, which is why 'demons' is separated from 'tra' and so on.
B2HAN#7196: ## indicates that this token is not a new word; it is separated as a new token for the reason I mentioned
B2HAN#7196: However, make sure you are using the same tokenizer the model was trained with. It will affect performance if your tokenizer is not the same one used in training; otherwise it is fine.
MvW#1425: As B2HAN wrote, the "##" are used to say "I am the continuation of a word". The purpose of subword tokenization is to reduce the size of the vocabulary. For example, in English, the morpheme "able" in "readable" and "comfortable" is the same. So we could consider "able" as a separate token in our vocabulary. If we tokenize by space, then the words "train", "trains", "trained", "training", and "trainable" would be considered different inputs in our vocabulary. But if we split by morpheme, then they become "train", "train"+"##s", "train"+"##ed", etc. Subword tokenization methods such as wordpiece or BPE are made to try to automatically (in an unsupervised way) generate a vocabulary of morphemes.
Since they are unsupervised, you can consider them as heuristics, so not perfect. In https://arxiv.org/pdf/2004.03720.pdf, they make a very nice comparison of the results between BPE and Unigram LM which are a very simple and a more complex subword tokenization techniques. And though it might impact the model's performances, I keep seeing recent publications using simpler methods, as they are much faster and don't seem to impact that much the quality of the downstream model.
To go even further, several papers tried getting rid of subword tokenization (https://arxiv.org/abs/2105.13626), mainly because it supposes a morphology of consecutive characters (which is not the case for many languages such as Arabic, Hebrew, or Amharic which use infix morphemes).
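To see it in action, a quick sketch with the standard BERT tokenizer:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("demonstrator"))  # subword pieces; continuations are prefixed with ##
print(tokenizer.tokenize("trainable"))
```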
Morizo#1234: Excellent explanation about subword tokenization. Thank you
MvW#1425: My pleasure 🙂
EddyGiusepe#6796: Hello everyone!
Can someone tell me if AutoNLP is paid?
For all I know, it's from Hugging Face, ... I thought it was free.
EddyGiusepe#6796: https://huggingface.co/autotrain
cakiki#9145: It is a paid product, yes
EddyGiusepe#6796: @cakiki , Thanks for the information!
EddyGiusepe#6796: Hello, everyone!
Does anyone know of a repository for Binary Text Classification?
I'm wanting to build a model to classify very large texts (for example: Fraud and not-Fraud) and then try to put it into production.
EddyGiusepe#6796: Thank you!
Sangeetha Venkatesan#0414: Hello team, what's the best way to load the model? model_cluster = ClusteringModel('./Clusteringmodel', 'transformermodels','all-MiniLM-L6-v2.tar.gz')
Something like this, where I have a class that fetches the model tar file from S3 and then calculates embeddings with it (without SentenceTransformer), or to have it downloaded from the Hugging Face hub like this (model = SentenceTransformer('paraphrase-distilroberta-base-v1'))? I have to run this in an AWS Lambda. Any help would be appreciated
mz#9453: I have the same question. Have you got an answer?
yepster#9326: I'd try to find a few recent papers that would match the topics your research would be about, look at the evaluation and metrics used, apply it to your setup and if the findings are > sota for comparable model sizes, bingo 🙂
Ayenem#2103: Can someone explain to me the damping factor `d` in the TextRank formula? `S = d*S.M + (1 - d)` where `M` is the adjacency matrix of nodes.
It's explained in Mihalcea & Tarau (2004) as "the probability of jumping from a given vertex to another random vertex in the graph" but it doesn't quite sink it for me.
Please ping me if you reply
NULL#3726: https://cdn.discordapp.com/attachments/922424173916196955/1009287973474402405/unknown.png
NULL#3726: https://cdn.discordapp.com/attachments/922424173916196955/1009288005888004176/unknown.png
NULL#3726: https://hyperwriteai.com/
NULL#3726: Any dataset to try this on GPT2?
|
doink#3458: Hi, wanted to hear your take on privacy policy analysis for business compliance regulation.
https://www.producthunt.com/products/pribot here is a product I came across which does summarization of privacy policies; this is a research project by university professors. Curious to know whether this can be useful to businesses for compliance regulation?
muskannnnn#9204: Hello community
So i trained a text classification model on bert for 10 epochs and it yielded 100% train, test and val accuracy.
Granted, I had ample examples and trained on 10 epochs, is 100% train, test and val accuracy a red flag🚩 in any way?
cakiki#9145: Huge red flag; could be a data leak
muskannnnn#9204: 😳 i see
will take a look at that: something definitely happened while splitting the dataset
nidaks#9777: Hi everybody,
I am working on a student project and I aim to classify stories just by comments which are left below. Comments have a tree-like structure and therefore I am looking for a way to leverage that instead of just concatenating text and using that as input. I would like to use a pre-trained model but I am not sure how that will work since I didn’t find any model that works with nested text structure on input. What are the options that I have and what is the best practice for this kind of problem?
cakiki#9145: Can you show us an example of what the comments look like?
nidaks#9777: Reddit stories and comment structure are in question
V3N0M#5374: Is it possible to train any sort of text generation language models into conversational model (like what was done with DialoGPT?)
I do know that having a chat like prompt with bloom makes it act kinda like chat bot.
But, I want to know if there is a possibility like how DialoGPT was done
V3N0M#5374: Since bloom was trained with more data than gpt2 that DialoGPT is originally based on, I want to test it for the needs I have
jahid#9951: Is it possible to use NLP in game development? If yes, what is the scope for researchers to do some further study in it? Any suggestions
Omar Sanseviero#6198: Yes, but the scope of this is huge 🙂 there are many ways to apply NLP in game development, from generation, to intent understanding or analyzing text. cc @Thomas Simonini has done a bit on this
fynn_matt#2586: Does anyone have a good recommendation for a labeling tool to use for text classification & NER? A tool that integrates natively with datasets & transformers would be great if one exists
cakiki#9145: Not sure anything exists that integrates natively with the HF ecosystem, but I've had good experiences with running `doccano` locally
fynn_matt#2586: Awesome, thanks @cakiki will take a look. Thought it would be nice if a labeling tool could read & write to datasets directly. Haven't found anything like this out there though
cakiki#9145: That's a great idea; I wonder whether someone at HF is already working on that. Maybe @Omar Sanseviero knows? 😃
fynn_matt#2586: Thanks! Sounds like a fun side project, might try to hack something together
Omar Sanseviero#6198: Hey there! Different labeling tools have direct integration with Datasets. Rubrix is a good example of it for example. Gradio also has utilities to automatically save data to a dataset
Omar Sanseviero#6198: https://gradio.app/using_flagging/#the-huggingfacedatasetsaver-callback
fynn_matt#2586: Thanks @Omar Sanseviero ! Will check out Rubrix and this guide with Gradio as well
Omar Sanseviero#6198: Sure thing! ❤️
Thomas Simonini#8611: Hey there, yes as @Omar Sanseviero mentioned there are some projects you can work with. I think the easiest way to start is to use sentence similarity to let players type orders to an NPC (non-playable character): given a user prompt, the model searches for the most relevant action the NPC can take.
https://stadia.dev/intl/fr_ca/blog/creating-game-ai-using-mostly-english/
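A tiny sketch of the idea with sentence-transformers (the action list and model name are just examples, untested):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
actions = ["open the door", "attack the enemy", "follow the player", "give a potion"]
action_emb = model.encode(actions, convert_to_tensor=True)

order = "hey, could you heal me up?"
order_emb = model.encode(order, convert_to_tensor=True)
scores = util.cos_sim(order_emb, action_emb)[0]
print(actions[int(scores.argmax())])  # most relevant action for the NPC
```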
Nelis#2802: Hello
It is not so easy to finetune huggingface transformers in a Google colab notebook
Like gpt2.finetune
Do you know where I can find easy tensorflow or pytorch code for a Google colab notebook
I mean python code
ultimawar#9712: Has anyone seen any interesting papers on exploring the significance of attention tensor values outputted by the attention heads? I remember the BERTology paper did a bit of this work.
jahid#9951: Thank you so much for the reply. That's helpful 🙂
jahid#9951: Thank you 😊 it's helpful, I will do some study on those things a bit to understand more about it.
mz#9453: Hi all, I have a question about the legality of synthetic data generation by models such as GPT-3 or Bloom? Is such synthetic data legal? Or is it considered a derivative work?
Boomer#5628: Hello, I am working on wav2vec models. I must fine-tune a French model: "facebook/wav2vec2-large-xlsr-53-french" and I get the following problem.
As a reminder, the model transforms an audio input (a vector of shape (1, 30094) in my case) into a matrix of logits of shape (1, 93, 49), where 93 is the number of characters and 49 the number of possible characters in the vocabulary (a, b, c, ..., z, 0, ...). The problem is that when you hear someone speak there may be blanks (possible character: blank). When I compare this vector of 93 values with the real tokenized sentence of 19 values it does not match. Do you know how to do it?
Boomer#5628: ```
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
import torch.nn as nn

device = "cuda"
model_name = "facebook/wav2vec2-large-xlsr-53-french"
model = Wav2Vec2ForCTC.from_pretrained(model_name)
w2v_processor = Wav2Vec2Processor.from_pretrained(model_name)
loss_fn = nn.CrossEntropyLoss()
# Compute prediction error
x  # input waveform, shape (1, 30094)
y  # tokenized target sentence, shape (1, 19)
pred = model(x).logits  # shape (1, 93, 49)
pred = torch.argmax(pred, dim=-1)  # shape (1, 93)
loss = loss_fn(pred, y)
```
NULL#3726: How would someone organize WhatsApp group chat data if I want to make a chatbot?
NULL#3726: the text flow isn't always sequential; someone replies to some old message from any previous point in the conversation
firebridge#9480: Hi! Could someone help me on what architecture or model can be used to rank the _meaningfulness_ of a sentence, either standalone, or within a certain context?
Put simply, how could I find out the probability that a user input is mumbo jumbo, or how much it has deviated from the context of what we are talking about at the moment?
For example, when someone is drunk or high on meth, the input will be very different from normal; how can I tell them apart?
Similarly, the same input "that girl is tall" would have a very different ranking when we are talking about basketball and when we are talking about programming.
firebridge#9480: You can try inserting the prompts `Me: ` and `You: ` at the beginning of each line, for example:
```
prompt = """Me: I ate an apple.
You: """
```
You should also include the recent history of the conversation in every input to the model.
I have tried this with Bloom-3b and it works quite well, I should say.
V3N0M#5374: ohhhh
V3N0M#5374: Imma try that
firebridge#9480: Have fun!
madcow#9059: How does one use tokenizers with the inference api?
For example, I'm trying to translate text using `facebook/mbart-large-50-many-to-many-mmt`, but the languages here are set in the tokenizers. Are there any parameters in the HTTP API for controlling this? I can't seem to find anything on the topic
Samarth Garg#9321: Hello guys, I am Samarth Garg. I want to learn more about multimodal models using transformers, can someone help or suggest some good references?
V3N0M#5374: yep!
This did help out
V3N0M#5374: So, what was the point of creating DialoGPT in the first place, when this was possible?
Did they just train GPT on a chat dataset and name it a new model? 🤣
firebridge#9480: Yeah, best is to fine tune the model with focus on chats and dialogues. This will give a much more convincing result, especially in industrial applications where there is always a limited and well defined scope of conversations.
V3N0M#5374: Now i have another doubt
When i was reading on fine-tuning conversational models (BlenderBot and DialoGPT) the training dataset included a row with response and 'context'.
Why is that so?
Is it just learning each row just as if it was some text?
MvW#1425: Hi Samarth,
For multimodal work (mainly text + image), maybe take a look at models like ViLBERT and openAI CLIP which were made to match text to images (or part of images). You can find good summaries of both models on the AI coffee break channel (https://youtu.be/dd7nE4nbxN0 and https://youtu.be/dh8Rxhf7cLU), before going in depth in the papers, if needed.
If you want to get more into image generation from text, then you'd have to look more into diffusion models or combinations of diffusion models with transformers, like what Google Imagen does (https://youtu.be/xqDeAz0U-R4).
I hope that helps :).
Samarth Garg#9321: Thanks for the resources @MvW but with multimodal I meant that my model takes an image and text as input and has a classification/regression kind of objective.
MvW#1425: Oh, I see. Do you have an example of application in mind?
Samarth Garg#9321: So there are some food images and reviews given and the task is to identify the class of the food; it's the UPMC Food-101 dataset
Metroproxyn#2769: Hello everyone,
What is the advantage of Gensim compared with other libraries such as FastText, Word2Vec, etc?
MvW#1425: I am not aware of a transformer combining multimodal data, I am afraid. More of applications going from one modality to another, or matching them.
What I would do is use transfer learning on the image side only (using one of the latest state-of-the-art model on imagenet: https://paperswithcode.com/sota/image-classification-on-imagenet). Then use transfer learning with a transformer (perhaps https://huggingface.co/docs/transformers/model_doc/deberta-v2#transformers.DebertaV2ForSequenceClassification, as it does pretty well on SuperGLUE: https://super.gluebenchmark.com/leaderboard/) on the text only.
Finally, see if I can get better results by combining them. Which means, you concatenate their encoder outputs, and feed it to classifier head. What do you think?
Samarth Garg#9321: I was thinking of using vision transformer to extract the image embeddings and then concatenate it with the text embeddings from Bert like model. After that training it using early fusion technique.
Samarth Garg#9321: But I am finding it difficult to do it using the datasets and transformers library.
nidaks#9777: Hi, I want to classify stories into two categories by only looking at the comments below the story. If I concatenate all the comments to use as input, that string often has more than 20k words. I wanted to use the pretrained BERT base model from transformers, but there is a constraint that the max input length is 512 tokens. What should I do: should I train on every comment separately, pick the most informative comments first based on some heuristic, or maybe use some summarization technique first? Any advice or clarification regarding this topic is very useful, thank you 🙂
Yulong Wang#2299: Depends on your data distribution. I usually try to do simple cleaning first (e.g., filter out short comments) and train a model to classify every comment. In evaluation, use a hard voting strategy to get a final prediction.
Robert1#0234: https://github.com/huggingface/optimum
I want to speed up inference of GPTJ using ONNX. Is the optimum library sufficient or will I have to do something more involved to get best results?
MarkOConnor#6175: Did anyone here work on BLOOM? I noticed it uses weight decay for the word embedding weights - was this an intentional decision? Did you run any experiments to compare with/without?
MarkOConnor#6175: I've seen people get surprisingly good results by encoding each comment separately and then averaging the encoded vectors to produce a "summary" vector over all of the comments. Would definitely try this.
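Roughly like this (untested sketch; model name and data are placeholders):
```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def story_vector(comments):
    # encode each comment separately, then average into one "summary" vector
    return encoder.encode(comments).mean(axis=0)

# placeholders: one list of comments per story, plus story labels
stories = [["great twist", "loved the ending"], ["boring", "did not finish"]]
labels = [1, 0]

X = np.stack([story_vector(c) for c in stories])
clf = LogisticRegression().fit(X, labels)
```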
MarkOConnor#6175: LM perplexity gives a measure of how "expected" a continuation is for any given language model, but whether it would be useful for spotting such subtle differences as deviating from the topic even for a very large LM is an interesting question in its own right imo. Alternatively and somewhat amusingly I would expect GPT-3 to perform quite well at this in a zero-shot setting. Give it a few examples of meaningful / meaningless continuations and ask it to spot the non-sequiturs!
MarkOConnor#6175: Are you aware of Anthropic's work here (https://transformer-circuits.pub/ )? Definitely interesting albeit in models modified to be more interpretable, but not exactly what you're asking for.
MvW#1425: Have you looked into the course (https://huggingface.co/course/chapter1/1) ? I used to struggle like you, and it really helped me. There are chapters on transfer learning with pre-trained models, and the datasets library.
gg333#3895: Hi! While using the m2m100 MT model (from HuggingFace), I'm facing a problem with some sentences that are translated wrongly. Specifically, I obtain a translated sentence that is basically a loop ("God is the God of the world, the Lord of the world, the Lord of the world, ....") which is also not related to the original sentence. Any idea?
had#8070: just checking, are any translations in your dataset correct? would be interesting to know the percentage of the weird ones
ultimawar#9712: This was very useful! Thank you!
Pötiküvi#7506: Hey peeps, I am trying to export an onnx version of my roberta model, which I've built via pytorch lightning, however, for some reason, onnx perceives my 2 input model as a 3 input model. Furthermore, if I go and do a workaround for my two input model, and add a third input and pass a dummy variable, I get the error in the screenshot above. I am suspecting that my definition of forward, and having the loss calculation in the forward step is throwing a wrench into things, anyone able to confirm or suggest a fix is super appreciated! thanks^ https://cdn.discordapp.com/attachments/922424173916196955/1015576208865513512/Screen_Shot_2022-09-03_at_13.38.13.png,https://cdn.discordapp.com/attachments/922424173916196955/1015576209146527754/Screen_Shot_2022-09-03_at_13.38.46.png,https://cdn.discordapp.com/attachments/922424173916196955/1015576209368821850/Screen_Shot_2022-09-03_at_13.56.16.png
gg333#3895: actually yes!
gg333#3895: in the ARBML discord they're saying to insert no_repeat_ngram_size=1 to fix it...
ultimawar#9712: how are you exporting the model? Are you using torch.onnx.export?
ultimawar#9712: if you're following the pytorch-lightning paradigm you would keep loss calculations within your training and validation steps: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#training
azam#6545: Question/suggestions: Hi all, any beginners material for sentence pair (2 input) based transformer fine tuning using Huggingface.
azam#6545: Similar to sbert - Siamese bert style two input and one output- classification or regression
azam#6545: I could find simpletransformer and sentenceTransformer but having a pipeline for trainer just for it would be the icing on the cake!!
azam#6545: https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb
azam#6545: There is one notebook for the same from hugging face but quite complex in pytorch any dedicated trainer pipeline could ease the task and open new doors
ponchatoula#4556: Anyone familiar with transformer-xl? I'm confused about the training phase where the author mentions a cached previous segment which corresponds to the 'mems' parameter; do we have to compute this ourselves?
Moritz Laurer#9812: If anyone is interested in a new dataset for multilingual zero-shot classification with NLI, here is a new dataset and model on the HF hub: https://twitter.com/MoritzLaurer/status/1567822462343159810
deseipel#1066: Questions: I have a model I trained and I've set the repetition penalty to like 100 and it still repeats stuff. What's going on? https://huggingface.co/docs/api-inference/detailed_parameters#conversational-task
Interaction Team#8616: Hi, do you know how to translate with bloom? I tried instructions like "translate" but the model doesn't always respect the instruction, or it doesn't translate the full sentence.
Bilalexander 'The Great'#1098: Can anyone suggest models for metaphor analysis?
Robert1#0234: for the onnx profiler I see all my time is spent in MemcpyFromHost
anyone know what this is?
matthijs#4697: It copies data to the GPU.
Robert1#0234: but I am doing all my inference on the GPU? looks like something is wrong and its not
matthijs#4697: Your data still needs to be copied from the CPU to the GPU.
Robert1#0234: MemcpyToHost: this is fine and I assumed this is copying to the GPU
matthijs#4697: I might be remembering this wrong but the Host is the CPU, I think.
Robert1#0234: Host = cpu I guess in this context
Robert1#0234: ```python
binding.bind_input(
name=input_onnx.name,
device_type=device,
device_id=device_id,
element_type=torch_to_numpy_dtype_dict[tensor.dtype],
shape=tuple(tensor.shape),
buffer_ptr=tensor.data_ptr(),
)
binding.bind_output(
name=out,
device_type=device,
device_id=device_id,
)
binding.synchronize_inputs()
model_onnx.run_with_iobinding(binding)
binding.synchronize_outputs()
```
Robert1#0234: although I am pretty sure I am binding both inputs and outputs to GPU so it shouldn't be copying to or from a device
luckynozomi#2333: As in the onnx runtime documentation: https://onnxruntime.ai/docs/performance/tune-performance.html#iobinding
> The key idea is to arrange for inputs to be copied to the device and for outputs to be pre-allocated on the device prior to calling Run().
I don't think the memory copying can be totally avoided
Robert1#0234: hmm but since both my input and output items are already on a device I would not think it required much work. the input tensor is already on cuda:0
Samarth Garg#9321: hello everyone, I want to extract a specific type of sentence, for instance formal, rude, helpful, etc., from a review. Can anyone suggest any techniques/models/papers?
Thanks in advance!
jd007#9518: Hey folks
What are some strong / state of the art ways to augment text data to generate additional training examples ?
The ones I’m aware of:
a) Random Insertion
b) Random Deletion
c) Synonym replacement
d) TextAttack library.
Context: Imbalanced Class Distribution in data - for e.g. product descriptions
firebridge#9480: How about plain non-sense random characters / words sentences such as “D as asdf qwer the” or “Monkey car air pope would” this kind?
Samarth Garg#9321: you can also check for backtranslation.
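A rough sketch of backtranslation with MarianMT models from the hub (model names are examples, untested):
```python
from transformers import pipeline

# round-trip translate EN -> FR -> EN to get paraphrased training examples
en_to_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
fr_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

def backtranslate(text: str) -> str:
    fr = en_to_fr(text, max_length=256)[0]["translation_text"]
    return fr_to_en(fr, max_length=256)[0]["translation_text"]

print(backtranslate("The product description was unclear and shipping took too long."))
```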
Robert1#0234: what library do people use for softprompt tuning of huggingface models?
Stefan#3853: @jd007 you can generate adversarial examples, e.g. as described here https://arxiv.org/abs/2009.04007
jd007#9518: Thanks for the pointer @Stefan
andret#1280: Hi all, does anybody know of any good grammatical error detection models? Doesn't have to be a transformer based model. Also doesn't need to correct, just for analysing errors (number of errors for example)?
cakiki#9145: @manu.romero trained an error correction model recently, let me try to find it
andret#1280: GEC doesn't work very well for me I think, because I just want to get analytics for input text - how many grammar errors inside the input. I think GEC might generate synonyms for example "normally" to "generally" or other stuff like these. Do you agree with that?
I was actually thinking of using a dictionary of all words maybe? 😄
MvW#1425: If it's only for error detection, you could look for models which have been trained using CoLA (https://nyu-mll.github.io/CoLA/). For example: https://huggingface.co/textattack/roberta-base-CoLA
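A minimal sketch with that checkpoint (untested; check the label names on the model card before relying on them):
```python
from transformers import pipeline

cola = pipeline("text-classification", model="textattack/roberta-base-CoLA")

sentences = ["She don't like apples.", "She doesn't like apples."]
for s in sentences:
    print(s, cola(s))  # each result has a label and a score
```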
andret#1280: Hmm, interesting classification task. But I am not sure of the usability of this, if I want to measure number of mistakes for example. I think I need to use something like languagetool or a GEC tool which doesn't rewrite, like this (https://github.com/grammarly/gector) from Grammarly.
MvW#1425: A model trained with CoLA could be used to detect segments (perhaps sentences) which are grammatically incorrect. But it doesn't look like it has the granularity you are looking for.
ratatui#0101: Hello everyone is there any channel or forum post for openai/whisper discussion?
I am having issues transcribing long videos (20-30 min) where decoding starts to be nonsense repetitions after some duration.
Does anyone have experience transcribing long audio/video durations with openai/whisper?
JonathanSum#8528: I am planning to do a portfolio website, so I want to add some Javascript Pytorch model examples over there.
Thus, I want some small NLP model.
Is there any small, powerful NLP or NLP QA model that anyone can suggest to me? DistillBertQA is from 2017, which is old. You may suggest T5 Tiny, but it is old.
p_barr#7880: Somewhat random Monday thought but curious if anyone has seen a lightweight locally hosted Grammarly / text processing plugin. I am a Grammarly user and value it to patch my fast typing mistakes but effectively am granting access to all keystrokes to cloud.
ayaka#7682: https://twitter.com/ayaka14732/status/1574627912162349056
lewtun#4548: Hi folks, I'm happy to share a research project about **few-shot learning with language models** that I've been working on with colleagues at 🤗 and Intel. We've also open-sourced a library that lets you train our models with a few lines of code 👉: https://github.com/huggingface/setfit
tl;dr we found a way to apply pretrained Sentence Transformers in regimes where one has little labeled data. The method is illustrated in the attached figure, and involves a two-stage training process:
- Fine-tune the Sentence Transformer with a few labeled examples (e.g. 8 per class) using a contrastive loss
- Freeze the weights of the tuned Sentence Transformer and train a simple classification head (e.g. logistic regression)
Would anyone be interested in a webinar about this type of research? https://cdn.discordapp.com/attachments/922424173916196955/1024279317250854974/setfit.png
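A minimal sketch of how training looks with the library, roughly following the repo README (exact arguments may differ by version, untested):
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# simulate the few-shot regime with a handful of labeled examples
dataset = load_dataset("sst2")
train_dataset = dataset["train"].shuffle(seed=42).select(range(16))

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,
    batch_size=16,
    num_iterations=20,
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
preds = model(["i loved the spiderman movie!", "pesky 'spider-man' film is hopeless"])
```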
cakiki#9145: Congrats Lewis! Is there by any chance a recorded talk introducing this paper?
lewtun#4548: Thanks @cakiki ! There's no recorded talk yet, which is why I asked:
> Would anyone be interested in a webinar about this type of research?
🙂
cakiki#9145: Apologies; I completely missed that last paragraph! Absolutely, I'd personally love a webinar! (I'm sure others will feel the same)
Omar Sanseviero#6198: This is very cool 🙂 I shared with the team as well. A very cool mix of jax + pyodide + other tools, very nice! 🔥
mattros#6976: Definitely keen for a webinar on your few-shot learning work that includes your thoughts/examples on applications.
DiegoT#8170: Hi! With the release of whisper I was thinking about the amount of text data that will be available and the possibility of running topic modelling on texts (no specific plan yet). However, I was wondering what models you guys used besides LDA or BERTopic?
abhi12ravi#2955: We have huge amounts of call center recordings in our org, I wanted to transcribe them using whisper and then run Bert topic modelling + sentiment analysis on generated topics
abhi12ravi#2955: I've used word2vec as well for Topic modelling
Omar Sanseviero#6198: #whisper-gpt might be an interesting channel for you 🙂
gg333#3895: I'm running m2m100 large on CPU, but I get a truncated output. Any idea?
Omar Sanseviero#6198: You probably need to change some parameters in the pipeline (e.g. the max length needs to be specified). How are you running this model?
gg333#3895: When I run the smaller (1.2B) on GPU, basically it's the same code except for the path in which the model is
gg333#3895: It seems related to https://github.com/huggingface/transformers/issues/12775. Changing the config, actually fix the problem
Emanuel Huber#5410: Hey guys! I am trying to understand the NLP community's opinion on current sentiment analysis. I will publicly share the results on my Twitter when it has a significant number of replies. Please, feel free to contribute with your thoughts https://forms.gle/27eFybZ6U9DUgZDE6
Robert1#0234: anyone know the best way to get SFW responses from causal language generation?
Robert1#0234: like even if prompted to be NSFW I want it to still be SFW as much as possible. I know there are bad word ids but I don't think that would cut it? or am I wrong
lewtun#4548: We'll be doing a live event about our recent research on SetFit on **Oct 18 at 15:00 CET** 🔥
Hope to see many of you there!
cakiki#9145: Be sure to RSVP: https://discord.com/events/879548962464493619/1027893081741201508 !
twoplustwoisfour#7261: I am trying to load a simple checkpoint of my T5ForConditionalGeneration model by populating my state dict and failing horribly because it's complaining of a size mismatch. Can someone please help me figure out where I'm going wrong. I've been losing sleep over this, would really appreciate any kind of help (have tried to summarize the issue here - https://discuss.huggingface.co/t/t5forconditionalgeneration-checkpoint-size-mismatch-19418/24119)
78Star#1371: Hey all, I am looking for a NLP ML that I can use for projects(I am a C# developer). I started BotSharp with articulate but they both haven't been updated in a while. Is there something that I can get running in docker you would suggest?
MvW#1425: Tough one... I'd probably do something at decoding time. If you have a lexicon of offensive/toxic words (you can find quite a few online), you can either:
* Generate several likely outputs using top-k random sampling (with or without temperature) and cut out all the outputs which contain NSFW words.
* Use a beam search and immediately cut out paths with NSFW words.
What do you think? Please let me know if you have any other idea, I am quite interested in this topic of model moderation.
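A rough sketch of the first option with transformers generate() (the word list and model are placeholders, untested):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

nsfw_words = {"badword1", "badword2"}  # placeholder lexicon

inputs = tokenizer("Tell me a story about", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=50,
    num_return_sequences=5,
    max_new_tokens=50,
    pad_token_id=tokenizer.eos_token_id,
)
texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
# keep only the candidates that contain no word from the lexicon
safe = [t for t in texts if not any(w in t.lower() for w in nsfw_words)]
print(safe)
```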
spirit-from-germany#1488: Is there any good hash that can be used for text document near duplicate deduplication?
Want to use it for sorting billions of documents by similarity hash
spirit-from-germany#1488: I want to do near duplicate deduplication on the websites in common crawl.
MvW#1425: You mean like Locality Sensitive Hashing (LSH: https://en.wikipedia.org/wiki/Locality-sensitive_hashing )? James Briggs wrote a series of blog posts on these kind of approaches for indexing. Most of it uses Facebook research FAISS library (https://github.com/facebookresearch/faiss). You can combine it with sentence-transformers models (https://github.com/UKPLab/sentence-transformers), for example.
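For near-duplicate documents specifically, a MinHash-LSH sketch with the datasketch library (threshold and tokenization are arbitrary choices here, untested):
```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m

docs = {
    "a": "the cat sat on the mat",
    "b": "the cat sat on a mat",
    "c": "completely different text",
}
lsh = MinHashLSH(threshold=0.8, num_perm=128)
hashes = {key: minhash(text) for key, text in docs.items()}
for key, mh in hashes.items():
    lsh.insert(key, mh)

print(lsh.query(hashes["a"]))  # near-duplicate candidates (includes "a" itself)
```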
MvW#1425: If we are talking about sentences (and not full documents), another much cheaper option is to apply a very simple text simplification (stemming/lemmatization + removing stop-words and punctuations), and then eventually apply synonym substitution (perhaps using wordnet).
Arsive02#8749: Hey everyone, I am working on finding the root cause of customer complaints. I found certain ways that this could be done with. For example, Topic classification, Tagging Taxonomies using Clustering methods... etc.
Is there an efficient approach to this ? Or any Model based approach using T5 or something by guiding it to find the root cause ? The only thing that's stopping me from using T5 is the number of tokens I am restricted to use.
The content i get is in the form of threads. Eg: A mail thread.
So the max length of a sentence cause issues in finding the root cause properly
doink#3458: https://prose.biz/ how to make the classification in this more explainable? Curious to hear the take on this app?
buttercutter#1033: Do we still use BERT for POS tagging?
moath17#1827: anyone have a transformer fine-tuning ipynb example?
Mahmoud7#1241: I've done a couple of notebooks, one uses a pytorch loop, and one uses the trainer API.
https://www.kaggle.com/code/mahmoudlimam/research-idea-generation-huggingface-distilgpt2
https://www.kaggle.com/code/mahmoudlimam/roberta-with-huggingface-pytorch
MvW#1425: The `notebook` directory in github is filled with notebook examples: https://github.com/huggingface/transformers/tree/main/notebooks
I hope that helps.
itsdev#2054: Hey, can anyone share good resources for NLP case study interviews?
Arsive02#8749: is there a way to find the amount of change in the layer-wise weights of a large language model before and after fine-tuning?
I was thinking of taking the weights before fine-tuning (w1) and the weights after fine-tuning (w2), then taking the norm |w1-w2| and normalizing it to get the amount of change. But is this a good approach? Are there some libraries that already do this?
Sander#3278: That seems like a reasonable thing to do, or just abs(w1-w2).mean() -- I've not seen this as a feature somewhere, or seen many people actually care about values like this.
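A quick sketch of that comparison over two checkpoints (paths/names are placeholders, untested):
```python
from transformers import AutoModel

before = AutoModel.from_pretrained("bert-base-uncased")            # pretrained weights
after = AutoModel.from_pretrained("path/to/finetuned-checkpoint")  # hypothetical fine-tuned dir

w1, w2 = before.state_dict(), after.state_dict()
changes = {}
for name in w1:
    if w1[name].dtype.is_floating_point:
        changes[name] = (w1[name] - w2[name]).abs().mean().item()

# layers sorted by how much they moved during fine-tuning
for name, diff in sorted(changes.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{name}: {diff:.6f}")
```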
Arsive02#8749: Alright, I'll try this. Just trying to figure out which layer is affected the most by fine-tuning for a domain-specific task. If the last layer is affected significantly more than the first layer, I'll probably go with freezing the first few layers
Sander#3278: My intuition is that the last layers are always going to be more affected than the first. (when taking the mean, but maybe not when taking the max) Otherwise the pretraining was pretty ineffective.
Arsive02#8749: Yeah, makes sense.
doink#3458: I have been manually going through privacy policy pages and here is an observation I have come up with:
most privacy policies are very similar in nature, and the variance mostly lies in the types of personal information collected, the purpose of collection, and the choices offered like opt-in and delete-my-data; then there are cases where sometimes not enough information is given, e.g. for do-not-track. Curious to know what EDA tests I can do to validate my hypothesis? What similarity metrics can I use?
moath17#1827: Thank you bro
moath17#1827: Thanks I really appreciate that
moath17#1827: Anyone know about TF-IGM? I want to add it as a feature in my dataset
Arsive02#8749: Found interesting results from this research paper @Sander https://cdn.discordapp.com/attachments/922424173916196955/1031518251475275856/Screenshot_2022-10-17_at_4.14.24_PM.png
Sander#3278: What paper is that?
MonsterMMORPG#2198: Are there any models that can punctuate a given text block? like a 1000-word text. like google automatic transcription.
MonsterMMORPG#2198: also which model is best to summarize long text, like 10000 words into 500 words?
dex#2650: Hello, I have a question about the SimpleTokenizer module in the CLIP module
```py
class SimpleTokenizer(object):
def __init__(self, bpe_path: str = default_bpe()):
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
merges = merges[1:49152-256-2+1]
merges = [tuple(merge.split()) for merge in merges]
vocab = list(bytes_to_unicode().values())
vocab = vocab + [v+'</w>' for v in vocab]
for merge in merges:
vocab.append(''.join(merge))
vocab.extend(['<|startoftext|>', '<|endoftext|>'])
self.encoder = dict(zip(vocab, range(len(vocab))))
self.decoder = {v: k for k, v in self.encoder.items()}
self.bpe_ranks = dict(zip(merges, range(len(merges))))
self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
```
dex#2650: What exactly is the purpose of the </w> in the `vocab = vocab + [v+'</w>' for v in vocab]`
Also the paper says that the vocabulary size is 49152, however we do: `merges = merges[1:49152-256-2+1]`, which seems to limit the overall size of the vocabulary to 48894. Why is that?
doink#3458: Can anyone suggest an NLP annotation tool for text classification?
MvW#1425: If you are looking for a free tool, doccano (https://doccano.github.io/doccano/), is pretty nice. If you are willing to pay, I heard good things from prodigy (https://prodi.gy/).
doink#3458: can I use it for text classification? Yes, I am looking for a free tool.
Omar Sanseviero#6198: You can also use the open source Rubrix https://www.rubrix.ml/
MvW#1425: You can use it to annotate data which then can be used to train a model for text classification.
Sander#3278: Labelstudio is also an option, but rubrix looks quite fancy
Sander#3278: One thing I noticed in using OpenAI for generation is that it is really good at knowing when to stop generating. If you ask for a list of 8 items, it stops generating after the 8th item. Most other services (or straight up using a model) will keep rambling on to the token limit. What kind of techniques are used to do this?
MvW#1425: Oh wow, I didn't know about Rubrix. It looks great. Thanks for sharing.
samtube405#0352: Hi All, Does anybody know publicly available codebases to train retrieval augmented language models (e.g., ATLAS from Meta)?
doink#3458: How large can the embedding size be? Let's say you are searching for a word in a large corpus whose frequency is small; how should one approach it?
Also, for text segmentation, curious to know what the current state of the art is for segmentation + embeddings + summarization?
moath17#1827: Is there any function that I can use for model averaging?
moath17#1827: because here I can only train 1 model, but I want to train 3 models and then average them https://cdn.discordapp.com/attachments/922424173916196955/1033158421823897690/unknown.png
moath17#1827: But I don't have an idea of how to do that
blev_labs#1228: Hello, i was wondering if anyone knew of good toolkits to begin development for multimodal transformer networks?
blev_labs#1228: I’ve worked with and developed on transformer networks for quite a while, and want to get into the territory of developing their architecture and functionality myself
doink#3458: How to perform information extraction like this. https://cdn.discordapp.com/attachments/922424173916196955/1033981461008764959/unknown.png
cakiki#9145: can you elaborate?
doink#3458: I scrape over privacy policy text and extract information in the visual format mentioned above, how do I proceed?
ponchatoula#4556: Just to clarify, did you mean you want to list out all the data types collected given the privacy policy document?
doink#3458: Yes!
ponchatoula#4556: I think it will be easier to create a set of all possible data types (and an 'Other' tag for good measure) and then treat it as a multilabel classification problem.
Vassago630#1843: @buttercutter
doink#3458: I have to extract that piece of information from the privacy policy text
orgrim#1636: I'm facing an error while trying to run sentence transformers models on apple M1 - here are my code and the error:
Code:
```
import os
from sentence_transformers import SentenceTransformer, util
os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1'
# model = SentenceTransformer('all-MiniLM-L6-v2', device = device)
device = 'mps'
model = SentenceTransformer('multi-qa-mpnet-base-dot-v1', device = device)
refs = ['hello', 'how are you', 'abc']
sents = ['i am good thanks', 'def','hello how are you doing']
ref_embeddings = model.encode(refs, convert_to_tensor = True)
sent_embeddings = model.encode(sents, convert_to_tensor = True)
scores = util.cos_sim( sent_embeddings, ref_embeddings).detach()
```
Error: NotImplementedError: The operator 'aten::cumsum.out' is not current implemented for the MPS device.
Can someone tell me if there is a fix for this? It links me to an issue where this isn't implemented in pytorch itself..
Omar Sanseviero#6198: Please avoid posting same thing in multiple channels. I just replied in #ask-for-help
MonsterMMORPG#2198: @here anyone have experience with summarizers? which summarizer would work best for software engineering courses? also what is the optimal output size? let's say you have a 1000 token input, should the output be 50, 100? also are there any models that support bigger input sizes, like 10000 tokens?
MonsterMMORPG#2198: https://stackoverflow.com/questions/74228640/which-summarization-model-of-huggingface-supports-more-than-1024-tokens-which-m
ardaaras#8903: Hi everyone, I need to train my own BERT model on a domain-specific Turkish corpus. I need to find the approximate hardware requirements, like the number of GPUs, their sizes, brand, etc. Can anyone help me figure this out?
buttercutter#1033: anything ?
Bukenya Lukman#7432: I have the same question here
Bukenya Lukman#7432: I want to use BERT for multilingual translation from English to 4 local languages. Either a pre-trained model or one built from scratch can work for me. I have failed to put things together. Any practical material or resources will be great
bob#1236: hey i uploaded a 12.2gb model to huggingface but when i redownload it, it's 11.7gb and corrupted
bob#1236: any ideas?
Robert1#0234: I want to fine tune on small conversation snippets. I want to train OPT-30. How many small conversation snippets do I need to do this? what range? do larger models need more examples?
any insights into this area would be super useful.
Robert1#0234: is 100 conversation examples enough?
Elle Neal#0726: Hey 👋 can I ask a bit more about your use case? I’m starting to look at conversations between an agent and customer.
Daniyal#5266: Hi guys, I want to add some words from another language to LayoutLMv2's vocabulary, as I am working on bilingual document entity recognition. How can I do this? There's some Urdu in the documents, so I want to add some of those words to the model's vocabulary.
bob#1236: how do i get the "hosted inference API" preview thingy to work on my model, it gets an error "cached_path not found" when i load it
Omar Sanseviero#6198: Hey there! In general #ask-for-help is the best channel for #questions 🙂 It's harder to keep track and give higher quality answers
Robert1#0234: when running run_clm.py to train models, I notice that for gpt2 it works well, but when I try to tune opt-1.3b it degenerates very quickly into nonsense
Robert1#0234: any ideas what's happening? do I need different processing like <s> tokens or something?
darth_Vader#6916: You can use the Kaggle Kernel for training. It supports Multi-GPU now.
deadmeme77#5784: Does anyone know where I can get the full list of 1,800 tasks of Flan-t5? And how to properly write them in the text input?
Razvanip#0466: does anyone have a github with a PyTorch architecture similar to this: https://cdn.discordapp.com/attachments/922424173916196955/1039116012446023750/image.png
Omar Sanseviero#6198: Yes you could check the paper https://arxiv.org/pdf/2210.11416.pdf
Omar Sanseviero#6198: There's a part with Finetuning tasks which shows the different categories of tasks
Omar Sanseviero#6198: E.g.
> Task mixtures. Prior literature has shown that increasing the number of tasks in finetuning with instructions improves generalization to unseen tasks (Wei et al., 2021; Sanh et al., 2021, inter alia). In this paper we scale to 1,836 finetuning tasks by combining four mixtures from prior work: Muffin, T0-SF, NIV2, and CoT, as
> summarized in Figure 2. Muffin3 (80 tasks) comprises 62 tasks from Wei et al. (2021) and 26 new tasks that we added in this work, including dialog data (Byrne et al., 2019; Anantha et al., 2021; Dai et al., 2022) and program synthesis data (Yasunaga and Liang, 2020; Li et al., 2022). T0-SF (193 tasks) comprises tasks from T0 (Sanh et al., 2021) that do not overlap with the data used in Muffin (SF stands for “sans Flan”). NIV2 (1554 tasks) comprises tasks from Wang et al. (2022c).4
deadmeme77#5784: Thank you very much. It's interesting to see how large the jump is between what's available for now (XL) and the smallest Flan-PaLM model, almost a 10% difference... But that makes a lot of sense.
Oscar Cumbicus#0585: Hello, here is a contribution on text simplification using a noisy channel
https://arxiv.org/abs/2211.03152
MonsterMMORPG#2198: does anyone know how to utilize multiple GPUs? https://stackoverflow.com/questions/74380417/how-to-use-multiple-gpu-at-the-same-time-when-using-models-with-pytorch-and-hugg
dl_amit#8567: Has anyone been able to fine-tune OpenAI Whisper for domain-specific terminology? I am looking for some guidance on this. I have audio files and their transcription files.
Omar Sanseviero#6198: I haven't but this might be useful https://huggingface.co/blog/fine-tune-whisper FYI there is a role in #role-assignment to discuss about ML for audio
teacookies#0825: Can anyone access the Hugging Face website?
makya#2148: I can access it.
Moonknight#3334: Is there any NLP architecture that is built on top of smaller NLP models? I have this crazy idea and I don't know if any exists or if anyone is working on it. I am willing to talk about this with anyone.
Razvanip#0466: is there a way in which we can make a multimodal pipeline? Or at least a tutorial on how to combine features from both audio and text for instance?
deadmeme77#5784: Hmm, I've done a multimodal pipeline but in gradio (speech to text then translation)
deadmeme77#5784: shouldn't be an issue using a python variable to hold the output of one model then use it as input in another
Razvanip#0466: the thing is that we make experiments on discrete emotions and my supervisor wants us to use multiple channels
Razvanip#0466: I actually tried to make a workaround and use the logits of two transformers (one for speech and one for text) and then concatenate the results and feed them into a series of multi-head attention blocks
deadmeme77#5784: that works...?
Razvanip#0466: we shall find out, I don't have access to the dataset yet. I'm on hold because of this🥲
deadmeme77#5784: I mean I hear the newest Nvidia diffusion model outputs images and text at the same time now, so...
Razvanip#0466: if you find another workaround, let me know
milyiyo#9597: Hi everyone,
Which metrics can be used to evaluate paraphrasing models?
Valentina S [April 10]#1357: Hello. Can anyone guide me on how I can start working on Open Source Projects, I am a beginner but I am looking to learn and practice.
Raghav#6205: Hey, can anyone help me with NLP sentence classification? What techniques can I use for feature extraction and other steps...?
myheadhurtstoday#0788: has anyone used the FLAN-T5's models for zero-shot-classification?
cakiki#9145: https://twitter.com/mervenoyann/status/1592862320828776448
spartan#6390: where can i see the memory requirements of language models like T0?
Valentina S [April 10]#1357: thanks a lot! I will check it out!!
Bukenya Lukman#7432: Hello everyone. How can I save the model state so that after training, if the power goes out and I open the notebook again, I don't have to retrain the model to run the tests? It is an NLP model.
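Assuming it is a Transformers model, a minimal save/reload sketch (the paths and the Auto class are placeholders; for a plain PyTorch model, `torch.save(model.state_dict(), ...)` works the same way):
```python
# after training
model.save_pretrained("./my-finetuned-model")
tokenizer.save_pretrained("./my-finetuned-model")

# later, in a fresh session
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("./my-finetuned-model")
tokenizer = AutoTokenizer.from_pretrained("./my-finetuned-model")
```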
Robert1#0234: Hey, could I ask what metrics you guys normally use or recommend when fine-tuning?
Obviously there is the loss value. And I have an evaluation table of responses to fixed prompts, so I can examine what each checkpoint looks like in an imprecise kind of way.
Do you use anything else?
I'm thinking of something like "entropy" to capture overfitting.
Omar Sanseviero#6198: @Leandro von Werra Might have some ideas 🙂
Robert1#0234: would really appreciate it. I have this idea of using entropy to detect overfitting. Others say HellaSwag. But I think I should just see what best practice is.
darth_Vader#6916: https://blog.paperspace.com/automated-metrics-for-evaluating-generated-text/
Leandro von Werra#9428: Ideally you use whatever metric you use to measure downstream performance. It sounds like you are doing LM fine-tuning, so BLEU/ROUGE/perplexity might be good metrics to track. Hard to say without knowing your exact use case.
Robert1#0234: in our case we are trying to train a chatbot to mimic a particular book. I guess in this case there is no classic metric. We want it to maintain the original model's ability to hold a conversation while also being able to say things from the book. We currently score it by manually talking to different epochs, which is slow but quite effective.
MonsterMMORPG#2198: why might I be getting repeated, duplicate output with text generation models? https://stackoverflow.com/questions/74503607/text-generation-ai-models-generating-repeated-duplicate-text-sentences-what-am
cakiki#9145: try passing `do_sample=True` to your generate call
MonsterMMORPG#2198: testing now
MonsterMMORPG#2198: any other parameters do you suggest me to test for improving?
MonsterMMORPG#2198: wow immediate fix
cakiki#9145: you can try changing the value of `top_p` to see how it affects the output 🙂
MonsterMMORPG#2198: how much value do you suggest? also, would top_k improve it?
MonsterMMORPG#2198: and any guess about temperature?
MonsterMMORPG#2198: what is the optimal top_p for text generation models such as Facebook OPT?
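For reference, a hedged example of the kind of sampling settings being discussed here; the values are illustrative starting points, not canonical optima, and should be tuned per model and task:
```python
outputs = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,          # sample instead of greedy decoding (avoids verbatim repetition)
    top_p=0.9,               # nucleus sampling; values around 0.8-0.95 are common
    top_k=50,                # optionally also cap the candidate pool per step
    temperature=0.8,         # <1.0 = more conservative, >1.0 = more random
    repetition_penalty=1.2,  # optionally penalize tokens that were already generated
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```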
twoplustwoisfour#7261: Let’s say I have a `T5ForConditionalGeneration`. Now I feed it a sequence (say, `An apple a day keeps the doctor`). Now, if I want to get the probabilities for two words, say, `day` and `week`, how do I do it?
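One way to answer this is to look at the logits of the first decoder step and compare the two candidate tokens. A minimal sketch (untested; note that a word may split into several subword tokens, in which case you would sum log-probabilities over the pieces):
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

input_ids = tokenizer("An apple a day keeps the doctor", return_tensors="pt").input_ids
# T5 is seq2seq, so score candidates at the first decoder position
decoder_start = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    logits = model(input_ids=input_ids, decoder_input_ids=decoder_start).logits
probs = logits[0, -1].softmax(dim=-1)

for word in ["day", "week"]:
    token_id = tokenizer(word, add_special_tokens=False).input_ids[0]
    print(word, probs[token_id].item())
```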
MonsterMMORPG#2198: Any help for this ? https://discuss.huggingface.co/t/how-can-i-run-this-code-on-kaggle-tpu-runs-fine-with-gpu/26408
Raghav#6205: Same i want to know
gughamors#5671: Hi guys! I am a teaching assistant for a deep learning course at the undergrad level. I have to create the project proposal the students will use as a base for the course's final project. Do you have any suggestions where I can find inspiration? I would like to explore the use of the newer language models.
sh3rcrypt0#0001: Hi guys, i posted a question about #deleted-channel i'm still new in using huggingface
MvW#1425: I am teaching introduction to natural language processing in an engineering school. One of the projects I cover compares increasingly complex models on the same simple text classification problem, starting with simple naive Bayes and building up to fine-tuning RoBERTa. I also ask students to think about the pros and cons of every approach (speed, accuracy, explainability, ...). If I can give you one piece of advice, it's to avoid the typical "transformer goes BRRR!" projects and make them look at the model's limitations. What is it good at and when does it fail?
I am always looking for ideas to update my course and projects. So if you have some, please share :).
MvW#1425: Also a question from my side. Are you aware of language specific communities trying to grow the NLP fields on their languages or language families, like Masakhane on African languages? For example, I heard there is a Turkic languages community, but can't find it.
ponchatoula#4556: When I was in school, one thing I wished my prof did was walk through the code of the model along with the theory. Like you would have the paper on one side and show what is implemented how and where in the repo. Bonus points if it's an implementation from established packages like Hugging Face.
Valentina S [April 10]#1357: Hello everyone,
I am looking for someone who could guide me with learning about NLP and how to use Hugging Face. I am a bit confused and your help would be greatly appreciated.
Omar Sanseviero#6198: Please don't post the same content in multiple channels 🙏
Muhammad Agung#7254: Anyone have a guide how to integrate NLP model with KenLM?
cakiki#9145: KenLM is itself a model. Can you elaborate on what you mean?
Muhammad Agung#7254: I think I should ask how to do beam search using KenLM
I want to integrate it with my OCR and use beam search for typo correction
cakiki#9145: Ah I see, so basically scoring the candidates using a KenLM model to prune the search space. I'm actually not sure if HF's generate function lets you plug in outside logic to prune the beams; I'd be very interested to find out
Muhammad Agung#7254: I see, thank you
kabachuha#6934: So, I'm learning the transformers' inner workings. From what I've read, they use masking for training: some parts of a sentence are hidden and the NN tries to fill in these gaps during training.
Isn't that the very definition of denoising, but applied to sentences with word-noise instead of pictures with pixel-noise?
muskannnnn#9204: hi guys!
muskannnnn#9204: today I'm doing text classification using CountVectorizer features and an SVC classifier.
I changed the runtime type on Google Colab to GPU.
However, while training the classifier, I get the notification that "your notebook is not using a GPU. Do you want to turn it off?"
Merve#3234: @muskannnnn why do you require a GPU for support vector classifier?
dl_amit#8567: Hello... Has anyone deployed Whisper as an AWS SageMaker endpoint? Is there any guide available for this?
muskannnnn#9204: Hi Merve! I was using GridSearch to find the best SVC hyperparams,
and the classifier wasn't getting trained even after one hour!
Merve#3234: @muskannnnn did you set n_jobs = -1?
also something I used to do was using sklearn-intelex instead https://intel.github.io/scikit-learn-intelex/ sklearn doesn't provide any GPU support. so it's normal
it's intended to be like that to keep sklearn lightweight + you reeeaallly don't need it even if you do grid search. set number of jobs to be high and you should be fine
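A minimal sketch of the parallel grid search being described; the parameter grid is illustrative and `X_train`/`y_train` stand in for your CountVectorizer features and labels:
```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, n_jobs=-1, cv=3, verbose=1)  # n_jobs=-1 uses all CPU cores
search.fit(X_train, y_train)  # X_train / y_train: your vectorized features and labels
print(search.best_params_)
```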
mz#9453: I would like to fine-tune a masked model, but I plan to mask specific words and entities in training data. What is the best approach to train a model on such custom-masked data?
Merve#3234: @mz MLMs are used to learn distribution of a language. You mean, you want to mask specific words? Can you elaborate why you want to do it?
mz#9453: Hi Merve, I would like to train a model where the masked words are mainly entities such as names, currency amounts, geolocations, etc. so, in my training data, I would like to mask the majority of these entities, so my model can better generate meaningful sentences from passed entities during inference. Does this make sense? Thanks for your support.
muskannnnn#9204: yes, I set it -1, will change the value.
also, thank you so so much for your help!
singer#3395: i've never used an API before. Can someone show me how to use GPT-Neo and the Hugging Face accelerated Inference API? (few-shot learning)
singer#3395: I don't know how to use it from Colab
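A rough sketch of calling the hosted Inference API from Colab with `requests`; the model name, token, prompt, and parameter values are placeholders and should be checked against the current API docs:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
headers = {"Authorization": "Bearer hf_xxx"}  # your Hugging Face API token

# few-shot prompt: a couple of solved examples, then the one to complete
prompt = (
    "Tweet: I hate this movie. Sentiment: negative\n"
    "Tweet: I love this movie. Sentiment: positive\n"
    "Tweet: Not bad at all. Sentiment:"
)
payload = {"inputs": prompt, "parameters": {"max_new_tokens": 5, "return_full_text": False}}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```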
El Chapo#5065: hey guys, sorry im new here and looking for advice on an ML chatbot im making as my android final project. i'm trying to use the google/tapas-mini-finetuned-wtq model to make an open domain question and answer bot. I have it working in colab but im trying to run it on android through deepJavaLibrary but i cant get the tokenizer to work
El Chapo#5065: for something like bert-base-uncased it works fine because there's a tokenizer.json, but tapas doesn't have that. Anyone have any ideas how I could get a tokenizer.json? (I'm fairly new to NLP, we've done lots of theory but now I'm trying to build something)
El Chapo#5065: i was messaging someone that works on DJL and he was saying to compile the rust tokenizer for android but even then im not sure it would work without the tokenizer.json file
El Chapo#5065: my best guess is there's no fast tokenizer so it doesnt let me load it in DJL
El Chapo#5065: or does anyone know what tokenizer google/tapas-mini-finetuned-wtq uses? maybe i can get it from another model
El Chapo#5065: does anyone know how the tapas tokenizer flattens the table and query data?
tj#8842: Hello, does anyone here have experience fine-tuning Longformers?
tj#8842: I've been trying to fine-tune one with a batch size of 1 and 24GB of VRAM. But when the input_ids are long [more than 512 tokens] I get a CUDA out-of-memory error.
My understanding was that the sliding window attention in Longformer would mean I can process input_ids of more than 512 tokens with a constant model size [effectively the same size as a RoBERTa].
Is my understanding incorrect? Or is there a further nuance in the PyTorch implementation that I am misunderstanding?
tj#8842: Ok I think my understanding was mistaken.
tj#8842: It's not the entire transformer stack that slides, it's just the attention layers, so I think they initialize multiple RoBERTa fully connected layers when the tokens exceed 512, and that causes memory issues during training
rankzoid#8950: @everyone is there anybody who wants to participate with me in manipulation of SEO search ranking using NLP ?
Merve#3234: also what you might want to do is to try and narrow down the grid iteratively. good luck!
Merve#3234: I'd encourage you not to do this; use regex for the amounts and an existing NER model for locations instead (see https://huggingface.co/spacy). Back in the day I used this to handle all of them https://hub.docker.com/r/rasa/duckling and it works like a charm.
if you still want to try to do MLM do let me know 🙂
mz#9453: Thanks @Merve for your advice. I appreciate it. For the sake of learning and experimentation on other problems, I would be grateful if you can let me know the best approach to do MLM to tackle this. What I am interested in knowing is how to tweak a text generation model on specific custom domain data, so it can generate better phrases tailored for that domain. What would be the best approach?
dl_amit#8567: Hi All, I am trying to implement OpenAI Whisper as a SageMaker batch transform to support bigger audio files. But I am getting the following error in the transform job: " File "/opt/conda/lib/python3.8/site-packages/transformers/pipelines/audio_utils.py", line 39, in ffmpeg_read raise ValueError("Malformed soundfile")". I believe this is due to the missing soundfile package, but is there any way I can pass my own requirements.txt file to the SageMaker batch job? Please note that I am following https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb . I believe the error is there as per https://huggingface.co/transformers/v4.11.3/_modules/transformers/pipelines/audio_classification.html but I don't know how I can pass my own requirements.txt file with the transformers version and the soundfile package. Thanks
MonsterMMORPG#2198: Are they going to further tune the existing model of Whisper?
Omar Sanseviero#6198: We have an event in the audio category in which GPUs will be provided for people to do this
MonsterMMORPG#2198: the original data and the number of GPUs probably used for the large model are colossal. How many GPUs are you going to provide?
MonsterMMORPG#2198: It would be really cool if they released a further-tuned, improved large model
El Chapo#5065: does anyone know if there's a way to use torchscript to run on a tokenizer on Android?
KennethEnevoldsen#5860: Hi, I am looking for papers (reports, etc.) which record performance on e.g. GLUE as a function of training steps. So far I have only been able to find one example, but I am sure there are more out there. Hopefully some of you can help me 🙏
spartan#6390: huge noob here, I'm trying to transcribe an entire audio file using openai/whisper but I'm not sure how to get true continuity. Do most people just divide up the NumPy-formatted audio from soundfile?
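One common approach is to let the ASR pipeline chunk the audio instead of splitting the NumPy array by hand; a sketch (the model size and chunk length are just examples, and ffmpeg needs to be available to decode the file):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    chunk_length_s=30,   # process long audio in overlapping ~30s windows
    device=0,            # GPU index, or -1 for CPU
)
result = asr("long_recording.wav", return_timestamps=True)
print(result["text"])
```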
thoraxe (Erik Jacobs) [Red Hat]#4940: I've got a generation task where I want to use a CSV or JSON as a prompt and generate an article (long-form). I want to fine-tune one of the existing NLP models with pairs of CSV/JSON+articles. Think sports recaps, financials recaps, etc.
1) I'm struggling to figure out the right concoction of search terms to lead me to an example. I have found a fair number of fine-tuning examples but they are not really aligned with my use case. They are often dealing with simple language prompts that generate summaries or longer text, but not "data" as the input.
2) i'm dealing with large token quantities. On the input side: a meaningful CSV of results from a sports event could have hundreds if not over 1000 items. Think about something like baseball stats. On the output side: a small article can be 1000 words. I have seen references to maximum token counts and they're usually not in the 1000s but maybe I am misinterpreting.
Would love some thoughts here!
thoraxe (Erik Jacobs) [Red Hat]#4940: Started a thread.
IMJONEZZ#7844: Can we do any self-promo in this channel? I made a cool TTS thing with diffusion models and it’s doing voices for my YouTube channel now.
Robert1#0234: is text-davinci-003 a 175B model, anyone know?
paulius#5922: Does anyone have any resources about tile coding with RL?
MonsterMMORPG#2198: what kind of voices? I am interested, you can direct message me
Raid'err#5389: Are there any good explanations of adversarial weight perturbation in the context of NLP somewhere?
BLT#7913: In terms of translation tasks, are today's open-source LLMs getting SOTA results already?
askaydevs#5262: Hi All. Recently I have been analyzing customer emails. The goal is to flag emails where the customer expressed frustration. I have tried out-of-the-box emotion detection models and got mixed results with quite a few false positives. Next, I manually gathered all the emails where the customer was actually frustrated, say 100; then for every new customer email I calculate a similarity score against every gathered email, and if the average similarity is greater than a certain threshold (0.75), I classify it as a frustrated email, else not. Feedback from the community would be really helpful for me to proceed in a guided direction. Please provide any meaningful resources/guidance/comments. TIA.
Kiril Videlov#6597: which models did you try? i think your keyword to search for is 'sentiment analysis'. I think people have had success doing sentiment analysis using sentence transformers
CudiBavshvi#2328: Hey,
Please let me know if this is the wrong place to post. I'm looking for a partnership.
I'm Temur, Software Engineer from Georgia. (https://www.linkedin.com/in/temur-chichua)
I'm a co-founder of Anima Chatbotics, a startup providing chatbots that can speak and understand the Georgian language and I'm the head of the programming department at the Cyberlaboratory of Ilia State University.
I've been participating in different hackathons and workshops organized by Huggingface. I even got a cool shirt and a cup with the "Need for Speech" logo on it. You can check my work at https://huggingface.co/Temur. I'm an ML enthusiast and love to learn more and more about it.
In University with amazing students and help from Professors, we are building the open-source Georgian Language Processing platform that includes tools for lemmatization, normalization, etc. (https://qartnlp.iliauni.edu.ge/ - the project is still under construction)
We also have ~16GB of raw textual data and want to create the first open BERT-based model and publish it open through the hugging face platform. Unfortunately, we don't have enough computing power at the moment to pre-train the model on our data, and asking for the budget from the university takes months to get approved for projects like this.
Is there any way we can collaborate or get help to find the computational resources to train and publish the model? let me know
askaydevs#5262: Currently using EmoRoBERTa for emotion detection. From the analysis, and from talking to domain experts we understood that a small fraction of the sample (~5-10%) are true frustrations. At present after doing emotion detection and some manual correction on top of that we are seeing ~3% coverage.
mustafanow#7959: Hi folks,
I just started to work on general purpose Named Entity Recognition from unstructured text in English.
Can you share your experience on what is the best English dataset and best model to train?
PS: Google has an NLP API but I do not want to use it: https://cloud.google.com/natural-language/
Thank you 🙂
Cahya#4026: Hi Temur, you could try to apply for the TPU resource from Google https://sites.research.google/trc/about/
CudiBavshvi#2328: Thank you so much for sharing this.
Vision#6010: Are there any good resources for learning seq2seq, attention mechanisms, and transformers? I want to learn the math intuition behind them, not the implementation
mindlessQ#1678: These two resources helped me out a lot:
If you'd like to learn about transformers purely from a mathematical point of view, these two resources are great: https://www.apronus.com/math/transformer-language-model-definition
Formal Algorithms for Transformers(https://arxiv.org/abs/2207.09238)
Or if you are more of a visual learner you could also check out this article (https://jalammar.github.io/illustrated-transformer/), followed by a video by Łukasz kaiser himself. Hope this helps
satsuroki#3326: Looks interesting
Vision#6010: @mindlessQ thanks for the resources
Arsive02#8749: What am i missing here ? Sorry the limit exceeded 2000 characters so i had to attach the code as file https://cdn.discordapp.com/attachments/922424173916196955/1050289669939662858/Screenshot_2022-12-08_at_11.18.13_AM.png,https://cdn.discordapp.com/attachments/922424173916196955/1050289670350712892/message.txt
askaydevs#5262: Hi. When resuming pre-training of BERT will it also include new words in vocab post-training? i.e. Let's say I have resumed pre-training of BERT for MLM and there is an unseen word "cms" in the training data, so post-training will this "cms" get included in the vocab?
ponchatoula#4556: If you didnt explicitly tell the tokenizer that it is a new word, no. It will be split into subwords that the tokenizer knows
askaydevs#5262: So there is no way to include an unseen word in BERT vocab without subword tokenization?
ponchatoula#4556: (assuming you are using HF, I know where we are but still) there is a series of steps involving some tokenizer methods and model methods that you need to call to add new (still un-tuned) words
ponchatoula#4556: I am not trying to be cryptic but I cant remember what it involves exactly
ponchatoula#4556: you might find an example somewhere in the doc
askaydevs#5262: Looking for something similar. Ping me if you come across something like this.
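For reference, the usual pattern for adding new whole-word tokens looks roughly like this ("cms" is just the example word from the question; the new embeddings are randomly initialized and still need training, e.g. by resuming MLM):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

num_added = tokenizer.add_tokens(["cms"])        # register new whole-word tokens
model.resize_token_embeddings(len(tokenizer))    # grow the embedding matrix to match
print(f"added {num_added} tokens")
```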
Aiko#5365: i remember seeing something on hugging face's website that they were training a GPT-sized model
Aiko#5365: did they finish training that?
Aiko#5365: and how many parameters was it?
Omar Sanseviero#6198: I think you're talking about https://huggingface.co/bigscience/bloom
Omar Sanseviero#6198: @cakiki Knows everything about it
Aiko#5365: yes, thats it
thanks
do you know if they are going to train a 100 trillion parameter one?
BenBot#3135: Damn. I doubt any time soon. I can’t imagine the cost of training a model that size.
cakiki#9145: The BigScience workshop has concluded. There are no such plans 🙂
cakiki#9145: Switch transformers are already on the hub 😃 https://huggingface.co/google/switch-c-2048 -> 1.6 Trillion parameters 🤯
Aiko#5365: damm
Aiko#5365: could i do inference of that locally on an rtx3090 in 1 second?
cakiki#9145: not without offloading to disk. The model card has code snippets that show you how to do that with `accelerate`
cakiki#9145: Not in 1 second though I don't think
Aiko#5365: ok, thanks
blev_labs#1228: Hey, looking for advice on fine-tuning code-generative LLMs like: https://huggingface.co/Salesforce/codet5-large-ntp-py
Is this achieved like fine-tuning a normal BERT model or something of the like, or is there a better way to fine-tune code-generative models?
muskannnnn#9204: Hello server!
I'm working on a binary text classifier: toxic or not.
I'm working with this dataset from the Toxic Comment Classification Challenge
-> https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/overview
However, the test labels provided by the competition organisers are kind of confusing (the entire data frame is populated with -1 values). Has anyone worked with this dataset before? https://cdn.discordapp.com/attachments/922424173916196955/1051423017999269928/Screenshot_2022-12-11_at_2.29.23_PM.png
cakiki#9145: Can you run `df.describe()`?
JoAmps#2144: Hello, can anyone suggest how I would develop a generative model (GPT-2 or Neo) that takes a text and generates modified versions of it? So it's not completing the text; it takes a text and generates modified versions of it, based on a fine-tuned model.
askaydevs#5262: How can I utilise `facebook/bart-large-mnli` for doing "multi-label" text classification (zeroshot)?
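For the zero-shot multi-label question above, a minimal sketch; the key is the `multi_label` flag so label scores are computed independently instead of summing to 1 (the example text and labels are placeholders):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The phone arrived late and the screen was cracked.",
    candidate_labels=["shipping", "product quality", "billing"],
    multi_label=True,   # each label gets an independent score
)
print(list(zip(result["labels"], result["scores"])))
```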
Cahya#4026: I think you just need to train the model with something like “input: original text\noutput: modified text”
And during inference you can feed it with “input: your original text”
This will give you the text like “output: …”
JoAmps#2144: Okay thank you. Regarding the input, I don't currently have modified texts (to give to the model for training), but I would like the generative model to give me the modified texts after training and inference. In that case, how do I structure the inputs to the training model?
MvW#1425: According to the file descriptions, I suppose you need to filter out the -1s given test_labels.csv. Note that they updated the competition on the following year: https://www.kaggle.com/competitions/jigsaw-unintended-bias-in-toxicity-classification https://cdn.discordapp.com/attachments/922424173916196955/1052250013297803285/image.png
Burn#1718: Hello! I am a hobbyist playing around with text generation from a creative perspective. I have an RTX 3090 available to me and was wondering if I could ask if anyone could point me in the right direction on maximizing the capability of that card for generating text and possibly finetuning. Happy to clarify if my question doesn't make sense. Thanks!
gsarti#9616: Hey all, our new interpretability library Inseq with a focus on NLG models was finally released! 🥳 Inseq is based on 🤗 transformers and can be used to attribute the generations of almost all causal/seq2seq models on the Hub! Find it here: https://github.com/inseq-team/inseq
aolko#7301: hi there,
i bet everyone has heard of and/or used ChatGPT.
i'm trying to clone it using the available FOSS models, like bloomz; here's my PoC https://gitlab.com/aolko/ai-chatbot-poc (and i'm not a data scientist)
the problem i've noticed:
the model is incredibly incoherent and dumb
how do i solve that?
aolko#7301: here's how a dialog looks like https://cdn.discordapp.com/attachments/922424173916196955/1052270252496519209/image.png
aolko#7301: - wtf is with the sudden `is a problem with code` unprompted context?
- whytf does it append to the responses? It should only happen in the input, not the output
aolko#7301: 👀
Omar Sanseviero#6198: Very cool!
Omar Sanseviero#6198: Will share it with the team!
Skyfall#9082: Hello everyone !
I'm trying to train a codeparrot-like model with default settings on a specific programming language. I have about 400k samples in my reduced dataset, but training seems to be very slow. I'm using the accelerate lib on an NVIDIA A10G (24GB of VRAM); one epoch takes about 24h with a batch size of 6 (otherwise I don't have enough memory). Do you have some tricks or things to check to speed up the process?
How many times, in general, does the model need to see the dataset to achieve correct training? Thanks for your advice 🤗
spartan#6390: Anyone know if Pipelines support zero-copy? I have it set to load onto only one GPU using a device map, but it still seems to preload the entire model into DRAM
JoAmps#2144: Is it possible to develop something like this using open source tools like hugging face gpt neo? I generated this using chatgpt by just sending this prompt https://cdn.discordapp.com/attachments/922424173916196955/1052625276745551913/Screenshot_2022-12-14_163533.png
DiegoT#8170: Hey all, I am reading the Transformers book; in chapter 2 they use the emotion dataset, however an error is raised because a file is missing on Dropbox. I found a workaround on GitHub using emotions = load_dataset("SetFit/emotion")
aolko#7301: guys, come on
aolko#7301: it's been 2 days
Omar Sanseviero#6198: This is not a Hugging Face bug unfortunately, so there's not much we can do. The dataset owner needs to reupload their files as they were using Dropbox for the hosting https://github.com/dair-ai/emotion_dataset/issues/5
satsuroki#3326: what is the current SOTA in machine translation? We have models like BLOOM and all this stuff. Any idea?
KillerKhan#5889: Hello everyone. I'd like to learn how to use transformers to train my own agent. My focus is digital marketing. I've got a lot of books and training on the topic and was wondering how I go about starting this 🙂
lritter#8349: what is the difference between mt0 and bloomz? i'm reading the model card up and down, i've tried to google and read articles about it, and nowhere does it seem to say, not one word, what sets these two models apart
lritter#8349: the only difference i've noticed is in usage; mt0 requires `AutoModelForSeq2SeqLM`, while bloomz uses `AutoModelForCausalLM`
cakiki#9145: Did you consult the paper? https://arxiv.org/abs/2211.01786
lritter#8349: ah, of course. i forgot.
lritter#8349: to answer my own question: bloomz is a transformation on bloom, whereas mt0 is a transformation on mT5, a model by google.
cakiki#9145: Both models are multilingual
lritter#8349: i'm also trying to figure out how to iterate through generated tokens one by one so i can start reading before it is done generating; i have found someone in `ask-for-help` who had the same issue and apparently solved it, but did not document how
cakiki#9145: Are you using the model's `generate` function?
lritter#8349: yep.
cakiki#9145: There's a stopping_criteria parameter or class (you'd have to double check) that lets you tell `generate` to stop after one token. So you could use that to generate token by token I suppose
lritter#8349: i suppose i'd also have to feedback the hidden state so it can continue where it left off
lritter#8349: a python generator based pattern would be more helpful here
cakiki#9145: Yes, that's handled by generate
cakiki#9145: Might be some better way to do it but I'm not sure
lritter#8349: this appears to work, though i don't know if it's the most efficient or most correct way to do it; particularly whether sampling or beam search still works correctly this way. And I can't tell when generation is actually complete either. The model will just babble on forever 😉
```
while True:
    inputs = model.generate(inputs, max_new_tokens=1, **opts)
    print("\r" + tokenizer.decode(inputs[0], skip_special_tokens=True), end="")
```
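One way to address the "can't tell when generation is complete" point is to stop once the model emits its end-of-sequence token, assuming the tokenizer defines one; a minimal sketch building on the snippet above:
```python
while True:
    inputs = model.generate(inputs, max_new_tokens=1, **opts)
    print("\r" + tokenizer.decode(inputs[0], skip_special_tokens=True), end="")
    # stop once the model emits its end-of-sequence token, if the tokenizer defines one
    if tokenizer.eos_token_id is not None and inputs[0, -1].item() == tokenizer.eos_token_id:
        break
```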
aolko#7301: i want to remind you about my questions
https://discord.com/channels/879548962464493619/922424173916196955/1052268063678005248
Burn#1718: Hi all, wondering if you could help me phrase a question? I want to look into approaches for improving GPT-2's "memory" of past prompts but I am having trouble finding good articles/papers on the matter. I think I may be asking the wrong question so any guidance would be really appreciated.
lritter#8349: @Burn GPT-2 is a text completion engine, so the only form of memory it recognizes is what else is in the input text
Burn#1718: Thanks for the reply! That tracks with what I understand. Are there ways of structuring the input to improve "memory", or is simply feeding back some of the previously generated text the best one can do?
cakiki#9145: Have you seen this blogpost? https://huggingface.co/blog/rlhf the coherence you're after comes from 1- bigger models, 2- different training objectives.
aolko#7301: not sure how big I should go, given that:
- i'm going to run it locally
- my gpu ain't that beefy
aolko#7301: but i know for sure that bloomz is too big
cakiki#9145: You can "offload" big models into memory and onto disk (at the expense of latency of course). The Accelerate library handles that: https://huggingface.co/docs/accelerate/usage_guides/big_modeling
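A minimal sketch of that offloading, assuming `accelerate` is installed; the model name, dtype, and offload folder are placeholders:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloomz-7b1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",          # place layers on GPU, CPU RAM, and disk automatically
    offload_folder="offload",   # spill whatever doesn't fit onto disk
    torch_dtype=torch.float16,  # halve memory use vs fp32
)
```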
aolko#7301: yeah, that's too advanced; it shouldn't be.
regardless, still no answer about the appending
Burn#1718: How bad of a hit of latency are we talking?
Skyfall#9082: I am interested in the answer too ^^ I'm using the codeparrot project with the accelerate pkg; it could explain why the training is slower than expected ^^
JeremyKirshbaum#6893: Hey, not sure if this is the right channel for this, but does anyone know how ChatGPT was able to increase the max token length for input by such a dramatic amount over vanilla GPT-3? Could that really be achieved with fine-tuning?
!!Puffy Bird!!#7496: I recall BLOOM could do that because it used alibi
Joneleth Irenicus#3903: when a system returns a natural log confidence how can i convert it to a percentage?
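Assuming the "natural log confidence" is a log-probability, you just exponentiate it (the input value here is only an example):
```python
import math

log_confidence = -0.2231           # example log-probability returned by the system
probability = math.exp(log_confidence)
print(f"{probability * 100:.1f}%")  # ~80.0%
```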
jonbot#0552: I am trying to build my own continuous bag of words model. I made a model class and it works for one input (word context) at a time. How do I modify the class such that it can take batches of inputs?
jonbot#0552: https://cdn.discordapp.com/attachments/922424173916196955/1055858464653578280/cbow.py
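The attached file isn't reproduced here, but a generic sketch of what a batched CBOW forward usually looks like (class name and dimensions are illustrative, not a fix to the specific attachment): `nn.Embedding` already accepts a 2-D batch of context ids, so the main change is pooling over the context dimension.
```python
import torch
import torch.nn as nn

class CBOW(nn.Module):
    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, embed_dim)
        self.linear = nn.Linear(embed_dim, vocab_size)

    def forward(self, context_ids):
        # context_ids: (batch_size, context_size) instead of (context_size,)
        embeds = self.embeddings(context_ids)   # (batch, context, embed_dim)
        pooled = embeds.mean(dim=1)             # average over the context window
        return self.linear(pooled)              # (batch, vocab_size)

# usage: a batch of 2 examples, each with a context window of 4 word ids
model = CBOW(vocab_size=5000, embed_dim=100)
batch = torch.randint(0, 5000, (2, 4))
logits = model(batch)                           # shape (2, 5000)
```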
MonsterMMORPG#2198: looks like there is no model checkpoint for improved large model
MonsterMMORPG#2198: https://huggingface.co/spaces/whisper-event/winners?dataset=mozilla-foundation%2Fcommon_voice_11_0
grub#4239: Are there any embedding models on hugging face?
grub#4239: Also how do they compare to the OpenAI ones?
grub#4239: Anything that is small enough to run locally?
MvW#1425: Do you mean sentence embeddings? Like the ones on the sentence-transformers library? If I am not mistaken they are also on the hub.
Nils Reimers who is the main author of the sentence-transformers library wrote a blog post showing the openai sentence embeddings didn't seem worth the price: https://link.medium.com/SwAfy50R2vb
grub#4239: Yup, they recently released a new model that's significantly cheaper now and performs better.
KaratsubaButSlower#9891: Hi guys
Does anyone know the origin of this error and how to fix it -
cannot import name 'AdapterConfig' from 'transformers'
I can't seem to find any references to AdapterConfig in the HF docs either, but a lot of places seem to use this import, including a code base I was trying to run
KaratsubaButSlower#9891: In case someone faces this, idk how or why, but this installation fixes it -
`!pip install -U adapter-transformers -q`
cakiki#9145: For context: this is a 3rd party library on top of HF transformers
KaratsubaButSlower#9891: Yupp the import kept throwing me off
Thanks
Fayne#9881: EDIT: The post was about running flan-t5-xxl with 8bit precision to save memory, and the resulting error message. But turns out my issue was that tokenizer input_ids had to be sent to cuda device
rovo#5468: I'm trying to find something that is inventive and creative with words, taking inspiration from something else, along the lines of what Stable Diffusion does with images, but run with a text prompt to generate more text. Am I in the right place? 😀
rovo#5468: I was initially attracted to ChatGPT for that but have realized it's probably not the right tool.
ponchatoula#4556: Why do you think chatGPT is not the right tool? All text generators work pretty much the same afaik
rovo#5468: I don't know enough about any of it to really say. My hunch is ChatGPT is being put on rails to stay within a specific focus. It mostly seems good for research, great at condensing large amounts of info, creating outlines, generating text, and some conversational banter. I haven't found a way to get it to reveal any insights by combining subjects. When you play with Stable Diffusion or MidJourney, over time the results can become very inspiring. I was hoping to do something similar, just with text.
ponchatoula#4556: yeah that is pretty subjective, you might have to just try all the tools out there to see if anything fits what you want
tjamil-bwp#4627: Hi,
This is Tariq Jamil (https://www.linkedin.com/in/tj-yazman/)
I have something to share about NLP
pip install tj_preproc
My new optimized text processing package.
https://pypi.org/project/tj-preproc/
Kyoumi ao#2227: Hi everyone, I want to ask about federated learning in NLP.
For NLP tasks, how can we generate the vocabulary for the learning process?
Should I generate the vocab using public data, so that I have a public vocab, and distribute it to the client side?
Thank you.
Pratibha#7658: Hi,
Need help on question answering.
I have a dataset of question and answer pairs; how do I extract the questions that are not answered or left blank?
MvW#1425: I guess it depends on the type of model you want to train with.
* For an extractive Q&A using an encoder only model like BERT, you can train your model to return a span going from 0 to 0, using the [CLS] token. One of the examples notebook does that: https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb
* For an abstractive model, like T5, you can probably generate an empty answer, or some kind of "No answer" answer.
Pratibha#7658: @MvW Thank you !! for sharing the link , i will go through .
thomasc#6333: Does anyone have any links/blogs about how to fine-tune a model to be a chatbot using Hugging Face models?
Parth#2240: If I essentially want to take the LongT5 model (which is a text summarization model) and train it to work with a different language than it was initially trained on, would I have to pretrain or train from scratch? It would be great if someone could provide some guidance
maheshpec#9299: hi everyone, can someone point me towards any generalized models that Key Information Extraction from documents (Similar to https://arxiv.org/pdf/2206.11229.pdf)
maheshpec#9299: Model heads available with LayoutLM(https://huggingface.co/docs/transformers/v4.25.1/en/model_doc/layoutlmv2) seem to be geared towards Token classification but not trained on DocVQA
maheshpec#9299: something like token classification for financial statements but with output as a generalized json
MvW#1425: I might be wrong, as I am not that familiar with Key Information Extraction, but if it's similar to keyword extraction or topic modelling, you might want to take a look at BERTopic: https://maartengr.github.io/BERTopic/index.html
jugzy#4477: Hi all 👋 Can someone suggest me good models for NER for medical texts? Specifically tags like drug names and their reactions? Is there anything in HF model library or otherwise?
vilim#1404: super new to NLP- tokenizers convert text to integer sequences, and the model takes integer sequences as input? seriously? I would think integers would be one-hot encoded if the vocabulary were not so large
maheshpec#9299: Thank you Marc. I will go through it
MvW#1425: AllenAI developed custom pipes and models for scientific documents. Especially, they have a few NER models for biomedical data: https://github.com/allenai/scispacy
MvW#1425: To my understanding, using integers instead of one-hots is just an implementation trick. Instead of doing the full matrix multiplication one-hot x embedding matrix, which returns a line in your embedding matrix, you just do a lookup using the corresponding integer.
jugzy#4477: thanks @MvW ! I will give this a read. I am also looking into MedspaCy.
vilim#1404: thanks for the explanation :) looks like torch.nn.Embedding was my answer
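A tiny sketch of the equivalence being described: an integer lookup in `nn.Embedding` gives the same result as multiplying a one-hot vector by the embedding matrix (vocabulary size and dimension are arbitrary here):
```python
import torch
import torch.nn as nn

vocab_size, dim = 10, 4
emb = nn.Embedding(vocab_size, dim)

token_id = torch.tensor([3])
one_hot = torch.nn.functional.one_hot(token_id, vocab_size).float()

lookup = emb(token_id)            # integer lookup
matmul = one_hot @ emb.weight     # explicit one-hot x embedding matrix
print(torch.allclose(lookup, matmul))  # True
```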
Radi Cho#7753: People are nowadays using zero-shot agents such as ChatGPT as search engines. While they are good at answering the most popular questions, sometimes they miss historical, factual, and numerical correctness. So how could this aspect of LLMs be successfully evaluated (in an automated fashion)? Please point out research that you're familiar with.
I imagined a dataset that contains a large number of general questions to be given to zero-shot models. For example, "How tall is the Great Pyramid of Giza?" where the expected answer is "139 meters". Or reasoning questions again based on knowledge: "If you have three apples, two tomatoes, and 32 strawberries, how many fruits do you have?" with an answer of "34" or "thirty-four" (the model should understand what we call a fruit).
However, if we had such a resource, the evaluation process would still be fuzzy because the LLM may answer, "The Great Pyramid of Giza is tall 139m." or "193 meters". The first is not containing the exact answer but is correct, so we cannot just check if the exact entity is included in the response. But we cannot use closeness metrics (such as BLEU) either since "193 meters" sounds a lot like "139 meters" but is entirely wrong.
satsuroki#3326: Hi for the next decade what will be the trend in NLP ? can we say that the speech recognition problem is (almost) solved ?
karlitucha#3934: Do epochs matter when training LLMs due to the fact that there are so many samples in each iteration?
Vadi#3695: How can I run bloomz locally on my 16gb rtx 4080?
Vadi#3695: The bit that I'm struggling with is fitting it all into vram.
wassim#0589: hey guys, has anyone already worked on affinity testing? I need some help
MvW#1425: On fine-tuning tasks, my experience is it still does, just a lot less. When I have a large training set (1M+ samples), the model I fine-tuned tended to converge in a couple of epochs (>=2). From there, gains were quite small.
Iron-Bound#2881: Looked at the batch size or lower bit precision
https://huggingface.co/docs/transformers/perf_train_gpu_one#batch-sizes
Vadi#3695: Thanks - and does that apply for inferencing as well?
Vadi#3695: I don't see the same options in the help pages :(
Kiril Videlov#6597: Hey folks! I would like to learn about inserting text between paragraphs (i.e. text generation with provided prefix _and_ suffix prompts).
OpenAI have this functionality on their API — https://beta.openai.com/docs/guides/completion/inserting-text and I basically I would like to learn how they do it.
I read through the "whole word" MLM example https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm but I'm not sure this is quite it.
Any ideas on what to check next?
Robert1#0234: i have a bug in transformers + torch
torch.multinomial is often returning very unlikely tokens, with probabilities like 1e-7, when sampling
Robert1#0234: seems it cannot handle sampling from 50,000 values where some of them are very small?
Robert1#0234: anyone seen this before?
Robert1#0234: 100% a bug btw not ravings of a mad man
MediaProphet#9559: is there a platform that is able to generate system diagrams atm?
i.e. the user describes a system, or works with the tool to define one (business process, software process, etc.), and it is then able to generate a system diagram that describes the solution that has been defined...
MediaProphet#9559: by extension, a similar sort of thing might also be the creation of infographics...
include#7282: I don't know anything like that. Would be interesting to see.
Omar Sanseviero#6198: Hey there! Would you mind opening an issue in transformers with an example of this please? 🙏
ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: Yesterday I tried to load the 'srwac' dataset (Serbian/Croatian), which is apparently very large. It took me hours to load it into Colab, and eventually the connection broke. So I wonder, is there any way to load the dataset partially, like a smaller amount of it?
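🤗 Datasets supports both split slicing and streaming; streaming avoids downloading the whole dataset up front. A sketch (the exact dataset name/config may need adjusting):
```python
from datasets import load_dataset

# option 1: load only part of the split (still downloads the data files)
small = load_dataset("srwac", split="train[:1%]")

# option 2: stream and take the first N examples without downloading everything
streamed = load_dataset("srwac", split="train", streaming=True)
subset = list(streamed.take(10_000))
```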
Kiril Videlov#6597: For posterity, I found somewhat of an answer to my questions in this paper https://arxiv.org/abs/2207.14255
Owl#1746: Hi everyone, I'm currently writing my master's thesis. Does anyone know what the best AI tools are to reformulate and summarize texts? Because I have two old master theses where I can take a lot of the theory from. In German, otherwise it's not much use
key47#9790: Hey guys, I'm not sure if this is the right place for this question but..I'm looking to fine-tune a model to classify occupation codes from a job posting/listing, I have a dataset of 2M labelled job postings, and I'm wondering which model & task would be the best place to start this project
ᴛᴇɴꜱᴏʀᴄɪꜱᴛ#0703: Try out summarization with PegasusXSum, I wrote an article about that kind of models: https://medium.com/artificialis/notes-on-abstractive-summarization-pegasusxsum-and-t5-497109d72029
Robert1#0234: I have a GPT-J model and want to do inference with 2000-token inputs. It's very slow compared to 500-token inputs. My understanding is that my only option is to use a larger GPU, and that using multiple GPUs won't help me?
PantheRedEye#0901: Hello, how can I find out what arguments a model expects? For instance, `bert-base-uncased` expects `labels` but I have seen `label` used for other models?
PantheRedEye#0901: Started a thread.
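One quick way to inspect which keyword arguments a given model's forward pass accepts (a small sketch; the model name is just an example):
```python
import inspect
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
print(inspect.signature(model.forward))  # lists input_ids, attention_mask, labels, ...
```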
demre#4307: hi guys, Stefan & I have trained a large Turkish T5 and we want to try it on a downstream task via fine-tuning
demre#4307: do any of you have a Turkish dataset to fine-tune with, and a Colab notebook that I can try?
demre#4307: the dataset should be T5-friendly: summarization or paraphrasing
zenbakery#8321: Hey folks! what would be the best models / tuning methods to extract structured job information from career pages? (e.g. https://www.uber.com/us/en/careers/list/?query=)
maheshpec#9299: @NielsR_ sorry to tag you - didn't know who to contact.
1. For layoutlmv2 there's a head for relation extraction `LayoutLMv2ForRelationExtraction` (https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/models/layoutlmv2/modeling_layoutlmv2.py#L895). I didn't see the same thing for Layoutlmv3 (https://github.com/microsoft/unilm/blob/master/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py). Is there any specific reason for that
2. The model docs in 🤗 don't mention this training objective - https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/layoutlmv2#overview.
MvW#1425: I don't know of any Turkish specific dataset collection, but you can maybe take the Turkish part of XTREME-R (https://arxiv.org/abs/2104.07412). If I am not mistaken, the T0 project (https://github.com/bigscience-workshop/t-zero) proposes a system for mapping any natural language tasks into a human-readable prompted form. This could be of help turning them into seq2seq tasks.
Abraham Owodunni#5583: Hi everyone, please how do I translate from one non-English language to another non-English language using mT5 from the Hugging Face Hub?
There is example code for translation with monolingual models but none for multilingual models.
blackhc#4420: Is there a helper utility that makes it easy to extract common properties from transformer Configs in hf? Some of the configs have different names for the same thing, e.g. n_embd vs hidden_size
epicx#7921: What are the best parameters to get an absurd quantity of max_length text output?
Hypothetically, if I did want a model to produce a 50000 word story from a prompt, what might be the best way to do that keeping VRAM low?
epicx#7921: I had the same issue with a 6900 XT 16GB - a) you can try to fit the bloomz-7b1 model (it's smaller), b) search for Petals and try their bloomz implementation, c) try to get NVMe "disk" "offload" to work with "import accelerate".
Good luck
stascronberg#6495: Hi, I don't know if there's a study group going already for NLP 🤗 Transformers, but I would like to join/start one. Preferably an EMEA location based group to make planning meetings easier for everyone. My initial plan would be to work through parts of the HF Course / HF NLP Book also maybe do some mini projects and discuss important papers. I've already made a decent start in Transformers, kind of understand the different architectures, tasks and research datasets, but I'm happy for anyone of any level to join. Let me know if you prefer another approach, I'm open to discuss a different approach to learning together 🙂
toma#9910: this is one of the big challenges right now; unfortunately memory usage increases quadratically with the output length, so if you double the length the memory requirements quadruple.
This is quite an issue, but some models manage memory usage while handling many tokens, like LongT5, Pegasus, Reformer or Longformer. Right now I don't think you can generate more than 16k tokens in a single inference call, which is already very big.
You could make successive calls with the last output as input, tweaked a bit so it doesn't drift too far from the original subject.
toma#9910: yeah I think so. Using a custom device map to fit the maximum amount of memory on your GPU, or renting a bigger one, are solutions. Using multiple GPUs will help since you can split the work with a device map. Another option is to generate your output in multiple inference calls, tweaking the prompt between each.
ryaaan#3766: I am running the script `run_mlm_no_trainer.py` (https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm_no_trainer.py) to train a RobBERTa model. The idea is to start from an already existing Dutch RobBERTa model, perform a second pre-training on a specific topic (job vacancy text), and later fine-tune it on a specific downstream task.
The dataset I have contains one job vacancy per row. I was able to run the script with the `line_by_line` parameter, but I want to understand if that is the right way to do it. It would be really helpful if you could kindly answer, or point me to somewhere in the documentation on when to use it for the language training task.
Robert1#0234: hi, I have the tokens and logprobs and want to fine-tune gpt2 on those logprobs rather than the text dataset. Is there a convenient way to do this? Can I just set the labels to 0.4882 (the raw probability target)?
silvos#5002: Hello! I wanted to gauge interest in datasets like SQuAD, and context / reading comprehension. Is this something still of interest to people? What applications would you have in mind for it?
apohllo#2383: @silvos sure, I think so. Models for QA like Atlas use a retriever and a reader, and you can train them both on datasets like SQuAD. Yet I think people use Natural Questions and TriviaQA more often, since SQuAD was not created naturally and the questions are too tightly bound to the context. Moreover, I think SQuAD has reached a point where there's not much progress regarding SOTA, so it's too easy for current models.
apohllo#2383: But for models like ChatGPT, QA datasets are a must.
NielsR_#8974: You can do that, similar to distilGPT2. Details here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation. Basically it's a student-teacher setup, where the student learns the probability distributions of the teacher
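For reference, the core of that student-teacher objective is a KL term between the two distributions; a minimal, self-contained sketch (the temperature value is illustrative, and in practice this is usually mixed with the ordinary cross-entropy loss):
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # both tensors: (batch, seq_len, vocab_size)
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # KL divergence between teacher and student, scaled as in Hinton et al.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t ** 2)
```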
silvos#5002: Thanks. Do you mean that models like chatGPT still use SQuAD during training (as well as other datasets)?
apohllo#2383: Not sure about ChatGPT, but Atlas, which beats GPT-3 on QA, uses NQ and TriviaQA as far as I remember. Flan-T5 used more than 100 datasets for QA, reading comprehension, and reasoning.
satsuroki#3326: has anyone worked on real-time speech transcription models?
orgrim#1636: Not sure whether it'll actually work with that setup, but try out the huggingface accelerate library
z3rkn#6630: Hi everyone, are there any small language models (based on LSTMs, transformers, etc.) that can be trained from scratch on a mid-range GPU for text generation tasks?
MonsterMMORPG#2198: which punctuation restoration model is best currently, to fix the output of Whisper? e.g. this paragraph:
so I will open the horse image that I am going to use from here so this is the image and
it is loaded okay then I will enter my prompt okay I have used this command convert the
photo of a horse into an anime let me zoom in to zoom in click press keep pressing ctrl
button and use your mouse wheel like this then when you click the show icon here I know
it is small it will open the original image in the left and the generated image in the
right by the way I am using 512 by 512 image it supports any resolution by but I suggest
you to use 512 by 512 image then use AI upscalers to increase its size but you you are free
to try bigger resolution any resolution image as well it is not very fast so this is the
MonsterMMORPG#2198: is felflare-bert-restore-punctuation the best?
Martin9295#3755: Hi, I need some advice on information extraction.
Context:
I have a lot of public business documents in PDF format (80+ pages each). However, their layout is complex and contains text as well as images and tables. I want to extract concepts (entities) and relationships from the PDFs. Both can occur in both text and tables. The end goal would be to map concepts and relations to a custom knowledge base.
Procedure so far:
I have tested several libraries for text and table extraction (pymupdf, pdfplumber, pdfminer, tika, tabula and camelot). However, the results for table extraction are far from perfect. So instead I converted the PDF pages to images. Then I did a layout analysis with PaddleOCR to identify and extract text, figures and tables for each page. The result is quite good so far, except for a few misclassifications. Now I am working on a concept to identify entities and relations and extract both. According to my research, many approaches are based on either text or tables. Now I have come across models like TaBert, TaPas and TABBIE that can learn representations for both.
Questions:
- Do any of you have experience in extracting information from text and tables?
- Are models like TaBert suitable for my purposes?
- Alternatively: Could models from the field of document understanding be an alternative approach (e.g. LayoutLM variants)?
- Or does anyone have a suggestion on how to proceed?
Would appreciate any advice. Thanks in advance!
ClumsyTurtle88#3405: https://paperswithcode.com/paper/cramming-training-a-language-model-on-a
ClumsyTurtle88#3405: I'm not sure what you mean by mid end, but this paper should have lessons regarding resource-limited training procedures.
pikaduck#7955: I have worked quite a lot with data extraction from documents that have semi-tabular or full-tabular structure. There are multiple ways of approaching this which heavily depends on the resources you have at hand and if your use case is a commercial use case. Please let me know if you are open to discussing the possible ways of implementing by sharing what resources are accessible to you in terms of data preparation, etc.
pikaduck#7955: The models you've mentioned are not bad but they do have better alternatives, but again that depends on what resources you have.
Martin9295#3755: The use case is academic in nature. I am testing the applicability of automatic approaches to information extraction within a predefined domain. I need to investigate what challenges arise and where manual intervention may be required. Ideally, this will result in a prototype that covers the necessary steps.
I need to investigate which tools/libraries/models I can use. At the moment I am restricting myself to free / open source libraries.
Can I send you a direct message? Then I might be able to give you more details.
pikaduck#7955: Sure. I do have quite a lot of resources that you might find helpful.
key47#9790: Hey, I messaged here a while ago about building an occupation coding model using a sequential model or BERT. I ended up building a model that achieves around 86-90% accuracy for just job titles, and another for job titles & descriptions at around 76-80% accuracy. I used a sequential model with an embedding layer, Conv1D, global max pooling, dropout, and softmax layers. Also, I used GloVe word embeddings for this. Despite dumping 2 million more postings into the model, I can't seem to increase the accuracy for job titles + descriptions. Does anyone have a minute to DM so I can ask a few questions and maybe share my architecture for review?
key47#9790: To clarify the context, I can't seem to get my BERT model into the same ballpark as the model built from scratch with Keras. I wasn't asking for help with my Keras models.
MonsterMMORPG#2198: Started a thread.
Eduardo Alvarez#9887: What is the best foundational model on HuggingFace to build general purpose chatbots? Not looking for anything perfect, just curious what the community thinks.
XcessiveSwag#9306: Hi, I have a very specific question. I'm looking at this page for executing "few-shot learning" on GPT-Neo https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api using the accelerated inference API. I do not want to use the API and run this query locally. I understand I have to use model.generate and set those parameters. However what is the corresponding parameter for "End Sequence"? when using model.generate?
XcessiveSwag#9306: Again, I have the model in memory and can generate text, just cannot figure out how to specify the End Sequence as is done in the link above
XcessiveSwag#9306: I'm using GPT-J instead of Neo but I doubt that matters
Vadi#3695: Also interested to know this
Vadi#3695: @XcessiveSwag I worked it out - neither 125M nor 1.3B model support `end_sequence`. Just 2.7B does.
XcessiveSwag#9306: Oh damn. And this is for gpt neo?
XcessiveSwag#9306: And so when hit generate you just do end_sequence = "###" ?
Vadi#3695: I think so. I can only load 1.3B in my GPU. I worked this out by playing around with the web api of huggingface
XcessiveSwag#9306: Got it, thanks a bunch man! I'll test it out later today
Vadi#3695: Nevermind that, I did fit 2.7B but end_sequence is still not supported. So for some reason it's only supported in the 2.7B web API and no other web APIs
XcessiveSwag#9306: Hmmm dang
XcessiveSwag#9306: There must be some way to recreate the behavior tho
jeremy-analytics#8398: i just dealt with this problem. how i solved it was by doing a few-shot prompt for Q&A with a `\n\n` between each instance. then in the code i just split the completion on this and took the first one.
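A minimal sketch of that workaround (the checkpoint and the few-shot prompt below are placeholders, not the exact setup from the blog post): generate as usual, drop the prompt tokens, then cut the completion at the first separator.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical few-shot prompt: instances separated by a blank line ("\n\n")
prompt = (
    "Q: What is the capital of France?\nA: Paris\n\n"
    "Q: What is the capital of Spain?\nA: Madrid\n\n"
    "Q: What is the capital of Italy?\nA:"
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=32, do_sample=False)

# Keep only the newly generated tokens, then emulate "End Sequence" by
# splitting the completion on the separator and keeping the first chunk.
completion = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True)
answer = completion.split("\n\n")[0].strip()
print(answer)
```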
jeremy-analytics#8398: now, i'm trying to figure out what a good set of parameters are to generate text using flan-t5-xl. It ends up repeating the answers from my few-shot prompt.
Does anyone have good example prompts and settings to generate text for flan-t5-xl?
jeremy-analytics#8398: it seems like shorter prompts work better for flan-t5 ?
satsuroki#3326: hello, I'm trying a t5 model from huggingface, `t5-large` specifically, for small translations. I built a client-server setup to communicate, but it doesn't seem to work on the server side, while on the client side the translation is done. When I print the text given to the model I have: `translate from German to English: german_sentence_here`
satsuroki#3326: Here is what I wrote ```
from tf_transformers.models import T5Model
from transformers import T5Tokenizer, T5ForConditionalGeneration
from transformers import pipeline

# set the model type
model_name = 't5-large'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
...
# dialogue with client
enc = "You are connected now you can start chatting".encode()
con.send(enc)
clientMsg = con.recv(2048)
while 1:
    # translate the text message received
    clientMsg = clientMsg.decode("utf-8")
    clientMsg = "translate {} to {}: {}".format(correspondant_language, talk_language, clientMsg)
    print(clientMsg)
    clientMsg_ids = tokenizer(clientMsg, return_tensors="pt").input_ids
    outputs = model.generate(clientMsg_ids)
    print("ClientMessage>", tokenizer.decode(outputs[0], skip_special_tokens=True))
    ...
```
satsuroki#3326: it's almost the same on client side
toma#9910: do you get an error in particular ?
toma#9910: @satsuroki
satsuroki#3326: no I don't have any error I just don't have the translation server side
satsuroki#3326: the message is printed as written client side
jeremy-analytics#8398: does the client side message include both the prompt and the translation?
> generated_text = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:])[0]
try something like that?
cphoover#9606: Does anyone know of good pre and post-processing techniques to improve the accuracy of Named Entity Recognition inference tasks? I have tested my pipeline with large transformer models, as well as base models, and Spacy's `en_core_web_sm` model. I like the performance traits of the smaller models, but there are some weird occurrences with the results such as partial matches on names, and misidentification of orgs. ... I clearly need to do cleanup on the data even with the large models.
Robert1#0234: i have a fine tuned GPTJ model
I want to use this for generations but also for sequence classification of the generations
is there any way I can stick a gpt2-small model on top of the gptj model and use that for sequence classification (and train it whilst holding the gptj model weights fixed)
anyone know this kind of thing and want to pair program on it?
Robert1#0234: I don't want to use a separate GPTJ model for classification because it would be slow - but I don't want to use a separate gpt2 model because I want to use the knowledge inside the GPTJ network
stascronberg#6495: Quite an interesting seminar that came out yesterday on LLMs from Stanford
https://www.youtube.com/watch?v=-lnHHWRCDGk
satsuroki#3326: No the prompt is added when printing the message server side. what is input_ids here ?
jeremy-analytics#8398: https://mcminis1.github.io/jekyll/update/2023/01/27/model-comparison.html
check out the first section "Huggingface"
> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
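One caveat, hedged because it depends on the architecture: slicing off `input_ids.shape[1]` tokens is for decoder-only models (GPT-2/GPT-J/GPT-Neo), which echo the prompt in their output. For an encoder-decoder model like `t5-large`, `generate` returns only the target-side tokens, so decoding `outputs[0]` directly should already give just the translation, e.g.:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

# One of the translation prefixes T5 was actually trained with
text = "translate English to German: The house is wonderful."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# No prompt stripping needed: the output contains only the translation
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Also worth noting: the stock t5 checkpoints were pretrained with English-to-German/French/Romanian prefixes, so a "translate German to English" prompt may not produce a real translation out of the box.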
jeremy-analytics#8398: Started a thread.
Radi Cho#7753: Hi folks,
I am currently hosting the "Multilingual Spell Correction" ML Olympiad, which is available at https://www.kaggle.com/competitions/ml-olympiad-multilingual-spell-correction/overview
The goal is to showcase a real-world problem (reconstructing texts from noisy data sources) and also challenge the participants to work with a more diverse set of languages. The dataset used is constructed to cover representatives from the language families used across Europe: Germanic - English, German; Romance - French; Slavic - Bulgarian; Turkic - Turkish;
Radi Cho#7753: Feel free to join as a participant or spread the word by sharing:)
https://twitter.com/radi_cho/status/1620909895041302528
G(r)EEK#4286: has anyone here worked with switch transformer from hf and can perhaps answer : https://discord.com/channels/879548962464493619/1070684253156814879 ?
Alan Kwan#7972: Does anyone have any recommendations for mass extracting the contents of tables from financial text? Think of a 10-K or 10-Q quarterly report and I just want to do data entry. Right now I’m trying
1. table document AI solutions (which is best? Layout? Donut?)
2. tesseract/other OCR + GPT 3.5
wondering if there’s another approach
Pratibha#7658: Hi all,
Please can someone suggest what model to use to detect non-inclusive words in a dataset? For example, in "hey guys" the model should detect "guys" as non-inclusive and suggest "folks" instead.
zz99mz#7105: 1387: UserWarning: Neither `max_length` nor `max_new_tokens` has been set, `max_length` will default to 20 (`self.config.max_length`). Controlling `max_length` via the config is deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
What does this mean? I have them both set in my code? It's the flan t5 xxl model
Pratibha#7658: Please can someone help me with this
toma#9910: you should pass max_new_tokens to your model.generate call, that’s just a warning
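For reference, a minimal sketch of passing the limit straight to `generate` (the checkpoint here is just a small placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")

# Passing max_new_tokens in the call (rather than relying on config.max_length)
# silences the warning and caps only the newly generated tokens.
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```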
toma#9910: you don't need machine learning, just a table of words to replace
CharlieNLP#2270: sounds like a token classification task to me. What kind of annotated data do you have?
Pratibha#7658: Thanks for replying
But my data is huge; it's chat messages. From this I want the model to detect such words and predict an appropriate replacement.
Pratibha#7658: Thanks for replying,
The data is in the form of chat message.
toma#9910: yeah CharlieNLP is maybe right
toma#9910: you will need a labeled dataset to train on
CharlieNLP#2270: yeah, you'll probably need to manually annotate at least a bit of it. If you only have a very limited set of data, you can try prompting
satsuroki#3326: @zz99mz I think it's set when declaring the tokenizer: `tokenizer = T5Tokenizer.from_pretrained(model_name, model_max_length=the_length_here)`
satsuroki#3326: what's instruction fine-tuning exactly? I don't see any paper on it. What's the difference compared to just fine-tuning T5?
toma#9910: model_max_length is relative to the model, not the tokenizer; it's the maximum number of tokens that the model can handle in a generate() call
Masenko72#0138: Hi!
I'm currently developing a chatbot using RASA, and for a specific part of the dialog flow I'm developing an IR + QA system.
The idea is to search across a bunch of documents and, once the specific document is retrieved (because the QA system needs context), use the QA system to answer the question.
All works fine, but my problem is that I don't want a short answer like "year 1789", because it doesn't feel natural as a chatbot answer.
How can I generate or retrieve the full answer (e.g. "the French Revolution took place in 1789")?
Thank you in advance!
toma#9910: hello, what you want to do is closed-book question answering
you need to download a pretrained model like t5, then fine-tune it to produce long answers with a dataset containing long answers
since your task is very close to the main task of t5, you actually don't need to finetune it; when calling model.generate you can pass min_length = 20, for example, and that will do the trick
toma#9910: https://huggingface.co/models?pipeline_tag=question-answering&sort=downloads
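As a rough sketch of that suggestion (the checkpoint and the prompt are placeholders; `min_length` counts generated tokens, so it nudges the model toward a full sentence instead of a bare date):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

question = "Answer in a full sentence: In which year did the French Revolution begin?"
inputs = tokenizer(question, return_tensors="pt")

# min_length discourages one- or two-token answers; max_new_tokens caps the length
outputs = model.generate(**inputs, min_length=20, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```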
ᵖᵘ®ᵖˡᵉ-ᵐ¡ⁿˣ💜#0448: Hello!
I'm working on custom-training the OpenAI Whisper model. Could someone tell me the prerequisites for custom-training a pre-trained whisper model (small/medium), like the dataset size, hparams, epochs, etc.?
ᵖᵘ®ᵖˡᵉ-ᵐ¡ⁿˣ💜#0448: Can anyone please help me with this!
cakiki#9145: Hi there! I would ask you not to cross-post the same message across multiple channels 🙂
ᵖᵘ®ᵖˡᵉ-ᵐ¡ⁿˣ💜#0448: Ok sure:-)
cakiki#9145: cc @VB who might have a useful resource for you
ᵖᵘ®ᵖˡᵉ-ᵐ¡ⁿˣ💜#0448: Sure. Thanks for the help!
cakiki#9145: sure thing. Have you come across this: https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition Seems to have been used in a recent whisper finetuning hackathon.
Masenko72#0138: thanks!!! Im going to take a look!
silvos#5002: Julien Simon wrote a couple of really great blogs on doing distributed training (part 1) - https://huggingface.co/blog/intel-sapphire-rapids, and just recently on inference (part 2) - https://huggingface.co/blog/intel-sapphire-rapids-inference on the new Intel 4th generation Xeon CPUs here. Really excited about this! Also, Julien posted a YouTube video on his own YouTube channel to explain further. https://www.youtube.com/watch?v=-q9rmF6fK_w
Rajko Radovanović#8407: hey hey - trying to make a tool that uses GPT-3 to summarize hackernews comments for me... have done this by hand and it works well, but need to automate:
1) scrape page, clean out comments (assuming beautifulsoup)
2) break comments into chunks with max 3K tokens
3) load chunks into LLM and generate summaries + store/version prompt for this
4) Load summaries into LLM to generate a meta-summary + store/version prompt for this
Any recommendations for OSS e2e example project that already does something very similar? Or should I just go read docs and get started? Just want to be efficient with my time 🙂 Also, anyone have suggestions for better workflows vs. the above?
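As a sketch of step 2 from that list (the 3K-token budget comes from the message above; the GPT-2 tokenizer is only an approximation of GPT-3's, and the summarization calls are left as stubs):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # close enough to GPT-3's BPE for budgeting
MAX_TOKENS = 3000

def chunk_comments(comments, max_tokens=MAX_TOKENS):
    """Greedily pack whole comments into chunks that stay under the token budget."""
    chunks, current, current_len = [], [], 0
    for comment in comments:
        n = len(tokenizer.encode(comment))
        if current and current_len + n > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_len = [], 0
        current.append(comment)  # note: a single comment longer than the budget is not split further
        current_len += n
    if current:  # flush the last partial chunk
        chunks.append("\n\n".join(current))
    return chunks

# chunks = chunk_comments(scraped_comments)              # step 2
# summaries = [summarize(c) for c in chunks]             # step 3: per-chunk LLM call
# meta_summary = summarize("\n\n".join(summaries))       # step 4: second pass
```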
jeremy-analytics#8398: look at gpt_index.
MonsterMMORPG#2198: Anyone tested new audio thing and compared whisper
Gnom#8545: Hi, very sorry to be "that guy"
I trained a BERT MLM and got my .bin + config file using torch.save(model). ~~Is that not sufficient to load it up again?~~ Loaded using the doc: <https://huggingface.co/transformers/v1.2.0/serialization.html#serialization-best-practices>
I wanted to do some benchmark on it. Is there a simple benchmark to try? :)
satsuroki#3326: what are SOTA papers on speech recognition today ?
stascronberg#6495: If you haven't looked at papers with code, this is probably a good starting point
https://paperswithcode.com/task/speech-recognition
sajal2692#0662: Any recommendations for a non-ML approach to identifying entities (a list of ~10k+, e.g. "antique wooden table") in strings? Something like spaCy PhraseMatchers, but more tolerant of partial matches ("wooden table") and unordered tokens ("wooden antique table").
Stable Bob#6042: Hi, I'd just like to share my video about Chat GPT-3 called "Perfect Math with Chat GPT3" https://www.youtube.com/watch?v=1JJqYyNJEeI I hope you find it amusing.
satsuroki#3326: thank you. So what would the sort criterion be, the github star count? Because I don't think a paper being recent means that it's SOTA. And now I am trying `speechT5`, which is not that bad, even if I haven't computed any metric yet
toma#9910: spacy word similarity ?
Yonz#3679: Does anyone else experience names behaving like they are the same when doing embeddings ?
When doing semantic search we started seeing that the model was thinking ChatGPT and WebRTC were similar things.
Any solutions for making NERs class names?
Dr.Inc#8332: I was wondering if there is a good guideline on how to deploy a BERT model.
Abraham Owodunni#5583: Please, I'm looking for a notebook that fine-tunes a Whisper (or any) model with the HuggingFace Trainer and loads the data in streaming mode.
If I try to run the trainer with the dataset `next(iter(train_ds))`, I get a NotImplementedError.
toma#9910: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
itami#9685: Hey guys, my obeisances to the learned ones here. I have a small doubt: I am training a whisper-small model and I want to visualize the training with tensorboard. The **report_to=["tensorboard"]** feature in Seq2SeqTrainingArguments exists, but I'm not sure how to make use of it, and whether I have to use the auth token in HF and visualize it in the remote repository at HF. Also, I will have to run this script over remote SSH, so could I have a pointer on how to go about visualizing the losses while the model trains?
no one#5859: Hello, so I have been working on an AI project lately, but the problem is that I can't be confident whether I am using the right tool or not.
So, getting to the topic:
I am using bb 98 to process the request. In addition to that, I wanted to add sentiment analysis, custom NER and a custom dataset (e.g. Mr X never existed in the bb98 dataset, so I want to add info about Mr X) without altering the existing one; however, I have barely found any information that could give me direction, since all the tutorials are about fine-tuning it on different languages.
After all, idk if I need to create a different model for each and run them sequentially (which is incredibly inefficient), or wait until I know how to implement it on bb98, or whether there's a better alternative to bb98.
Feel free to ping me
satsuroki#3326: if you get an answer, can you let us know why it happened?
satsuroki#3326: is there an open dataset of reference-hypothesis pairs for any language pair (en->de, en->fr, fr->en...) for machine translation, to compute a BLEU score?
Ani42#3643: I have a question about using the trained model for inference(pytorch mode). This is based on this chapter https://huggingface.co/course/chapter7/4?fw=pt
I trained a simple model and then saved it in a local folder. Now I do not want to use pipeline from transformers; I would like to run the text through the tokenizer, send that to the model directly, get the logits or the output tokens and then decode them (for learning purposes). So I instantiated the model with AutoModelForSeq2SeqLM.from_pretrained() with my locally saved model and then tried to run it, but it always kept saying I also need to provide decoder_input_ids. Since I am inferencing, I didn't think I needed to provide this, since that is the prediction I am expecting. The way I solved it is by using AutoConfig and then sending the config to AutoModelForSeq2SeqLM, and then it worked.
My question is that is this the right approach? Or there is a better way
Abraham Owodunni#5583: Please I have a question about the HuggingFace library.
For binary classification, do people use a sigmoid on an output layer of dim 1 to compute metrics, or do they use argmax on an output layer of dim 2?
sebbi#8389: Hello, I also have a question: I don't seem to see a "formal" (formality) parameter on the T5 model used for translation. Are there any other trained models that have one? I know it exists on DeepL or Amazon's services (but not Google's, for example). So I was wondering if anyone had made one on huggingface
IMJONEZZ#7844: Hey guys, weird question, but does anyone know why return_tensors=“pt” changes the squeezing/unsqueezing of the tokenization and why the datasets library struggles to keep those tokens as tensors (keeps randomly switching to strings)?
ramu1115#9047: to get logits, and the algorithm can understand only numbers, not text
Joriczmx#9906: Hello guys, I'm relatively new to this exciting ML topic. I want to make a Q&A chatbot trained on specific documents from my job (specs, procedures, etc.). What is the best approach, fine-tuning or semantic search?
ramu1115#9047: hi @Joriczmx, the best way to build a chatbot using the hugging face library is to use the chatgpt2 pre-trained network model; that way you can build it easily
Joriczmx#9906: Oh, thanks @ramu1115
ramu1115#9047: can anyone tell me ways to run my model fast, free of cost?
Pratibha#7658: Hi all,
Good day
Please can someone help me with how to detect non-inclusive words in a document or chat message?
Any colab links or suggestions
Dr.Inc#8332: I was wondering if there is any notebook that will pretrain a T5 model.
Sigma_Vivek#5582: I'm looking for a developer who can help me create a Discord bot that uses OpenAI for natural language processing. The bot will be used to interact with community members and provide helpful responses based on project-specific data. Here's what I'm looking for:
A dashboard that allows me to upload project data and fine-tune an OpenAI model accordingly.
The dashboard should also create a new Discord bot with a unique name for each project.
The ability to fine-tune the OpenAI model on project-specific data to make it specific to the project's domain and language patterns.
Configuration of the bot to use the fine-tuned models for natural language processing and responses.
A seamless user experience for community members interacting with the bot, with natural language responses that feel relevant to the project.
If you have experience with creating Discord bots and using OpenAI for natural language processing, I'd love to hear from you! Please provide examples of similar projects you've worked on, and let me know how you would approach this project.
cakiki#9145: Please do not cross-post the same message to multiple channels, especially channels that aren't relevant to the message
Dr.Inc#8332: How does prompting a decoder model differ from prompting a Sequence to Sequence model, such as a Flan-t5 model ?
Robert1#0234: I want to create custom callbacks for transformers trainer.
But I don't want to work out the logits each time on the evaluation data. I would rather reuse the results / logits produced during evaluation.
Is this possible / is there a simple way to do this?
satsuroki#3326: is it not better to ask this in #dev-discussions ?
IMJONEZZ#7844: Maybe, I thought I’d ask here because it only happens in the tokenizers and collators classes, so no one outside of NLP has a lot of reason to have touched them.
SwarmCognitionLocus#7451: How do I get accelerate to assign entire model to my GPUs? Device_map="auto" insists on using my CPUs for decoders.
SwarmCognitionLocus#7451: Would appreciate any suggestions.
nyadla-sys#4381: Has anyone converted the HF Whisper model to TFLite with multilanguage support?
dackdel#4651: have you guys seen this https://www.youtube.com/watch?v=3f9YGdYIOXQ
lunarflu#6769: Interesting result from chatgpt;
Text prompt -> Text output = no issues
Text prompt -> Code output = no issues
Text prompt -> Code that generates graphics, no issues
Text prompt -> Using text itself as the building block for graphics = 5 minutes of thinking
Maybe because it understands the relationships between text prompts and appropriate code to draw something, but it doesn't have deep knowledge of how characters look to us on a screen. Could be trouble with formatting too. https://cdn.discordapp.com/attachments/922424173916196955/1079721502506750052/image.png
Q'rae#0310: Trying to run the example code for flan-t5-xxl:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Getting this error and not sure why:
```
TypeError Traceback (most recent call last)
<ipython-input-9-ea127ec1792a> in <module>
1 from transformers import T5Tokenizer, T5ForConditionalGeneration
2
----> 3 tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
4 model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto")
5
TypeError: 'NoneType' object is not callable
```
Q'rae#0310: Guess maybe there isn't a flan-t5-xxl tokenizer? Changing it to flan-t5-xl seemed to work for now
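For what it's worth, `flan-t5-xxl` and `flan-t5-xl` share the same tokenizer files, so the model size is unlikely to be the cause. In many reported cases, `T5Tokenizer` ends up as `None` (hence `'NoneType' object is not callable`) when the `sentencepiece` package isn't installed in that environment; that is an assumption about the cause here, not a certain diagnosis. A quick thing to try:
```python
# pip install sentencepiece protobuf   # the slow T5 tokenizer depends on these

from transformers import AutoTokenizer

# AutoTokenizer prefers the fast (Rust) tokenizer when available,
# which sidesteps the sentencepiece requirement entirely.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
print(tokenizer("translate English to German: How old are you?").input_ids)
```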
Robert1#0234: i trained an fp16 sequence classification model (gpt2).
I find my deployment performance is much better when I
```python3
1. torch.inference_mode()
2. .half() precision it before using
```
whats going on here? i want to learn why its so much better
its nothing to do with latency here. I mean the accuracy is much better
Robert1#0234: is it because im using the same precision that I trained with?
or is it because not using torch.inference_mode() damages performance somehow? I don't see how, because it's just computing gradients
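A plausible explanation, hedged because it depends on how the deployment code is written: `torch.inference_mode()` only turns off autograd bookkeeping, so by itself it shouldn't change the numbers; what usually changes predictions is whether `model.eval()` was called (dropout off) and whether the dtype matches the one used in training. The usual inference setup looks roughly like this (the checkpoint path is a placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_path = "path/to/finetuned-gpt2-classifier"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

model.eval()           # disables dropout; this is what usually changes predictions
model.half().cuda()    # run in the same fp16 precision used during fine-tuning

inputs = tokenizer("example text to classify", return_tensors="pt").to("cuda")
with torch.inference_mode():   # skips autograd; affects speed/memory, not accuracy
    logits = model(**inputs).logits
print(logits.softmax(-1))
```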
Joriczmx#9906: Hello guys, do you know if it is possible to use Google's Minerva as a pipeline in a chatbot?
Karin#4781: Excuse me everyone https://huggingface.co/datasets/katanaml/cord
Does anyone here know how to make labels for this CORD dataset?
am93#5162: A basic question: if I replace a model's classification head with a custom classification head, do I always have to train the model? Specifically, I'd like to change the model so it doesn't predict e.g. 3 classes, but instead gives me a single value based on a sigmoid
Sahit#2159: Hi I'm trying to figure out the License of this model: https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1 Can I assume its Apache 2.0?
Nima#7048: Yup, I think that's a safe assumption since all the other sentence-transformers models (and the SBERT library) are Apache 2.0 as far as I know. Looks like someone opened a discussion about this (https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1/discussions/3), so hopefully the author replies!
MikeMik#4199: Should the following task:
{
“Short question, e.g. “Are cats mammals?”: “yes, cats are mammals because… (explanation here)”
}
be considered a “completion” (text generation) task or rather a “text2text” generation one?
satsuroki#3326: in ASR, is a CNN the acoustic model (on top of which we need to add the LM), or can we go directly from speech to text?
ayaka#7682: Jing Hua and I made a free ChatGPT based on the ChatGPT API: https://freechatgpt.chat/
isham#5640: Guys I made this chatbot using ChatGPT API and Gradio: https://github.com/di37/chatbot-chatgpt-api
tetrismegistus#7044: I had a lot of fun hooking up the API to my discord bot too
tetrismegistus#7044: even implemented conversation history for "memory"
satsuroki#3326: I tried and it's :hugging_cool:
Konrad Szafer#2118: Hi everyone! During development of our open source project related to Hugging Face, which we hope to publish here soon, we created the first Linux-related dataset on the platform :hugging_happy: https://huggingface.co/datasets/KonradSzafer/stackoverflow_linux
isham#5640: can we fine tune GPT-NEO models for our purpose on this?
isham#5640: the dataset looks great
Konrad Szafer#2118: yeah, you can do that
Shamima#6358: Hi, I'm using Stable Diffusion in an app that I deployed to production for users to use. I used the stable diffusion pipeline to do so, but I see that we can use it both with and without generating an access token. I did not use any access token to deploy it; will model calls be rate limited if I don't? What is the recommended way to deploy models in production, with access tokens or without?
satsuroki#3326: Hello, has anyone tried to use an end-to-end system for live speech recognition?
BLT#7913: U can use whisper to do that. Near real time
tech n9ne#8572: Hi guys, I want to find the similarity between two sentences given a list of around 7k sentences. I am planning to use a pretrained sentence transformer model and then fine-tune it with the sentence compression dataset. Now, how do I know if my model is working well? I mean, is there any measure to test its accuracy, etc.?
yathin#9214: Hi guys, I want to work with the BERT model to feed it the content of a book after preprocessing it, and then be able to ask questions about that content. I'm very new to ML; can someone suggest resources that will help me understand how to fine-tune a model and prepare a proper training set for the context?
ramu1115#9047: Use question Answering pipeline of hugging face transformers
yathin#9214: The fine tuning part?
ramu1115#9047: For fine-tuning, select the model and fine-tune it on your dataset
yathin#9214: @Joriczmx hey even i want to work on a similar project did you make any progress?? Im not sure about fine tuning the model
Dr.Inc#8332: I was wondering if the Hugging Face hub could have a filter for Fusion-in-Decoder models. https://huggingface.co/Intel/fid_flan_t5_base_nq . It took me 5 minutes to find this model.
kd90138#9368: can anybody help me with pipelines and keypairdatasets?
kd90138#9368: `for output_dict in tqdm(KeyPairDataset(dataset, "text", "text_pair")):
    han_comment = translator(output_dict["text"])
    output_list.append((han_comment, output_dict["text_pair"]))`
I'd like to do something like this and i'm wondering the proper way to do it
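`KeyPairDataset` is mainly meant for pipelines that consume (text, text_pair) inputs, e.g. pair classification. If the goal is just to translate the `text` column while carrying `text_pair` along, one option (a sketch, with a placeholder translation checkpoint) is to feed a single-key `KeyDataset` through the pipeline so it can batch internally, then zip the results back with the untouched column:
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

translator = pipeline(
    "translation",
    model="Helsinki-NLP/opus-mt-zh-en",  # placeholder checkpoint; pick the right language pair
    batch_size=8,
)

# The pipeline iterates and batches over the "text" column on its own
outputs = translator(KeyDataset(dataset, "text"))

output_list = [
    (out[0]["translation_text"], pair)
    for out, pair in zip(outputs, dataset["text_pair"])
]
```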
key47#9790: I'm having some issues with a clustering algorithm I'm applying to a database of company names, roughly 1.5 million rows. I know this is off-topic, hence the channel I'm posting it in. I'm trying to identify duplicates & franchises of the same companies; for example, McDonald's & McDonalds are separate entities in the database, and using the TF-IDF vectorizer, pairwise_distances, and AgglomerativeClustering I can label the clusters & identify matching companies. I can't seem to get the labels to persist through to new data though; it seems like every time I want to check a new company name, I have to re-run the full function on our entire database to see if the new company matches. Here is a quick snippet of code:
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import pairwise_distances
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(df['business_titles'])  # the dataframe with business info
distances = pairwise_distances(X, metric='manhattan')
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=0.95, linkage='complete', affinity='precomputed')
clustering.fit(distances)
df['clusterlabel'] = clustering.labels_
Thank you to anyone willing to help or take a peek quickly!
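One way to avoid re-running the whole clustering for every new name, offered as a sketch rather than a definitive fix: keep the fitted vectorizer, TF-IDF matrix and labels around, and assign an incoming name to the cluster of its nearest existing neighbour, falling back to a periodic full re-fit for genuinely new companies. The 0.95 cut-off below just mirrors the distance_threshold already used above.
```python
from sklearn.neighbors import NearestNeighbors

# Fit once on the full database, reusing X, vectorizer and df from the snippet above
nn = NearestNeighbors(n_neighbors=1, metric="manhattan").fit(X)

def assign_cluster(new_name, threshold=0.95):
    """Return an existing cluster label for new_name, or None if nothing is close enough."""
    vec = vectorizer.transform([new_name])
    dist, idx = nn.kneighbors(vec)
    if dist[0, 0] <= threshold:
        return df["clusterlabel"].iloc[idx[0, 0]]
    return None  # treat as new; queue it for the next full re-clustering
```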
Alex . aribo.eth#2870: Is the LLaMA Stanford's Alpaca fine-tuned 7B model available?
StellaAthena#3530: Not publicly (though some people do have access to it already). Publicly releasing it would be a blatant violation of OpenAI's ToU
awk#2557: Can anyone give me a good tutorial/guide on prompt tuning with a completely open-source pretrained model?
I have a question-answering problem in a specific domain. I can build the training datasets, but I need guidance on how to tune the model further.
nx7-g3n#2617: Started a thread.
OliP#6390: Did anyone compare the new peft package with the more established adapter-transformers? Or has experience in both, e.g., by using LoRA with each of the packages? I am just starting to look into the topic and wonder what the key differences are.
One immediate difference: adapter-transformers essentially copies and modifies transformers code, while peft is a modular extension, so I'd guess that the latter is easier to maintain and has a brighter future.
kd90138#9368: Does anyone have experience with fp16 nllb distilled model inference?
toma#9910: y maybe
thoraxe (Erik Jacobs) [Red Hat]#4940: so i'm trying to do some custom training for a bloom model to do summarization/generation. I'm trying to figure out how much system resources I might need for it. This laptop has 32Gi of memory and no GPU and when I try to train with CPU it uses up all the system ram and then the kernel dies 🙂
toma#9910: training on cpu is very slow on big models and training requires at least 4 times the RAM space used by the weights, you might want to rent cloud computing online in order to train your model, or find another model that suits you best
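As a rough worked example of that rule of thumb (assuming plain fp32 Adam training, which is the worst case): a ~1.1B-parameter bloom checkpoint needs about 1.1B x 4 bytes ≈ 4.3 GB for the weights alone, and weights + gradients + Adam's two moment buffers come to roughly 4 x 4.3 ≈ 17 GB before counting activations and the data pipeline, which is consistent with a 32 GB laptop running out of memory.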
clockwork#2792: Are there any successful attempts at using state space models instead of longformer for the natural questions dataset?
thoraxe (Erik Jacobs) [Red Hat]#4940: I know some of these words. _the weights_ -- what weights are you referring to? there are no weights in any of my settings
thoraxe (Erik Jacobs) [Red Hat]#4940: I have a GPU (3080) on my windows machine but it seems like a fair bit of effort to get CUDA working on Windows. I do have access to some cloud GPU
toma#9910: the weight file is the biggest file in the model folder, it is just a bunch of numbers
https://huggingface.co/bigscience/bloom-560m/tree/main
there is one weight file per framework, for pytorch it is pytorch_model.bin
if you have gpu at home you should try to make it work by installing cuda, even if its a bit of a pain. with pytorch it comes with the package at the installation, refer to the quickstart page and feel free to ping me anytime
thoraxe (Erik Jacobs) [Red Hat]#4940: sure thing
thoraxe (Erik Jacobs) [Red Hat]#4940: FWIW I'm using bloom-1b1
thoraxe (Erik Jacobs) [Red Hat]#4940: 2.13GB x4 = 8GB but this is definitely using ~50GB
thoraxe (Erik Jacobs) [Red Hat]#4940: so here's a sample notebook:
https://github.com/OpenShiftDemos/nl-summary-generator/blob/main/notebooks/prep-train.ipynb
This is heavily aped from some other stuff. After training on two XML:article pairs the "summary" article it generates is mostly literal garbage
thoraxe (Erik Jacobs) [Red Hat]#4940: ```
The weather forecast for the next 24 hours for the city of Le Mans is:
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
[...]
```
thoraxe (Erik Jacobs) [Red Hat]#4940: oh, wait, i see what i did wrong.
thoraxe (Erik Jacobs) [Red Hat]#4940: I was generating against the _filename_ and not the _file contents_
thoraxe (Erik Jacobs) [Red Hat]#4940: now my jupyter kernel is dying on the generate and i'm not sure why
thoraxe (Erik Jacobs) [Red Hat]#4940: infra error probably. will investigate
toma#9910: i will take a look this evening, keep me updated
thoraxe (Erik Jacobs) [Red Hat]#4940: was able to get it to work right, but the summaries generated are garbage. I think I need more pairs
thoraxe (Erik Jacobs) [Red Hat]#4940: notebook issue was resolved by running the notebook script as a plain python script. something weird in this environment i'm using
thoraxe (Erik Jacobs) [Red Hat]#4940: this is a work-related project. trying it on a GPU now
thoraxe (Erik Jacobs) [Red Hat]#4940: gpu ended up not working in this infra for some reason. back to CPU for the time being.
The XML may just be too big and repetitive to be able to logically map it to an article. I may need to do more pre-summarization of the XML before an article could be generated
MediaProphet#9559: Got a challenge and/or question.
I've been participating in https://twitter.com/WSISprocess
One of the issues that has come up is seeking to ensure education for all, throughout the world to achieve SDGs.
So, there's a lot of different languages, and mostly English keyboards. So, I wondered about the feasibility of processing educational videos and converting the audio into the local language.
Not sure of the pipeline, but I thought... I'd flag the idea, see if anyone has any interest..
MediaProphet#9559: For that matter, perhaps even subtitles in local language would be a useful initial poc.
headless#4213: hey all, I'm aware of the LLaMA model, the OPT-175B model and the BLOOM-176B model... what's the current SOTA for GPT-3/3.5/4-like large language models that I can run on my home lab (GPU or CPU)?
headless#4213: I've got LLaMA working through llama.cpp
clockwork#2792: What hardware?
headless#4213: I have a 3070 and a Tesla A100 (24GB) and a machine with 256GB RAM
clockwork#2792: Well, I'm honestly not sure how you'd parallelize across the 3070 and A100 since they have different amounts of VRAM
headless#4213: is running on the A100 an option for those larger models? Are those the models I should be looking at?
clockwork#2792: It depends, I think you might be able to run a larger model but with 4-bit quantization
headless#4213: ah nice!
clockwork#2792: You would need to check out which open source models support features to optimize performance on limited hardware
headless#4213: Cool I will experiment, I was curious because I just spent the last few hours playing with LLaMA and it's working well CPU only after I quantized the 65B param model, then I saw those larger param ones and I was curious about what they were, and how they compare, I'm guessing roughly speaking "larger is better" but not sure if there's an enormous difference between 65B quantized LLaMA and a quantized (however much) OPT-175B or BLOOM-176B - or something else
I'm setting out to play around and answer these questions myself, but I thought I'd query the hivemind first 😄
clockwork#2792: Makes sense, I've even heard of binary quantization but who knows if that preserves performance well for 175B params
headless#4213: Oh wow, that is interesting. My boss just sent me this interesting paper on pruning large models too : https://arxiv.org/pdf/2301.00774.pdf I have reached out to the authors to see if they are willing to share code, and I'm going to attempt my own implementation tonight, I will paste the github link here if I get something working.
@clockwork thank you for your help!
awk#2557: what is the minimum gpu ram to load llama-7B in fp16?
headless#4213: @awk I'm not sure about GPU RAM sorry, but if it helps you to know : I am running the llama-7b in fp16 on CPU and ram usage goes up about 13 GB when it's loaded and doing inference
awk#2557: Thanks, I google around getting the same answer
Gothos#3704: Do you know of open-source, good, computationally cheap re-punctuators?
thoraxe (Erik Jacobs) [Red Hat]#4940: @toma i am having issues getting cuda working in python on windows
thoraxe (Erik Jacobs) [Red Hat]#4940: I have a geforce rtx 3080 which supports cuda compatibility 8.6 per: https://en.wikipedia.org/wiki/CUDA#GPUs_supported
thoraxe (Erik Jacobs) [Red Hat]#4940: I installed cuda toolkit 12.1 from here: https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_local
thoraxe (Erik Jacobs) [Red Hat]#4940: I installed the pip wheels and metapackages
thoraxe (Erik Jacobs) [Red Hat]#4940: but torch still reports no cuda device available
thoraxe (Erik Jacobs) [Red Hat]#4940: i'm now trying to use visual studio to build the devicequery program
thoraxe (Erik Jacobs) [Red Hat]#4940: and now visual studio is completely borked/broken and i cannot open any projects or solutions. as soon as I hit open, it freezes.
thoraxe (Erik Jacobs) [Red Hat]#4940: ok, rebooted and got past that
thoraxe (Erik Jacobs) [Red Hat]#4940: devicequery says 1 cuda device is available
thoraxe (Erik Jacobs) [Red Hat]#4940: aha, no torch with cuda
thoraxe (Erik Jacobs) [Red Hat]#4940: ugggg cuda12 doesn't have torch support yet
thoraxe (Erik Jacobs) [Red Hat]#4940: blargh
thoraxe (Erik Jacobs) [Red Hat]#4940: time to make dinner
thoraxe (Erik Jacobs) [Red Hat]#4940: `tensor([0.], device='cuda:0')`
thoraxe (Erik Jacobs) [Red Hat]#4940: (oven is preheating... lol)
toma#9910: 😄
toma#9910: did you get it to work?
toma#9910: general advice is to restart command prompt between installations
awk#2557: actually, is there any program that lets you easily switch cuda versions? like pyenv?
Karin#4781: excuse me, I have an issue in my code; where is the problem? does anybody know?
Karin#4781: https://cdn.discordapp.com/attachments/922424173916196955/1086542811446394880/image.png
Karin#4781: https://cdn.discordapp.com/attachments/922424173916196955/1086542897601593374/image.png
psk90#9720: General question: I am training a language model from scratch, but the loss is not going down. How do I debug the missing piece? Please recommend some tips and tricks
lunarflu#6769: Hey psk90, can you post the output you're getting?
psk90#9720: @lunarflu this is the output of the training console https://cdn.discordapp.com/attachments/922424173916196955/1086651022442774528/Screenshot_from_2023-03-18_19-32-27.png
psk90#9720: sorry this one https://cdn.discordapp.com/attachments/922424173916196955/1086651323031756800/Screenshot_from_2023-03-18_19-34-04.png
lunarflu#6769: Is that the learning rate on the right?
lunarflu#6769: these https://cdn.discordapp.com/attachments/922424173916196955/1086652363386933268/image.png
psk90#9720: yes
lunarflu#6769: I could be wrong, but I recall seeing lr calculated based off current loss, while in your example it seems to decrease linearly at more or less a fixed rate. Does that sound accurate? I could be missing something.
psk90#9720: i have added scheduler. when i do scheduler.step() its linearly decreasing the lr
lunarflu#6769: Maybe try decreasing it faster? My hunch is it may be overshooting due to large learning rate
psk90#9720: yea, I think it's something to do with the learning rate only. When I give a fixed learning rate of 3e-4, the loss goes down to 2.000
psk90#9720: but after that loss is stuck again at 2.0
lunarflu#6769: you may have something like this (assuming thinking in gradient descent terms works) where you are getting better results, but it will just take a very long time since you decrease linearly https://cdn.discordapp.com/attachments/922424173916196955/1086655445940306002/image.png
lunarflu#6769: (as opposed to this) https://cdn.discordapp.com/attachments/922424173916196955/1086655485664571454/image.png
lunarflu#6769: test it out and let me know how it goes, maybe also consider running for many more steps and seeing if it slowly becomes better on average (if you have the time) 🙂
psk90#9720: thanks. how to decide my initial learning rate and the scheduler like how many warmup steps i should put? any video or tutorials on this. i am pretty sure i fixed all the model bugs but kind of new to training language model from scratch
lunarflu#6769: of course, I'd say check this out: https://youtu.be/pulR95FImrA
psk90#9720: thank you so much
lunarflu#6769: https://cdn.discordapp.com/attachments/922424173916196955/1086657929828106280/image.png
lunarflu#6769: :hugging_happy: happy I could help a little!
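For the warmup question above, a common recipe (offered as a sketch, reusing your existing model and dataloader; the peak LR, step counts and 5% warmup fraction are placeholders, not a prescription) is linear warmup followed by cosine decay using transformers' built-in helper:
```python
import torch
from transformers import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)   # peak learning rate

num_training_steps = 100_000                        # total optimizer steps (placeholder)
num_warmup_steps = int(0.05 * num_training_steps)   # ~5% warmup is a common starting point

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps,
)

for step, batch in enumerate(dataloader):
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    scheduler.step()       # once per optimizer step, after optimizer.step()
    optimizer.zero_grad()
```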
stascronberg#6495: I have an English NER model that I want to apply to other languages. I was thinking about translating the other languages to English and then back, but the obvious problem of translations changing order/amount of words creeps in. Is there any way to get a token level mapping which shows which word/group-of-words in language A got mapped to language B and vice-versa?
This way I could see what the NER entities in English get mapped to in the original language.
I saw that using attention one could get close to this, but are there any existing tools for this? And is it possible to do it with a high enough accuracy?
Thanks in advance for any advice! 🤗
lunarflu#6769: @psk90 let me know what progress you make! I'm interested to hear what solutions you find.
psk90#9720: `def lr_warmup_decay(current_step, warmup_steps, total_steps, peak_lr, end_lr):