sub (string) | title (string) | selftext (string) | upvote_ratio (float64) | id (string) | created_utc (float64) |
---|---|---|---|---|---|
LanguageTechnology | Deep Learning on Electronic Medical Records is doomed to fail | nan | 0.82 | t3_tke72a | 1,647,986,024 |
LanguageTechnology | Researchers From China Propose A New Pre-trained Language Model Called ‘PERT’ For Natural Language Understanding (NLU) | PLMs (pre-trained language models) have excelled at a variety of natural language processing (NLP) tasks. Auto-encoding and auto-regressive PLMs are the most common classifications based on their training processes. The Bidirectional Encoder Representations from Transformers (BERT), which models the input text through deep transformer layers and creates deep contextualized representations, is a representative work of auto-encoding PLM. The Generative Pre-training (GPT) model is a good example of auto-regressive PLM.
The masked language model (MLM) is the most common pre-training task for auto-encoding PLMs. The goal of MLM pre-training is to recover, in the vocabulary space, a few input tokens that have been replaced with masking tokens (i.e., [MASK]). MLM has a simple formulation, yet it can represent the contextual information around the masked token, which is akin to word2vec’s continuous bag-of-words (CBOW).
Continue Reading The Full Research Summary [**Here**](https://www.marktechpost.com/2022/03/22/researchers-from-china-propose-a-new-pre-trained-language-model-called-pert-for-natural-language-understanding-nlu/)
or
you can also read the paper [**here**](https://arxiv.org/pdf/2203.06906v1.pdf) | 0.77 | t3_tke5vh | 1,647,985,933 |
LanguageTechnology | BERT Information Extraction on unseen documents | Hi everyone, I am currently working on a project in order to extract data out of legal papers.
My approach (so far) was training a Q&A BERT, as the set of questions I have for each document is fixed. One thing I am confused about at the moment is: by training my model with these questions, I cannot get answers on entirely new, unseen documents, am I right?
I wanted to teach the model the nature of the questions and "what to look for", and then use this knowledge on new, unseen documents to find similar answers in the new document.
Any clarification is highly welcomed, also feel free to roast my approach, I am happy for any feedback :)) | 1 | t3_tjzogj | 1,647,943,979 |
LanguageTechnology | Weighting embedding similarity by frequency/saliency | It seems like the new standard in search is sentence or document similarity, using things like BERT sentence embeddings. However, these don't really have a way to consider the salience of sentences, which can make it hard to compare different searches.
For example, when using [concept embeddings](https://github.com/cambridgeltl/sapbert), I'd like to be able to score "Exam" <-> "Exam" as less important than "Diabetes" <-> "High blood sugar". But obviously, the former has a similarity of 1.
I've tried weighting with the inverse document frequency of terms. But with IDF scores being unbounded, it's really hard to figure out how to weight similarity and frequency scores against each other. It's also just not sophisticated enough to handle synonymy and such.
Has anybody come up with any solutions to this? | 1 | t3_tjp1d8 | 1,647,904,965 |
LanguageTechnology | Do I always need to finetune hugging face models? | I am using models as-is to try classification. Unfortunately, the result for the phrase "i do not like you" is [0.49945366 0.50054634], meaning it thinks it's 49.99% negative and 50.01% positive.
All tutorials I saw do pretraining first on IMDb. Do I need to do that? I thought it's already pretrained and I only need to fine-tune it.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

text = "i do not like you"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
# softmax over the two logits to turn them into class probabilities
res = tf.nn.softmax(output.logits, axis=1).numpy()
print(res)
Edit: Looks like that model is expected to be fine-tuned. I used the "[distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)" model, which is fine-tuned for sentiment analysis, and it produces good results. | 1 | t3_tjjtr7 | 1,647,890,895 |
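A minimal sketch of the approach from the edit above, using the Hugging Face pipeline API with the already fine-tuned SST-2 checkpoint (the example output is indicative, not exact):

from transformers import pipeline

# This checkpoint is already fine-tuned for sentiment analysis,
# so no further training is needed before inference.
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

print(classifier("i do not like you"))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]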
LanguageTechnology | OpenAI Releases New Version of GPT-3 and Codex That Can Edit or Insert Content Into Existing Text | Text processing is a common task in many machine learning applications. These tasks deal with large amounts of text to conduct classification or translation, which necessitates a lot of back-end labor. It’s challenging to turn text into something that an algorithm can understand.
The OpenAI team has launched new versions of GPT-3 and Codex that can update or insert stuff into it. Instead of only completing the existing text, [OpenAI API](https://openai.com/api/) can now be used to alter existing content. This includes rewriting a paragraph of text or reforming code, thanks to these additional capabilities. The new work has opened up new possibilities while improving existing ones; for example, insertion is currently being tested in GitHub Copilot, promising early results.
[**Continue Reading Our Summary**](https://www.marktechpost.com/2022/03/21/openai-releases-new-version-of-gpt-3-and-codex-that-can-edit-or-insert-content-into-existing-text%ef%bf%bc/) | 1 | t3_tjgdqu | 1,647,881,790 |
LanguageTechnology | Need help in extracting relationships and values | I have a dataset consisting of words like
- The salary of this person is $10K.
- The revenue is $100K.
- The loss is 50K.

These are some of the easier examples. I want to extract the word and the numeric value it's referring to, like

- salary: $10K
- revenue: $100K and so on...

Is anyone aware of such techniques? I need to do this for analysis of these numbers. | 0.81 | t3_tjd7oh | 1,647,873,303 |
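For key-value patterns as regular as the examples above, a rule-based sketch in Python can be a reasonable baseline before reaching for dependency parsing or relation extraction models (the key list and regex below are illustrative assumptions, not a general solution):

import re

sentences = [
    "The salary of this person is $10K.",
    "The revenue is $100K.",
    "The loss is 50K.",
]

# Capture a candidate key noun ("salary", "revenue", ...) followed,
# possibly after a few words, by a monetary value like $10K or 50K.
pattern = re.compile(r"\b(salary|revenue|loss)\b.*?(\$?\d+(?:\.\d+)?[KMB]?)", re.IGNORECASE)

for s in sentences:
    m = pattern.search(s)
    if m:
        print(f"{m.group(1).lower()}: {m.group(2)}")
# salary: $10K
# revenue: $100K
# loss: 50K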
LanguageTechnology | Contribution statements in research papers | What is an effective (rule-based) algorithm that can detect contribution statements in a research paper? Without necessarily diving into the NLP/NLU aspect, I was wondering if anyone can help me come up with an efficient way to detect contribution statements in a paper. I think the reason this is proving to be difficult is that I am unable to fully understand how to identify contribution statements, so any explanation in that direction would help too.
In case any specific context is required, please ask and I will go ahead and update this post.
**Reference :**
Semeval 2021 task 11: [https://ncg-task.github.io/](https://ncg-task.github.io/) | 1 | t3_tjbtlg | 1,647,869,260 |
LanguageTechnology | Useful Tools and Programs list for NLP | nan | 0.89 | t3_tiolcm | 1,647,793,143 |
LanguageTechnology | Extracting topic from single document from list of possible topics | I'm trying to extract the 'primary' topic from a single document, where only words in a predefined list of topics are eligible. I.e., out of all of the words in this list, which word is "most relevant" or most likely to be the topic of the document?
I hope this question makes sense, as I'm new to NLP.
Bonus points if it's easily implementable in python.
Any resources or guidance would be greatly appreciated.
Thanks! | 1 | t3_timoa1 | 1,647,787,721 |
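One easily implementable option in Python for the question above is zero-shot classification, where the predefined topic list is passed as candidate labels (the model choice, document, and labels here are illustrative):

from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

document = "The patient was prescribed insulin after the glucose test."
candidate_topics = ["diabetes", "sports", "finance"]  # your predefined topic list

result = classifier(document, candidate_labels=candidate_topics)
print(result["labels"][0])  # highest-scoring topic, e.g. "diabetes"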
LanguageTechnology | Is it possible to train a model to understand English and then fine tune it for a specific purpose? | I want to start this project to learn more about NLP, but I want to know if it is also possible to do so. I want to create a recipe generator in my native language (not English), but the dataset that I have for it is really small (max like 40k recipes). So I was thinking of teaching the model how to understand my native language first and then fine-tuning it, using the dataset that I have, for the final target purpose: predicting a recipe and its steps given some recipe ingredients. Thanks in advance if you're reading this. | 0.93 | t3_tijs5i | 1,647,778,545 |
LanguageTechnology | Code + Data for Paper (AAAI 2022): Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers | You can find the paper here:
[https://arxiv.org/abs/2201.11870](https://arxiv.org/abs/2201.11870)
And the code and the data here:
[https://github.com/p-karisani/CEPC](https://github.com/p-karisani/CEPC) | 1 | t3_thyy8s | 1,647,707,516 |
LanguageTechnology | Paraphraser tool | Hi everyone, I need your suggestions on a tool project to paraphrase articles.
Here is what I want to do.
- Input: text/URL (500-10,000 words per article)
- Output: paraphrased text/article

I googled it and found some proposed solutions which seem OK for my purpose, for example using Parrot Paraphraser:
[https://github.com/PrithivirajDamodaran/Parrot\_Paraphraser](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser)
[https://huggingface.co/prithivida](https://huggingface.co/prithivida)
[https://www.youtube.com/watch?v=C6gBcL9sAIw](https://www.youtube.com/watch?v=C6gBcL9sAIw)
Or using an API:
[https://healthytechguy.medium.com/i-built-a-paraphrasing-tool-that-can-rewrite-text-and-made-it-opensource-3833e1b93c07](https://healthytechguy.medium.com/i-built-a-paraphrasing-tool-that-can-rewrite-text-and-made-it-opensource-3833e1b93c07)
[https://rapidapi.com/healthytechguy/api/paraphrasing-tool1/](https://rapidapi.com/healthytechguy/api/paraphrasing-tool1/)
- I do not want an expensive tool coded from scratch
- I do not want to use paid tools
- I want paraphrased text to be readable (no loss of meaning)
- Without grammatical errors
- With proper punctuation

I guess I need to find existing trained models for every niche my articles are in, or train my own models from scratch? I am not sure...

But I am not technical enough to judge whether those proposed solutions are good for my purpose or not.

Expert suggestions would be highly appreciated.

Thanks in advance. | 0.81 | t3_thyb5o | 1,647,705,766 |
LanguageTechnology | Are these scores ok to get published in ACL workshop (RepL4NLP) | I got scores of 2.5/2.5/2.0 on my short paper. The reviewers were very impressed with my results, but they are unsure where the improvement comes from. Even though the metareview says that the paper needs a lot of work to be publishable, the reviewers' comments can be straightforwardly addressed in the comments for commitment to the workshop. **With that score, how likely is my paper to be accepted to the RepL4NLP workshop?**
**Also, is a paper related to quantization and/or pruning a good fit for that workshop (link below)?** The third point in the "Topics" section kinda convinces me it is.
[https://sites.google.com/view/repl4nlp2022/](https://sites.google.com/view/repl4nlp2022/) | 1 | t3_thpca3 | 1,647,671,636 |
LanguageTechnology | need some help in our college 🤞 project, it's 80% completed |
This is an NLP- and Python-based project. We are trying to achieve something new; the project is almost there, but only the file-connecting part is left over. 😌
Please DM me for more information/collaboration. 😊
We are open to welcoming you to this project. **🤗**
**Not paid work. 🙄** | 0.14 | t3_thco9t | 1,647,630,679 |
LanguageTechnology | Need Help Choosing a Language Model for a Script Generating AI | Hello all, I'm a Computer Science senior who's just finished a Natural Language Processing class. Basically, I want to build a very specific kind of text generator to create Star Trek (and potentially other) scripts based on scripts from the different shows. What I'm looking for is:
Grammar: Perfect, or at least as perfect as possible
Structure: There is a clear story/plot that can be followed
Story: It makes absolutely no sense and is hilarious.
Basically I'm looking for something like this: [https://geektyrant.com/news/an-ai-bot-writes-a-hilarious-episode-of-star-trek-the-next-generation](https://geektyrant.com/news/an-ai-bot-writes-a-hilarious-episode-of-star-trek-the-next-generation) or this: [https://twitter.com/keatonpatti/status/1014220692936589312?lang=en](https://twitter.com/keatonpatti/status/1014220692936589312?lang=en) or a personal favorite, Harry Potter and the Portrait of What Looked Like a Large Pile of Ash ( [https://botnik.org/content/harry-potter.html](https://botnik.org/content/harry-potter.html) )
I already have the scripts for several series in the form of:
Title: [TITLE]
Stardate: [DATE]
Original Airdate: [DATE]
[some newlines, then the entire script in a standard format, then a few more newlines]
(Roll Credits)
For my NLP class I've already built an AI which generates DnD spells using GPT2, and I have the skills to format/tokenize the scripts however I need to in order to train a model. So basically right now I'm in the market for a good model I can use. Ideally something that's not too hard to get working (or is at least very well documented, so it can be got working without much headache), can run locally on Linux or Windows 10 (I've been having problems with Google Colab's hardware limitations/timeouts and I don't want to pay for Pro), and gets the kind of plots I want. GPT2 seems like it might be good from examples I've seen, but our DnD spells don't really have that comedy level I'm looking for or see in the examples. But maybe that's just because magic is inherently a bit nonsensical and weird. I also don't particularly care about the time it takes to train a model as I can just leave it running overnight.
I also wouldn't mind any tips in general for this project. For example, with our DnD one we couldn't find a way to get it to generate one spell, so instead we tell it to generate X characters and just hope there's a spell in there that we can trim down. So a model that could generate until "(Roll Credits)", or that also learns the length of a document, would be great. I have all the scripts as separate .txt files right now, but the only method I know of for training a model involves combining them into one document. But with my DnD model I would have problems where sometimes the tags I used to start/end spell components got generated when they shouldn't have and ruined the text. I'm worried this might also happen if I combine all the scripts into one document. So a perfect model would maybe read all the files in a specific folder and take things like their length and how they start/end into account?
Finally, if GPT2 really is a great model for what I want, some suggestions on good training parameters would be appreciated. Actually, for any model this would probably be helpful. Although I took an NLP class, it wasn't exactly the best class I've ever had, and while I understand a decent amount of how it all works, messing with stuff like that would pretty much be stabbing in the dark for me. | 0.9 | t3_tguqyi | 1,647,583,875 |
LanguageTechnology | Question about unsupervised fasttext learning | Hi,
I'm not sure if this is the right place but I thought I'd ask. I'm trying to implement fasttext (or something similar) myself to try and go beyond some simple DL models. I think there are really two tasks, the supervised one to classify and the unsupervised to get embeddings like with word2vec.
I think the classification one is pretty simple. You get the embeddings of all the words/subwords in a sentence, average them and use that to compare to the class.(like in the old Keras example here: [https://github.com/keras-team/keras/blob/keras-1/examples/imdb\_lstm.py](https://github.com/keras-team/keras/blob/keras-1/examples/imdb_lstm.py))
I am not entirely sure about the unsupervised learning though. Is it essentially like skipgram? I've been trying to build my own to make sure I get it... is the only difference now that the target vector is actually the sum of the vector for the word and subwords?
Using the Keras code below as a basis (from [https://www.kaggle.com/kesarianubhav/skipgram-word2vec](https://www.kaggle.com/kesarianubhav/skipgram-word2vec)) is the difference that my target embeddings (w) are now multiple ones that I would sum after the "Dot" layer (essentially the loss function)? It seems almost too simple!
from tensorflow.keras.layers import Input, Embedding, Dot, Reshape, Activation
from tensorflow.keras.models import Model

V = 10000  # vocabulary size (placeholder; defined by your corpus)
dim_embedddings = 128

# inputs: target word id
w_inputs = Input(shape=(1, ), dtype='int32')
w = Embedding(V, dim_embedddings)(w_inputs)

# context word id
c_inputs = Input(shape=(1, ), dtype='int32')
c = Embedding(V, dim_embedddings)(c_inputs)

# dot product of target and context embeddings, squashed to a probability
o = Dot(axes=2)([w, c])
o = Reshape((1,), input_shape=(1, 1))(o)
o = Activation('sigmoid')(o)

SkipGram = Model(inputs=[w_inputs, c_inputs], outputs=o)
SkipGram.summary()
SkipGram.compile(loss='binary_crossentropy', optimizer='adam')
Also, as an aside - if I was limited in memory could I use a common embedding layer?
Thanks! | 0.81 | t3_tgoog3 | 1,647,563,412 |
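To the summing question above: in fastText the target word's representation is indeed the sum of the word vector and its subword (character n-gram) vectors, and the sum happens before the dot product with the context vector, not after. A minimal sketch of that change to the Keras code, assuming each target word is fed as a fixed-length padded list of word + subword ids (all sizes are placeholders; a full implementation would also mask or zero the padding row):

from tensorflow.keras.layers import Input, Embedding, Dot, Reshape, Activation, Lambda
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K

V = 10000        # combined word + subword vocabulary size (placeholder)
MAX_NGRAMS = 6   # word id + its n-gram ids, padded with 0 (placeholder)
dim_embedddings = 128

# target: the word id plus its subword ids
w_inputs = Input(shape=(MAX_NGRAMS,), dtype='int32')
w = Embedding(V, dim_embedddings)(w_inputs)
# fastText: sum the word vector and its subword vectors into one target vector
w = Lambda(lambda x: K.sum(x, axis=1, keepdims=True))(w)

# context stays a single word id, as in plain skipgram
c_inputs = Input(shape=(1,), dtype='int32')
c = Embedding(V, dim_embedddings)(c_inputs)

o = Dot(axes=2)([w, c])
o = Reshape((1,))(o)
o = Activation('sigmoid')(o)

FastTextSG = Model(inputs=[w_inputs, c_inputs], outputs=o)
FastTextSG.compile(loss='binary_crossentropy', optimizer='adam')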
LanguageTechnology | Domain-specific pre-training of GPT? Help! |
I am looking to adapt GPT-2 to generate dialogue utterances in the style of a certain demography/population (let's call them Ogres). However, there are no large datasets that are both 1) dialogue datasets, and 2) are generated by this target demography.
In the absence of such data, I have been considering a few approaches for data augmentation purposes. Many of those approaches would benefit from a GPT-Ogre, which is at least capable of generating text similar to Ogres, if not necessarily dialogic.
Approach 1
**==========**
For this, I am considering performing additional pre-training of, say, GPT-2 on some medium-sized corpora generated by Ogres. This sounds like something that should have been done by a lot of people for a lot of different things by now, but except for some papers that have tried to do this with BERT in the medical domain, I was not able to find any papers/GitHub repos that have done this with additional unsupervised pre-training of GPT.
It would be helpful if someone could point me to some resources around this as I feel the space of hyperparameters to figure out the best learning rate, etc. is too large, and if somebody has already done this, it would be easy to replicate it.
Approach 2
**==========**
There are some dialogue-specific GPT models such as DialoGPT that have been fine-tuned (in a supervised way; mind you, not pretrained in an unsupervised way). However, it is not in the Ogre style. I am wondering if it's a ridiculous idea to perform additional pre-training of a fine-tuned GPT-2 model? | 0.81 | t3_tghp2v | 1,647,544,080 |
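For reference, a rough sketch of what Approach 1 (continued, domain-adaptive pre-training of GPT-2) could look like with the Hugging Face Trainer; the file path and hyperparameters are placeholder assumptions, not tuned values, and TextDataset is an older but simple utility:

from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments, TextDataset)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# ogre_corpus.txt is a placeholder for the raw domain text
dataset = TextDataset(tokenizer=tokenizer, file_path="ogre_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM, no masking

args = TrainingArguments(output_dir="gpt2-ogre", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=5e-5)
trainer = Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset)
trainer.train()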
LanguageTechnology | Classification and Zero Shot Learning in dataframe | Hello. I already know how to use zero-shot learning. I would like to know how to apply sentiment classification to a dataframe in pandas. | 0.67 | t3_tgbmri | 1,647,527,685 |
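A minimal sketch of one way to do this, assuming a pandas DataFrame with a text column and the standard Hugging Face zero-shot pipeline (column and label names are illustrative):

import pandas as pd
from transformers import pipeline

df = pd.DataFrame({"text": ["I love this product", "This was a waste of money"]})
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
labels = ["positive", "negative"]

# apply the classifier row by row and keep the top-scoring label
df["sentiment"] = df["text"].apply(lambda t: classifier(t, candidate_labels=labels)["labels"][0])
print(df)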
LanguageTechnology | Multilingual NLP: how to perform NLP in non-English languages | Hello,
NLP has made great progress these last years but the main focus is on the English language (understandably). I think that many people are trying to do NLP in non-English languages but are disappointed by the results. It is especially hard with text generation models like GPT-3, GPT-J, GPT-NeoX...
In this article, I'm trying to quickly summarize what the options are today for people trying to perform multilingual NLP:
[https://nlpcloud.io/multilingual-nlp-how-to-perform-nlp-in-non-english-languages.html](https://nlpcloud.io/multilingual-nlp-how-to-perform-nlp-in-non-english-languages.html?utm_source=reddit&utm_campaign=le5u8885-ed8e-11eb-ba80-5242ac13d5jv)
If you can think of additional solutions not mentioned in this article please let me know! | 0.97 | t3_tg82du | 1,647,516,631 |
LanguageTechnology | Topic Modeling: Use SVD & NMF in Python to Find Topics in Text | nan | 0.92 | t3_tg489o | 1,647,500,534 |
LanguageTechnology | Explanation video of how to do Open-QA using ORQA formulation | Hi, in this video, I explain ORQA which uses a retriever to find the right context from the entire Wikipedia and then uses an extractive QA model to give a final answer. We discuss the task setup, architecture, and loss function.
The video is part of 8 video series on Open-QA, how it is different from normal QA, the difference in loss formulations, and key papers on different Open-QA architectures.
I will really appreciate any feedback. Thanks.
[https://www.youtube.com/watch?v=9bL2VbwZ9G8](https://www.youtube.com/watch?v=9bL2VbwZ9G8) | 0.88 | t3_tfh9zu | 1,647,436,068 |
LanguageTechnology | Eigendecomposition appears repeatedly in machine learning, sometimes as the key step of the learning algorithm itself. This video intuitively explains the maths behind one of the most important topics in linear algebra - Eigendecomposition. #MathsforMachineLearning | nan | 0.69 | t3_tf61s1 | 1,647,394,457 |
LanguageTechnology | Relation clustering between entities | Given two SVO triplets (U.S., is a, Country) and (U.S., is name of, Country), the relations between the two entities U.S. and Country in both triplets represent an 'is-a' relation (I believe it's fair to say that). What methods exist to cluster such relations, e.g. clustering 'is-a' and 'is name of' into the same cluster? Any papers that point in this direction, any resources, would be super helpful. Thank you in advance. | 1 | t3_tf56ow | 1,647,391,827 |
LanguageTechnology | How to find different approaches for a NLP task within the industry? | Hi
I got my first job in NLP and am tasked with building an in-house search engine that can find various products and product details in a collection of documents.
I would like to have an overview of how others have tackled this and what the current state of the art is within the industry (not necessarily in academia, since approaches used there often cannot be deployed in industry due to limitations on inference, memory, etc.).
What is a good way to do competitor analysis/ find out how others have approached a task in NLP?
For example in Business Analytics different consultancy houses like McKinsey offer a good insight. Is there something similar for NLP? | 1 | t3_tev94m | 1,647,367,250 |
LanguageTechnology | Request for an alternative to 'Annotations' app for OSX | Hi,
I'm not quite sure if this is the appropriate subreddit for this request, but I am searching for an application compatible with OSX that has the same functionality of [https://www.annotationsapp.com](https://www.annotationsapp.com), which stopped receiving updates in 2018. It's an app that allows you to highlight and annotate text documents in a simple user interface not unlike the Notes app.
I do close readings of short fiction and perform analyses of the stylistic and narratological aspects of the texts that require me to make 100s of annotations. I frequently tag the same lines of text with different labels or annotations.
If anyone could direct me to an alternative application or another subreddit community that can offer advice, I would be eternally grateful. | 1 | t3_tes2py | 1,647,359,459 |
LanguageTechnology | What linguistic theories are most widely underpinning semantic analysis techniques today? | I’d like to try making a custom algorithm (over a long period of time) for semantic analysis. Mostly for fun and learning but if it ends up a viable option for my work, then hey, bonus.
What theoretical frameworks are most common in NLP today? I’m tempted to look to Structuralism given that computers work fundamentally on binary logic, but I remember reading somewhere that Chomskian Linguistics was the go to for most computer based semantic processing research for most of the latter half of the 20th century.
Anyone know which linguistic theories hold the most water or have the most practical use in NLP today? | 0.95 | t3_tersx7 | 1,647,358,721 |
LanguageTechnology | MSc. Program in Natural Language Processing Université de Lorraine, Nancy (France) | Did somebody start the 2nd year of the MSc. Program in Natural Language Processing Université de Lorraine, Nancy (France) without attending the 1st year? Could you tell me what courses you had to take in 1 semester of the 2nd year? Do you have to catch up a lot? | 1 | t3_te2cbm | 1,647,277,588 |
LanguageTechnology | Entity Extraction with Large GPT Models | nan | 0.75 | t3_tdzgwy | 1,647,270,072 |
LanguageTechnology | HealthPrompt: A Zero-shot Learning Paradigm for Clinical Natural Language Processing (Paper Summary) | HealthPrompt is a novel zero-shot learning (ZSL) clinical NLP framework that applies the paradigm of prompt-based learning to clinical texts.
Zero-Shot Learning(ZSL) refers to the use of deep learning models to classify instances from new classes without having seen any training data for those classes.
The authors show that prompts effectively capture the context of clinical texts and perform remarkably well without any training data.
Paper summary at https://youtu.be/1SJm6Zr5yAU
Paper link at https://arxiv.org/pdf/2203.05061v1.pdf | 1 | t3_tdxsxy | 1,647,265,398 |
LanguageTechnology | Master's degree in Computational Linguistics? | Hi everyone. I know close to nothing about Computational Linguistics, but a professor gave a presentation about this Master's degree and I was kinda interested. It looks to me like a very specific degree with niche job opportunities. Is it worth the effort? And could you recommend me sources to know this topic better? Thanks | 0.88 | t3_td8xqf | 1,647,184,355 |
LanguageTechnology | Confused about what my goal is in terms of recall & precision | So I'm studying a linguistic pattern and want to automatically compare its frequency in different parts of a corpus. The desired insight should be along the lines of "it appears significantly more often in this part than in the other". It can be detected in a classic binary fashion: a list of candidates can be extracted and each candidate either is or isn't relevant (true/false positive/negative).
I wonder how well a model must perform for the comparison to be valid.
A randomly guessing model naturally gets a recall of around 0.5 and a precision that's around equal to the proportion of Positives in the dataset – is that correct so far?
My question then is: What should I aim for so that my comparison holds water? What is more important for its validity, precision or recall?
My intuition says that if the model performs only a little better than randomly guessing, I should analyze lots of data from each part of the corpus. In this case, the corpus might not be large enough. And if the model performs reasonably well (not sure what values that would be), less data can be used. A perfect model, therefore, needs little data to get reliable results.
Following my intuition, even a bad model would be enough to get an insight about the distribution of that linguistic pattern, provided that the corpus is huge. I should still analyze some samples to see if there is a bias but basically: If the model is better than random, I just need enough data to compensate for the model's bad performance. I'm still not sure what to prioritize though, precision or recall.
I feel like maybe this is a complex topic with lots of literature about it, anyone have a good resource for me to read? | 0.86 | t3_td72sa | 1,647,178,627 |
LanguageTechnology | [D] Will Attention Based Architecture / Transformers Take Over Artificial Intelligence? | A [well popularized article](https://www.quantamagazine.org/will-transformers-take-over-artificial-intelligence-20220310) in Quanta magazine asks the question « Will Transformers Take Over Artificial Intelligence? ». Having revolutionized NLP, attention is now conquering computer vision and reinforcement learning. I find it pretty unfortunate that the attention mechanism was totally eclipsed by "Transformers", which is just a funny name (animated movie/toy) for the self-attention architecture, even though Google's paper title on Transformers was «[Attention is all you need](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf)». | 0.63 | t3_td1ybd | 1,647,157,783 |
LanguageTechnology | Best Sentence Rephrasing Libraries Python NLP / NLU | Hi team,
Which tools are seen as best in industry for rephrasing sentences whilst retaining the intent of the sentence?

I'm looking for the best libraries to rephrase paragraphs of text in Python.

So far I have identified 12 tools and tested two in Natural Language Understanding (NLU) over [here](https://huggingface.co/models?pipeline_tag=text2text-generation&search=paraphrase). From the list I have compared Pegasus and Parrot for text rephrasing. Pegasus had a better vocabulary; however, Parrot was more aligned to the **intent** of each sentence and was selected for the use case.

Similar to CNNs for object recognition, Naive Bayes for categorical text classification, etc.

Is there a best-known solution for sentence rephrasing in industry as of 2022?

* Outside of a pure NLTK library, naturally. | 0.92 | t3_tcg9cl | 1,647,091,227 |
LanguageTechnology | Machine learning generated Regex | Has anybody tried generating extremely complicated regexes or other rule systems for some linguistic parsing operation like sentence boundary detection or syntax parsing?
Like creating something explicit but leveraging AI to make it far more sophisticated than a human would make, while taking out the black box of a neural network handling the entire task.
Thanks very much | 0.85 | t3_tcfnvj | 1,647,089,093 |
LanguageTechnology | Resources for social listening (preferably but not limited to Spanish) | Hi, guys! I'm getting into social listening for personal projects and for a new job, and I wanted to get input from anyone kind enough to provide it.
I have experience with data preprocessing, topic modeling with LDA and NMF, data visualization, different types of vectorizers... Basically the fundamentals of NLP for this area. But I'm trying to dive deeper, so I would greatly appreciate if you could share any resources you may have. Bonus points for resources of how to work in Spanish!
Books, articles, guides, videos, GitHub repositories, anything you can recommend or share will be greatly appreciated.
Thanks a lot and enjoy your weekend! | 1 | t3_tc8z4k | 1,647,061,476 |
LanguageTechnology | Sentence classification problem | Hello!
I'm looking to solve a multi label text classification problem but I don't really know how to formulate it correctly so I can look it up.. Here is my problem :
Say I have the document "`I want to learn NLP. I can do that by reading NLP books or watching tutorials on the internet. That would help me find a job in NLP`."
I want to classify the sentences to 3 labels (for example) *objective*, *method* and *result*. The result would be :
`objective : I want to learn NLP`
`method : I can do that by reading NLP books or watching tutorials on the internet.`
`result : That would help me find a job.`
As you would have noticed, it's not a classical classification problem, since the classification here depends on the document structure (unless I'm wrong?)
Any idea of the keywords to better describe the problem? Or how I might solve it?
Many thanks! | 0.81 | t3_tbxxfx | 1,647,026,633 |
LanguageTechnology | Researchers From the University of Hamburg Propose A Machine Learning Model, Called ‘LipSound2’, That Directly Predicts Speech Representations From Raw Pixels | The purpose of the paper presented in this article is to reconstruct speech only based on sequences of images of talking people. The generation of speech from silent videos can be used for many applications: for instance, silent visual input methods used in public environments for privacy protection or understanding speech in surveillance videos.
The main challenge in speech reconstruction from visual information is that human speech is produced not only through observable mouth and face movements but also through lips, tongue, and internal organs like vocal cords. Furthermore, it is hard to visually distinguish phonemes like ‘v’ and ‘f’ only through mouth and face movements.
This paper leverages the natural co-occurrence of audio and video streams to pre-train a video-to-audio speech reconstruction model through self-supervision.
[**Continue Reading my Summary on this Paper**](https://www.marktechpost.com/2022/03/11/researchers-from-the-university-of-hamburg-propose-a-machine-learning-model-called-lipsound2-that-directly-predicts-speech-representations-from-raw-pixels/)
Paper: [https://arxiv.org/pdf/2112.04748.pdf](https://arxiv.org/pdf/2112.04748.pdf) | 0.88 | t3_tbx1j8 | 1,647,024,153 |
LanguageTechnology | [Resource] How to Install Kaldi (Automatically) | Hey guys,
I was working with **Kaldi** about a month ago for the first time, and I found the installation process really tricky, so I created an [**automated Kaldi installation script**](https://www.assemblyai.com/blog/kaldi-install-for-dummies/) to take care of it in 3 lines of code.
I also lay out steps for a **manual installation** if anyone prefers that, but I thought I'd drop this tutorial in here for anyone struggling with Kaldi! | 1 | t3_tbu3ro | 1,647,016,776 |
LanguageTechnology | Snorkel weak labeling for NER. When a token does not fall under any of the class labels or abstains in NER? | I have a program that labels a sequence of words using ontologies. I have labeling functions for class negative, class positive, and class abstain. Should I keep the word unlabeled and ignore it if it does not fall under any of these class labels, or should I force-label it under either class negative or abstain? I will be grateful for any hints or help. | 0.9 | t3_tbrfn8 | 1,647,009,267 |
LanguageTechnology | One Human Language | Noam Chomsky | nan | 0.38 | t3_tb3p23 | 1,646,932,844 |
LanguageTechnology | Comparing accuracy of two sentence similarity algorithms | Hi, we were to implement a sentence semantic similarity matching algorithm. Basically, given a database of about 300k sentences, we have to find the 10 most similar sentences to the query sentence given by the user.
The approach we tried:
We created embedding vectors from the sentences and built an annoy index to do approximate nearest neighbour search. Every time a user submits a query, we create a sentence embedding of that query sentence (using the same technique we used to create embeddings of all sentences in the database) and find its approximate neighbours.
The problem:
Say we used two methods to create sentence embeddings to see which one gives better results. Let's say method A gives good similarity matching but method B is slightly worse than method A (verified by human inspection). How do I compare which one is better mathematically, i.e., which result has better accuracy? | 0.93 | t3_tb2brd | 1,646,929,266 |
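One standard way to make the comparison above mathematical is to label a small evaluation set (queries paired with their known relevant sentences) and score each method with a retrieval metric such as recall@10. A minimal sketch, where `search_a`, `search_b`, and `gold` are placeholders for the two annoy-backed search functions and the labeled data:

def recall_at_k(search_fn, gold, k=10):
    """Fraction of relevant sentences retrieved in the top-k, averaged over queries."""
    scores = []
    for query, relevant_ids in gold.items():
        retrieved = set(search_fn(query, k))
        scores.append(len(retrieved & set(relevant_ids)) / len(relevant_ids))
    return sum(scores) / len(scores)

# Higher is better; compare both methods on the same labeled queries, e.g.:
# recall_at_k(search_a, gold) vs. recall_at_k(search_b, gold)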
LanguageTechnology | txtai 4.3 released - machine learning pipelines as SQL | nan | 1 | t3_tan05g | 1,646,875,449 |
LanguageTechnology | Open-source massively multilingual speech recognizer | nan | 0.76 | t3_tahdri | 1,646,859,365 |
LanguageTechnology | 15 Datasets for Word Segmentation on the Hugging Face Hub | nan | 0.93 | t3_tafr58 | 1,646,854,849 |
LanguageTechnology | Brat annotation: How to add URL links to each annotation | I have added some annotations to my file using the brat annotation tool. The annotations display some text. I need to add URL links to each annotation's text. How can I add URL links to annotations in the brat annotation tool? Is there any configuration to do that?
[https://i.stack.imgur.com/pII3d.png](https://i.stack.imgur.com/pII3d.png) | 1 | t3_ta3rqu | 1,646,816,511 |
LanguageTechnology | Story Ending selection between two choices - How to create a model for it | Hello everyone, I am new to NLP and am working on a problem where the model needs to return the correct ending of the two (either by giving it a 0 or a 1) based on the previous 4 sentences. I am using a custom version of the ROCStories dataset and the Story Cloze Test, but the general structure is the same.
In the training set, there are 5 sentences (the last sentence is the ending sentence which fits the previous sentences). Each row/story has a storyID.
In my testing set, there are 4 story sentences and two potential endings (ending 0 and ending 1). Each row/story has a storyID.
I am lost as to how to train the model properly and how to get a prediction from it when it is given the first four sentences from the training set.
Here is my Google Colab link - it contains my attempt so far: [https://colab.research.google.com/drive/1biFwBrTfu7ce3hqUmjA2NqzJuUcWo98h](https://colab.research.google.com/drive/1biFwBrTfu7ce3hqUmjA2NqzJuUcWo98h?usp=sharing)
I am not able to upload the data here so please let me know if you would like it.
Any help would be much appreciated! | 0.67 | t3_t9pxh6 | 1,646,770,909 |
LanguageTechnology | How does one objectively quantify the quality of automatic Text Normalization? | I scraped sentences from the internet that I want to use for text-to-speech input. For this purpose, I've done some text normalization, e.g. converting alphanumeric and numeric data into unambiguous words (512 --> five hundred and twelve).
The purpose of this process of text normalization, however, is not this particular dataset, but to improve the method for automatic text normalization. There will always be manual checking after normalization, but the better the automatic text normalization is, the fewer corrections have to be made.
Would it be viable to manually normalize a bunch of sentences (that represent much of the variation in the intended text that will have to be normalized) and compare these 'perfectly' normalized sentences to the sentences that were normalized automatically (an exact match with the manually normalized sentence would be a 100% automatic normalization score)?
If so, what would be viable metrics besides, e.g., minimum edit distance? | 1 | t3_t9p261 | 1,646,768,543 |
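The approach described above (comparing against manually normalized gold sentences) is essentially how text normalization is commonly evaluated. A small sketch of two complementary metrics, sentence-level accuracy and a character-level error score based on edit distance, using only the Python standard library (the sample pairs are invented):

import difflib

def char_error_rate(hyp, ref):
    # 1 - similarity ratio is a cheap stand-in for normalized edit distance
    return 1 - difflib.SequenceMatcher(None, hyp, ref).ratio()

gold = ["five hundred and twelve apples", "the year two thousand"]
auto = ["five hundred twelve apples", "the year two thousand"]

accuracy = sum(h == r for h, r in zip(auto, gold)) / len(gold)
cer = sum(char_error_rate(h, r) for h, r in zip(auto, gold)) / len(gold)
print(f"sentence accuracy: {accuracy:.2f}, mean char error: {cer:.3f}")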
LanguageTechnology | How to properly use a custom Trainer from the HuggingFace library? |
Does anyone know how a custom trainer from the Hugging Face library is meant to be used? I have found some code to create a trainer with a custom loss function, but I get an error saying NameError: name 'self' is not defined. Any ideas how to fix it?
The code is the following:
The code is following:
import torch
from transformers import Trainer

class MultilabelTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = torch.nn.BCEWithLogitsLoss()
        loss = loss_fct(logits.view(-1, self.model.config.num_labels),
                        labels.float().view(-1, self.model.config.num_labels))
        return (loss, outputs) if return_outputs else loss

# `NameError: name 'self' is not defined` typically means compute_loss was not
# indented inside the class, or that train() was called on the class itself.
# Instantiate the trainer (model/args/datasets as for a regular Trainer), then train:
trainer = MultilabelTrainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train() | 1 | t3_t9mpn2 | 1,646,762,351 |
LanguageTechnology | Any good resources to learn BART? | I'm working on a personal project and want to use BART as a generative model. Does anyone know any good resources to learn how to use or fine-tune BART? | 0.81 | t3_t9li4j | 1,646,759,187 |
LanguageTechnology | GenQ: Fine-tune models for semantic search with unstructured text data | Hi all, I wrote about a super interesting technique called [GenQ](https://www.pinecone.io/learn/genq) for training models to do semantic search with *nothing more* than unstructured text data.
At a high level it works by generating synthetic queries for the unstructured text, producing (query, passage) pairs - that we then use to fine-tune a retrieval model.
In short, it allows us to build models for domains that we simply *could not* build for in the past due to a lack of labeled data. There's a lot of potential for a technique like this.
I hope some of you find it useful, let me know if you have questions, thanks! | 0.95 | t3_t9j24m | 1,646,752,808 |
LanguageTechnology | Retraining Stanza on new data | I've been trying to retrain Stanza NER by following the instructions from this page [Stanza - Model Training and Evaluation](https://stanfordnlp.github.io/stanza/training.html#training-with-scripts) and ran the following code after downloading everything from GitHub:
~ /stanza train/stanza-train$ source stanza/scripts/config.sh
~ /stanza-train/stanza-train/stanza/stanza/utils/datasets/ner$ python3 prepare_ner_file.py enp_DE.lft.iob tessmann.json
~ /stanza-train/stanza-train/stanza$ python3 -m stanza.utils.training.run_ner de_tessmann
But I get the following error
ValueError: dataset de_tessmann currently not handled
Has anybody been able to successfully retrain Stanza? If so, how?
Edit: at this link [https://stanfordnlp.github.io/stanza/training.html#training-with-scripts](https://stanfordnlp.github.io/stanza/training.html#training-with-scripts) I don't understand what `${corpus}` is supposed to be. A folder containing .json files? Where is it supposed to be? | 1 | t3_t9da9i | 1,646,732,545 |
LanguageTechnology | Hey Siri|Hey Google - What can/should we expect from personal assistants' next versions? | We haven't seen any major breakthroughs on these over the past years, so what kind of advances are *reasonable* in the next few years? What pieces are missing for a more natural/advanced interaction with personal assistants? Can this tech get stuck for another decade? | 0.92 | t3_t8vfpg | 1,646,676,667 |
LanguageTechnology | *ACL Findings vs ACL Workshop | Hey, I'm doing my PhD in NLP and this is the first time I have attempted an ACL (main conference) submission (I have submitted in EACL though), my paper got rejected and I have worked on a revised version. As this year coincided with the ARR system, allowing revised versions of the same paper to be submitted to different venues, I have the following question:
For a paper with average reviews (3, 2.5, 2.5) and a metareview of 3, acceptance at the main conference is most probably out of the question.
However, there's a chance that it can be accepted to the NAACL2022 findings or to a relevant ACL2022 workshop.
In that case, what would you prefer and why? My personal opinion is that, while the Findings sounds better, you don't even get to attend a poster session. On the other hand, workshops usually attract less polished papers (although I am not sure in the case of ACL), hence having a mixed reputation. | 0.76 | t3_t8v2za | 1,646,675,800 |
LanguageTechnology | Mahalanobis Distance for Sentence Similarity? | Someone recommended using Mahalanobis distance ([https://en.wikipedia.org/wiki/Mahalanobis_distance](https://en.wikipedia.org/wiki/Mahalanobis_distance)) instead of cosine similarity when investigating the similarity of sentence embeddings (e.g., obtained using [https://www.sbert.net](https://www.sbert.net)).
However, I fail to find any source (neither website, blog, nor paper) that recommends doing so. Am I missing something, or is the idea not really sensible for some reason? | 0.76 | t3_t8pfay | 1,646,660,350 |
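For reference, a minimal sketch of computing Mahalanobis distance between sentence embeddings with SciPy; unlike cosine similarity, it requires an inverse covariance matrix estimated from the whole embedding collection, which may be part of why it is rarely recommended for pairwise sentence similarity (the embeddings here are random stand-ins):

import numpy as np
from scipy.spatial.distance import mahalanobis, cosine

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 16))   # stand-in for SBERT embeddings

# inverse covariance of the whole embedding collection
VI = np.linalg.inv(np.cov(embeddings, rowvar=False))

u, v = embeddings[0], embeddings[1]
print("mahalanobis:", mahalanobis(u, v, VI))
print("cosine distance:", cosine(u, v))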
LanguageTechnology | Accuracy didn't improve at all and loss keeps getting higher when training LSTM model. How could I fix it? | I am training an LSTM model for next-word prediction in the Indonesian language. The dataset used is around 100,000 sentences taken from public Wikipedia corpora. Each sentence is tokenized and then turned into sequences of a maximum length of 9 words: 8 words for the input and 1 word for the output.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
total_words = len(tokenizer.word_index) + 1

input_sequences = []
LENGTH_WORD = 8
for line in sentences:
    token_list = tokenizer.texts_to_sequences([line])[0]
    first_index = 0
    for i in range(1, len(token_list)):
        # sliding window of at most 8 input words plus the next word
        n_gram_sequence = token_list[first_index:i+1]
        if i >= LENGTH_WORD:
            first_index = first_index + 1
        input_sequences.append(n_gram_sequence)

# pad sequences
max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))

# create predictors and label: the last token of each sequence is the target word
xs, labels = input_sequences[:, :-1], input_sequences[:, -1]
E.g. -> Input sentence: 'I am eating an apple pie and a fried chicken inside my house'. Sequences:

['I']
['I', 'am']
......
['I', 'am', 'eating', 'an', 'apple', 'pie', 'and', 'fried']
['am', 'eating', 'an', 'apple', 'pie', 'and', 'fried', 'chicken']
....
The predictors and labels are then created from the input sequences. Since it is not possible to one-hot encode the output due to the vocabulary size (around 50,000 words), I decided to use binary encoding.
The model code is as below
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential()
# size of the input dimension = length of the longest sequence minus 1,
# since the last word of each sequence is used for ys
model.add(Embedding(total_words, 20, input_length=max_sequence_len-1))
model.add(LSTM(64))
# Dense layer sized as the total words, because we have as many outputs as the total word count.
# (The original code had Dense(max_length, ...) here, which contradicts this comment
# and is one likely cause of the training failure.)
model.add(Dense(total_words, activation='softmax'))

adam = Adam(learning_rate=0.001)
# With integer word-id labels (ys = labels), the matching loss is
# sparse_categorical_crossentropy; plain categorical_crossentropy expects one-hot targets.
ys = labels
model.compile(loss='sparse_categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
print(model.summary())

#earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto')
# `checkpoint` is a ModelCheckpoint callback defined elsewhere in the original code
history = model.fit(xs, ys, epochs=100, verbose=1,
                    callbacks=[checkpoint])
print(model)
The accuracy keeps getting stuck between 0.038 and 0.041, and the loss increases every epoch. I have tried changing the metric to other accuracies such as categorical_accuracy, the number of LSTM units, and the embedding output size, but the accuracy never improved at all and the loss keeps skyrocketing. Is there any way to fix this? | 1 | t3_t8j8jc | 1,646,635,872 |
LanguageTechnology | Here is a cool research by Dartmouth researchers where they proposed a deep learning model for emotion-based modeling of mental disorders using Reddit conversations | According to the World Health Organization (WHO), [mental diseases impact one out of every four people](https://arxiv.org/pdf/2201.09451.pdf) at some point in their lives. However, due to the social stigma associated with seeking professional care, patients in many parts of the world do not actively seek it. As a result, there is a need to passively use interactions to detect mental problems.
It is feasible to persuade patients to seek diagnosis and treatment for mental problems through passive (i.e., unprompted) detection. That is precisely what Dartmouth University academics have proposed. The objective is to concentrate on a subgroup of mental illnesses marked by different emotional patterns.
The research team’s proposed model is entirely based on emotional states and their transitions using conversations on Reddit. Content-based representations, such as language model embeddings, have long been the focus of research in this area.
[**Continue Reading My Summary On This Research**](https://www.marktechpost.com/2022/03/06/dartmouth-researchers-propose-a-deep-learning-model-for-emotion-based-modeling-of-mental-disorders-using-reddit-conversations/)
Paper: https://arxiv.org/pdf/2201.09451.pdf | 0.96 | t3_t84aqq | 1,646,589,423 |
LanguageTechnology | [P] Small-Text: Active Learning for Text Classification in Python | nan | 0.67 | t3_t82flg | 1,646,584,320 |
LanguageTechnology | zero-shot domain adaptation for machine translation | Hi! My problem is to adapt a translation model to a new domain with no extra bilingual corpus in that domain (though I may have a monolingual corpus).
For instance, I have plenty of English-French bilingual pairs in the news domain, and I also have adequate English or French monolingual sentences in the biology domain. My goal is to build an English-French biology translation system.
I wonder, is there a specific term for this problem? Are there any papers on this subject? Thank you. | 1 | t3_t8053r | 1,646,577,739 |
LanguageTechnology | Representing multiple words with word embeddings | Hi!
I have short texts, but they are of different lengths (1-14 words).
My goal would be to represent them with word embeddings (fastText). I've tried using the mean, so every text is represented by the mean vector of all the word vectors in the text, but there must be a better way. Do you have any ideas/resources about this issue?
Thank you very much.
Edit 1: The problem is, those texts are not always sentences (and they are not in English). They are responses by humans to questions like "What do you see in this picture?", so sometimes it's a longer description, sometimes just a single word. | 0.81 | t3_t7y4tm | 1,646,571,148 |
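One common step up from the plain mean is a frequency-weighted average such as SIF (smooth inverse frequency), where rarer words get higher weight. A small sketch in plain numpy, where the vectors and frequencies are stand-ins for real fastText vectors and corpus word frequencies:

import numpy as np

# stand-ins: real code would look up fastText vectors and corpus frequencies
vectors = {"red": np.array([1.0, 0.0]), "apple": np.array([0.0, 1.0])}
freq = {"red": 0.02, "apple": 0.001}

def sif_embedding(words, a=1e-3):
    # rare words (low freq) receive higher weight than common ones
    weights = [a / (a + freq[w]) for w in words]
    vecs = [vectors[w] * wt for w, wt in zip(words, weights)]
    return np.mean(vecs, axis=0)

print(sif_embedding(["red", "apple"]))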
LanguageTechnology | Detecting English names without context | Hi. Is it possible to detect if a given string contains a person's name without additional context?
For example, if the string were a sentence or phrase like "I gave John Doe my thanks", then there is enough context to use an NER model. But what if the string only contains "John Doe"? In other words, the string is not a sentence or a phrase, but rather just 1-3 nouns in title/uppercase text. Is there an algorithm that can tell if the string is a name without any (or minimal) additional info? The only assumption is that the name is in title or uppercase text. | 1 | t3_t7d8fm | 1,646,498,198 |
LanguageTechnology | Language technology master’s programs | Hi everyone! I'm still looking at LT master's programs and I was wondering if there's any difference between a Master of Arts and a Master of Science when it comes to LT? I've seen that most programs in the US and the one in Edinburgh are MS, while the two I've found in Sweden are MA. Thanks!! | 0.8 | t3_t74yw0 | 1,646,468,442 |
LanguageTechnology | Gadsby - Constrained Text Generation with Transformers | nan | 0.67 | t3_t71nr2 | 1,646,455,302 |
LanguageTechnology | Reg: Self-studying CS224n | Hello everyone! I'm looking for some help. Has anyone gone through Stanford CS224n (Deep Learning for Natural Language Processing). If yes, could you pls recommend if there's a study group or forum that I can consider joining? I really need some peer support + discussions. Thanks in advance! | 1 | t3_t6okot | 1,646,416,046 |
LanguageTechnology | Experimental methodology NLP | Hi!
I have a question regarding the experimental methodology in NLP/ML.
Are there any methodological recommendations regarding this research method?
I'm from social science, where this research method has a strong foundation and specific "guidelines". Is there something similar in NLP, or is it used loosely? Any paper would be appreciated! Thanks! | 1 | t3_t6kwcr | 1,646,406,197 |
LanguageTechnology | Interesting paper on zero shot classifiers | Metadata-Induced Contrastive Learning for Zero-Shot Multi-Label Text Classification | nan | 0.91 | t3_t6j1ku | 1,646,400,736 |
LanguageTechnology | Researchers from Tel Aviv Propose Long-Text NLP Benchmark Called SCROLLS | Although lengthier texts contain a significant quantity of natural language in the wild, NLP benchmarks have always primarily focused on short texts, such as sentences and paragraphs. Short text classification has consistently been a driving force behind standard benchmarks like GLUE, WMT, and SQuAD. A considerable amount of natural language is produced in the context of lengthier discourses, such as books, essays, and meeting transcripts, as is widely known. As a result, model structures are required to address the computing constraints associated with processing such long sequences.
Researchers from Tel-Aviv University, Meta AI, IBM Research, and Allen Institute for AI (AI2) introduce Standardized CompaRison Over Long Language Sequences (SCROLLS) to tackle this issue. SCROLLS is a collection of summarization, question-answering, and natural language inference tasks that span a variety of topics, including literature, science, commerce, and entertainment.
[**Continue Reading My Full Summary On This Paper**](https://www.marktechpost.com/2022/03/03/researchers-from-tel-aviv-propose-long-text-nlp-benchmark-called-scrolls/)
Paper: [https://arxiv.org/pdf/2201.03533.pdf](https://arxiv.org/pdf/2201.03533.pdf) | 0.87 | t3_t6abfi | 1,646,367,326 |
LanguageTechnology | Looking for dictionary of common English words | I'm doing some text parsing work in Python and trying to detect and ignore phrases that use common English words. I've used NLTK and some spell checker modules but the problem is that these recognize too many words, such as names of people and places.
Does anyone know of a lighter weight or "dumber" dictionary that's readily available? I can probably build one using word frequency on some publicly available corpus, but trying to avoid that. | 0.67 | t3_t69ogr | 1,646,365,261 |
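A sketch of the frequency-based fallback the post above mentions, building a "dumber" dictionary from only the most frequent words of the NLTK Brown corpus (the 10,000-word cutoff is an arbitrary assumption):

import nltk
from nltk.corpus import brown

nltk.download("brown")

# keep only the N most frequent alphabetic words as the "common" dictionary
fd = nltk.FreqDist(w.lower() for w in brown.words() if w.isalpha())
common_words = {w for w, _ in fd.most_common(10000)}

print("the" in common_words)        # True
print("gothenburg" in common_words) # likely False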
LanguageTechnology | Detecting random character sequences in tabular strings | Hey everyone,
I started a project with regard to analyzing **open-text answers** from surveys. I mainly deal with single words, half-sentences, and single sentences; however, sometimes people just face-roll over their keyboards (xP) and input an arbitrary character sequence (e.g. jdfaskl, adskfls, etc.). I do know the language, so there is no risk that these character sequences could have a meaning in another language.
Among other problems, I have to detect these sequences and flag them accordingly. While I have some approaches in mind for how to solve this (checking words against dictionaries, detecting recurring character sequences, checking for vowels, etc.), I still struggle to find a good solution. Especially because sometimes people use common abbreviations (like MS for Microsoft), I would really like to first detect random character sequences and, in a second step, check against common abbreviations.
I do have some experience in this field, but this is my first big project and I would really like to explore options on how to solve this problem thoroughly.
Any hint as to how this could be solved is appreciated ;) | 1 | t3_t5zrmt | 1,646,336,337 |
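A minimal sketch of the vowel-ratio and whitelist heuristics mentioned above; the threshold and abbreviation list are guesses, and a real system would combine several signals (e.g. character-bigram likelihoods):

VOWELS = set("aeiou")
KNOWN_ABBREVIATIONS = {"ms", "nlp", "api"}  # illustrative whitelist

def looks_random(token):
    t = token.lower()
    if t in KNOWN_ABBREVIATIONS:
        return False
    vowel_ratio = sum(ch in VOWELS for ch in t) / max(len(t), 1)
    # face-rolled strings tend to have very few vowels
    return len(t) >= 5 and vowel_ratio < 0.2

for t in ["jdfaskl", "adskfls", "Microsoft", "MS"]:
    print(t, looks_random(t))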
LanguageTechnology | Vanilla Transformer | Hello, do you guys have any resources for training a vanilla transformer model for machine translation?
I tried the TensorFlow tutorial (en-pt), but it seems like I can't get it to work when using a custom dataset from text files. I get confused at the ["setup input pipeline" stage](https://www.tensorflow.org/text/tutorials/transformer#setup_input_pipeline).
Can someone explain why it has a tokenizers.pt and tokenizers.en, and why I can't use a simple text vectorization layer and such?
If any of you guys can help, it'll be very much appreciated, much thanks! | 1 | t3_t5sms8 | 1,646,317,264 |
LanguageTechnology | Document-Term Matrix in NLP: Count and TF-IDF Scores Explained | nan | 0.64 | t3_t5rskc | 1,646,314,759 |
LanguageTechnology | What language is this? | I'm a part of an ARG and I need an idea of what this code/cipher/language is:
796f 2c20 706c 6561 7365 2064 6f6e 2774 2064 656c 6574 6520 6d65
PLEASE HELP! | 0.5 | t3_t5eebi | 1,646,267,021 |
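The string above looks like plain hex-encoded ASCII rather than a language; a quick way to check in Python (purely illustrative):

encoded = "796f 2c20 706c 6561 7365 2064 6f6e 2774 2064 656c 6574 6520 6d65"
# strip the spaces and decode the hex byte pairs as ASCII text
print(bytes.fromhex(encoded.replace(" ", "")).decode("ascii"))
# yo, please don't delete me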
LanguageTechnology | Semantic treebank of English Holy Bible? | Hi. Does anyone know of a version of an English-language Bible (preferably Douay-Rheims, but beggars can't be choosers) that has been parsed into a semantic treebank? I've seen syntactic treebanks of ancient languages (Latin, Greek, etc.) using dependency grammars, but I haven't come across an English Bible in a semantic treebank.
Any idea where to look? | 1 | t3_t56chi | 1,646,244,821 |
LanguageTechnology | [Mozilla + Coqui] Speech Technology Hackathon | nan | 0.75 | t3_t4swgi | 1,646,199,428 |
LanguageTechnology | Tutorial: Apply sparsity and quantization to BERT question answering for up to 14x better performance on CPUs | nan | 1 | t3_t4gbxs | 1,646,163,586 |
LanguageTechnology | I am considering a master in Language Technology - advice? | [https://www.gu.se/en/study-gothenburg/master-in-language-technology-one-year-or-two-years-h2mlt](https://www.gu.se/en/study-gothenburg/master-in-language-technology-one-year-or-two-years-h2mlt)
I am looking at this one specifically. I have a bachelor's in languages, but I have been working in software for 5 years. This would require me to pause my career for 1-2 years, so I wanna make sure it's the right move. Thoughts? | 1 | t3_t4el8r | 1,646,159,112 |
LanguageTechnology | Hey Folks, A team of researchers at Meta AI is working on developing language and machine translation capabilities that will cover most of the world’s languages. Their work includes two projects: No Language Left Behind and Universal Speech Translator. | A team of researchers at Meta AI is working on developing language and machine translation capabilities that will cover most of the world’s languages. Their work includes two projects: No Language Left Behind, a new AI model that can be trained to learn languages with fewer examples allowing expert-quality translations in hundreds of languages, from Asturian to Luganda to Urdu. The second project is Universal Speech Translator, which includes unique methods for translating from one language’s speech to another in real-time. This will enable models to support languages that do not have a regular writing system.
**You can read my full summary** [**of this research here**](https://www.marktechpost.com/2022/03/01/meta-ai-introduces-no-language-left-behind-project-an-ai-model-to-support-machine-translation-for-low-resource-languages/) **or even check out** [Meta's Blog](https://ai.facebook.com/blog/teaching-ai-to-translate-100s-of-spoken-and-written-languages-in-real-time) | 0.75 | t3_t4bwt2 | 1,646,152,264 |
LanguageTechnology | [D] Synthetic data for AI among the 10 Breakthrough Technologies 2022 of the MIT Tech Review | [Synthetic datasets](https://www.technologyreview.com/2022/02/23/1044965/ai-synthetic-data-2/) are computer-generated samples with the same statistical characteristics as the samples from the original dataset. Synthetic datasets are becoming common to train AIs in areas where real data is scarce or too sensitive to use, as in the case of medical records or personal financial data. I was involved in [textual data augmentation](https://arxiv.org/pdf/1812.04718.pdf) for my thesis. | 0.82 | t3_t425fq | 1,646,118,903 |
LanguageTechnology | Dropout pre-training vs. fine-tuning | Is it okay to use a higher dropout during fine-tuning than was used when pre-training a transformer? Are there any best practices around this or any related literature? | 1 | t3_t41oo6 | 1,646,117,123
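I'm not aware of a definitive reference, but raising dropout at fine-tuning time is a common regularization trick for small datasets. As a sketch of the mechanics with Hugging Face transformers (BERT's config exposes hidden_dropout_prob and attention_probs_dropout_prob, both 0.1 by default; the 0.2 values here are illustrative, not recommendations):

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# Load the pre-trained weights but override the dropout used during fine-tuning.
config = AutoConfig.from_pretrained(
    "bert-base-uncased",
    hidden_dropout_prob=0.2,           # pre-training used 0.1
    attention_probs_dropout_prob=0.2,  # pre-training used 0.1
    num_labels=2,
)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", config=config
)
```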
LanguageTechnology | Scores in ACL rolling review | I heard that if I want to respond to the reviews then I have to make a resubmission. How fast will they respond to the resubmissions?
So if I submit to ACL Rolling Review 1 month before a targeted conference and I don't get the scores I wanted, I basically miss the opportunity to commit to that target conference?
At most conferences I know of, like NeurIPS and ICLR, you get to respond to the reviews without making a resubmission, and the reviewers have enough time to read the responses and change the scores before the commitment deadline.
What's the difference between a "review" and a "meta-review"?
How bad is 2.5/2.5/2 for NAACL or even COLING? | 1 | t3_t3u22d | 1,646,093,370 |
LanguageTechnology | WMT22 announced to take place in Abu Dhabi | This year, WMT22 will take place in December [in Abu Dhabi](https://machinetranslate.org/wmt22)! | 0.5 | t3_t3obb0 | 1,646,078,208 |
LanguageTechnology | Resources for sentiment analysis | What are the resources you have used to learn sentiment analysis?
If you could suggest any resources on building a sentiment dictionary that would be great.
Thank you for your kind replies. | 1 | t3_t3lffl | 1,646,070,814 |
LanguageTechnology | Hey folks, here is another cool natural language research project from Meta AI: they introduce Project CAIRaoke, an end-to-end neural network-based model that can power much more personal and contextual conversations | The need of the hour is better conversational AI, not just AI assistants that can't do more than what they have been fed. AI assistants are underwhelming whether we interact with them via text or voice. They are easily stumped by a bit of added complexity in the conversation. Imagine conversing with AI assistants the same way we talk to people, naturally and colloquially.
Researchers from Meta AI come to save the day with Project CAIRaoke. The team has created an end-to-end neural model capable of considerably more personal and contextual dialogues than current systems. The model that grew out of this effort is already in use: the product is called Portal, and the goal is to connect it to augmented and virtual reality devices. This integration would benefit the community by allowing richer, multi-modal interactions with AI assistants.
You can [**continue reading my summary here**](https://www.marktechpost.com/2022/02/28/meta-ai-introduces-project-cairaoke-an-end-to-end-neural-network-based-model-that-can-power-much-more-personal-and-contextual-conversations-for-the-future-augmented-and-virtual-reality-devices/) or even check out the [Meta AI blog page](https://ai.facebook.com/blog/project-cairaoke) | 0.88 | t3_t3lcd4 | 1,646,070,591 |
LanguageTechnology | Wish list for a data engineer? | Let's say you were an ML scientist working at Mega Corp FAANG, and you had a team of data engineers working for you.
What would you ask them to do?
What are some of your most common issues that you think the dev tooling and engineering teams could help with? | 1 | t3_t3a752 | 1,646,034,364 |
LanguageTechnology | Google USE versus BERT | Hey folks - first post here! I've been reading a lot about different techniques for building chatbots, and I'm struggling to understand how something like the Google Universal Sentence Encoder relates to BERT. I know USE has a transformer-based architecture option and essentially provides pretrained embeddings, but BERT seems lower level than that. When would I use each? Is USE simpler than BERT? | 1 | t3_t312um | 1,646,005,078
LanguageTechnology | Hey Folks, Here is a really cool research by Deepmind researchers where they probe image-language transformers and propose SVO probes for verb understanding | Fine-tuning is required for a range of functions performed by multimodal image–language transformers. Researchers worldwide are interested in whether these algorithms can detect verbs or merely use the nouns in a phrase. A dataset of image-sentence pairs with 447 verbs that are either visible or widely encountered in the pretraining data was compiled to perform this task.
Researchers from DeepMind propose to evaluate the pretrained models in a zero-shot manner using this dataset. In comparison to other elements of speech, it is found that the pretrained models underperform more in scenarios that demand verb interpretation. An investigation into which verbs are the most difficult for these models is underway.
You can read the [**full summary here**](https://www.marktechpost.com/2022/02/27/deepmind-researchers-probe-image-language-transformers-and-propose-svo-probes-for-verb-understanding/), or if you are only interested in the paper, you can [check it out here](https://arxiv.org/pdf/2106.09141.pdf) | 1 | t3_t2tn1x | 1,645,984,326
LanguageTechnology | Can you help solve a mystery? "Sync and corrections by n17t01" | I apologise if this doesn't fit this subreddit, but I can't think where else to look.
I was using the DeepL translator app on Android today and translated the German word "genauso" to English, which should translate to "just the same". Instead, I got the response "Sync and corrections by n17t01".
Odd. Then I searched for that phrase on DuckDuckGo and Google, and it turns up on web pages all over the Internet in nonsensical ways. There must be an explanation. Any suggestions?
Thanks. | 1 | t3_t2bgqj | 1,645,922,110
LanguageTechnology | Database/Pretrained model for Confident Messages? | Hey y'all! I'm trying to use an ML model to analyze text and give me the confidence level of the text itself. Most places I look for guidance give me confidence intervals for other pretrained models' predictions, but I am looking for a pretrained model (or a dataset) for text that expresses confidence.
An example would be "The answer is X" as confident vs. "I'm not entirely sure, but I think the answer is X" as unconfident.
Would love to see if someone can help point me in the right direction! | 0.88 | t3_t270tc | 1,645,909,379 |
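One stopgap while hunting for labeled data: zero-shot classification over candidate labels, which needs no training set at all. A sketch with the Hugging Face pipeline; the model choice and the label names are assumptions, and hedge-detection corpora (e.g. the CoNLL-2010 shared task data) may be worth a look for real labels:

```python
from transformers import pipeline

# Zero-shot classification via natural language inference; no fine-tuning needed.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

texts = [
    "The answer is X.",
    "I'm not entirely sure, but I think the answer is X.",
]
for text in texts:
    result = classifier(text, candidate_labels=["confident", "hedged"])
    print(text, "->", result["labels"][0], round(result["scores"][0], 2))
```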
LanguageTechnology | Quality metrics for text dataset | Hi guys, I'm a data science student and I'm doing an NLP project. For this, I must measure the quality of my 4 text datasets to understand how the input influences the model output.
Reading various papers and surveys on similar NLP tasks, I found the metrics proposed in this work interesting: [https://btw.informatik.uni-rostock.de/download/workshopband/C2-5.pdf](https://btw.informatik.uni-rostock.de/download/workshopband/C2-5.pdf)
Any suggestions? Thanks all. | 1 | t3_t1yqiu | 1,645,886,920
LanguageTechnology | Is "the" included in the mention? e.g. "the Channel Tunnel", "the tunnel" | When we have
> "the Channel Tunnel"
> "the Tower of London"
is "the" included in the span of the mention? The names of the entities are "Channel Tunnel" and "Tower of London" according to wikipedia, but not prefixing the names with "the" would not be grammatical in any context (I think).
Then what about noun phrases?
> "the dog"
> "the tunnel"
Should the mentions include "the" or not? Is there anywhere I can read about what constitutes a mention according to some well-defined rules?
edit: I am leaning towards "the" being part of the mention for definite noun phrases, as otherwise they may not be distinguishable from, e.g., indefinite noun phrases. For example, if we had "Spot came home for tea. The cat had met another cat while out.", then "cat" and "cat" alone would not tell the interpreter much, but "the cat" and "another cat" clearly refer to different cats.
edit 2: I am leaning towards not including "the" in the case of "the Tower of London", but I would include it when the definite article is part of the entity's name, e.g. "The Hague". I justify this by the fact that this is the Wikipedia article naming convention https://en.wikipedia.org/wiki/Wikipedia:Naming_conventions_(definite_or_indefinite_article_at_beginning_of_name) and users of my software may want to perform wikification.
But I could be wrong, these are just my guesses. | 0.76 | t3_t1c1uu | 1,645,816,764 |
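As one concrete reference point, spaCy's noun chunker keeps determiners inside its spans, which matches the first edit's intuition for definite noun phrases. A small sketch, assuming the en_core_web_sm model is installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Spot came home for tea. The cat had met another cat while out.")

# Noun chunks keep their determiners, so "The cat" and "another cat" stay distinct.
print([chunk.text for chunk in doc.noun_chunks])
# Expected something like: ['Spot', 'tea', 'The cat', 'another cat']
```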
LanguageTechnology | Using Unsupported Huggingface Models in SpaCy | Hey,
I've been tinkering a lot with SpaCy as of late, and I was wondering whether it is possible to use any of the models published on the Huggingface website, apart from those that are already SpaCy compatible. More specifically, I would, for example, like to use the Dutch BERT model, BERTje ([GroNLP/bert-base-dutch-cased · Hugging Face](https://huggingface.co/GroNLP/bert-base-dutch-cased)).
I've searched a lot for info about this, but to no avail. Would appreciate it if anyone could tell me whether this is possible without too much hassle (and without any required retraining of the model) and perhaps point me in the right direction.
Cheers, | 0.96 | t3_t16scr | 1,645,803,295 |
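For later readers: with spaCy v3 plus the spacy-transformers package, a pipeline can be backed by any Hugging Face checkpoint without retraining, by pointing the transformer component at the model name. A sketch along the lines of the spacy-transformers docs; the architecture version string (here TransformerModel.v3) may differ across releases:

```python
import spacy

nlp = spacy.blank("nl")
nlp.add_pipe(
    "transformer",
    config={
        "model": {
            "@architectures": "spacy-transformers.TransformerModel.v3",
            "name": "GroNLP/bert-base-dutch-cased",  # any Hugging Face model name
            "tokenizer_config": {"use_fast": True},
            "get_spans": {
                "@span_getters": "spacy-transformers.strided_spans.v1",
                "window": 128,
                "stride": 96,
            },
        }
    },
)
nlp.initialize()

doc = nlp("Vandaag is een mooie dag.")
# Contextual embeddings for the document, straight from BERTje.
print(doc._.trf_data.tensors[0].shape)
```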
LanguageTechnology | txtai 4.2 released - Build once, run anywhere serverless NLP applications | nan | 1 | t3_t134do | 1,645,792,944 |
LanguageTechnology | Corpus for sentiment in Dutch tweets | Hi,
Does anyone know about an annotated corpus for sentiment in Dutch tweets?
Thanks | 1 | t3_t11ce8 | 1,645,786,811 |
LanguageTechnology | Multi-Class Text Classification | So I'm working on my graduation project, and our main NLP idea is classifying graduation projects into categories: someone writes a description (>1000 words), and the application should classify the project as Robotics, IoT, Machine Learning, Blockchain, etc. I've done my research and found many approaches: some with Naive Bayes, some with CNN + Word2Vec or LSTM + Word2Vec. I just want 2 models to compare so I can pick the one with the best results, but I don't know which 2 models are most suitable in this case. Can someone suggest?
***NOTE: I'll be using a research topics dataset that is labeled.*** | 1 | t3_t0xvk0 | 1,645,773,357 |
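For the classical side of the comparison, a TF-IDF + Multinomial Naive Bayes pipeline is the usual baseline to pit against a CNN/LSTM + Word2Vec model. A scikit-learn sketch; the four toy examples below stand in for the real labeled research-topics dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the real labeled dataset (assumptions).
texts = [
    "An autonomous robot arm that sorts warehouse objects",
    "A smart-home sensor network streaming readings over MQTT",
    "Classifying support tickets with a fine-tuned BERT model",
    "A decentralized ledger for tracking supply chains",
]
labels = ["Robotics", "IoT", "Machine Learning", "Blockchain"]

# Baseline: bag-of-words TF-IDF features + Multinomial Naive Bayes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["A neural network that summarizes long project reports"]))
```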
LanguageTechnology | For the ones who are overwhelmed by the maths for machine learning... | nan | 0.85 | t3_t0sth0 | 1,645,757,270 |
LanguageTechnology | [Research] Looking for volunteers to evaluate multilingual dialogue models (chatbots)! | [cross-post]
Hi! Are you frustrated that most chatbots are only in English? Would you want your native language to be part of a conversational AI system as well? I certainly do! That’s why I’m doing my master’s thesis on multilingual open-domain dialogue systems.
If you also feel the same and want to contribute to research on multilingual, intelligent chatbots, then you’re the perfect fit for my thesis project!
I’m currently looking for people who can speak one of the following languages fluently:
* Arabic
* Bengali
* Finnish
* Japanese
* Korean
The task is to
1. post-edit some data in your language (after I used machine translation to translate it from English into these languages) or
2. evaluate the models’ responses in your language.
There are 30 short dialogue exchanges in the post-editing task. Your task is to check the machine-translated text and correct grammatical and/or fluency issues if needed.
As for the system evaluation, you will chat with the chatbots and rate them on a five-point scale for fluency, engagingness, naturalness, etc.
I'm looking for 1 volunteer in each language for the post-editing task and 2 volunteers in each language for the evaluation task. The tasks are pretty small so they won't take too much of your time. And it'd be fun to chat with the bots (I think) :) The post-editing task preferably starts asap and the evaluation task will kick off by the end of March! If you're interested, please fill out [this form](https://forms.gle/eA5vwGGxbos9s4jY8) :)
I’ll of course credit you in the paper unless you would like to stay anonymous. This is a project I’m really passionate about, and I hope it’ll encourage more research on multilingual dialogue systems in the community!
Please feel free to let me know if you’d like to know more! I really, really appreciate it. You can dm me or just leave a comment here.
To sign up, please kindly fill out [this google form](https://forms.gle/eA5vwGGxbos9s4jY8).
Thank you so much for reading this post patiently :) Hope you have a great day! | 0.89 | t3_t0ndy9 | 1,645,741,928 |
LanguageTechnology | How do you extract structured data from NL? | Specifically, people's current weight, goal weight, height, and weight change from their posts in r/loseit. How do I tokenize and vectorize numbers? What ML models are used for these kinds of tasks? | 0.83 | t3_szmsiq | 1,645,637,042
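Numbers like these are often easiest to pull out with patterns before reaching for an ML model, since r/loseit posts tend to use a fairly regular CW/GW/SW shorthand. A regex sketch; the patterns and the example post are assumptions, not a tested extractor:

```python
import re

post = "30F, 5'6\" SW: 210 lbs CW: 185 lbs GW: 150 lbs, down 25 lbs so far!"

# Starting/current/goal weight shorthand, e.g. "CW: 185 lbs" or "GW 150".
weight_pattern = re.compile(r"\b(SW|CW|GW)\s*:?\s*(\d{2,3}(?:\.\d)?)\s*(lbs?|kg)?", re.I)
height_pattern = re.compile(r"\b(\d)\s*'\s*(\d{1,2})\s*\"?")  # feet'inches

weights = {m.group(1).upper(): float(m.group(2)) for m in weight_pattern.finditer(post)}
height = height_pattern.search(post)

print(weights)  # {'SW': 210.0, 'CW': 185.0, 'GW': 150.0}
if height:
    print(f"height: {height.group(1)}'{height.group(2)}\"")
if "SW" in weights and "CW" in weights:
    print("change:", weights["SW"] - weights["CW"])  # 25.0
```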
LanguageTechnology | Best algorithm for grammar checking | From a computational linguistics perspective, what is currently the highest-performing algorithm for identifying grammar mistakes in a language, or for identifying them and also suggesting corrections?
Thank you | 0.72 | t3_szl2v9 | 1,645,632,650 |
LanguageTechnology | What state-of-the-art NLP systems utilize bayesian modeling? | Additionally, for which tasks is bayesian modeling more useful? | 1 | t3_szl240 | 1,645,632,597 |
LanguageTechnology | JS library for tagging words emotion in text | Hello everyone.
I found many JavaScript libraries for "sentiment analysis", but they only seem to provide some kind of general positivity/negativity "score". Instead, I'm looking for a more refined library that can tag each word with more specific emotions (for example, anxiety, excitement, etc., not just a numeric score).
Is there such a thing? Is it also called "sentiment analysis" or is there a more appropriate term that I should use to search for that? | 1 | t3_szc20r | 1,645,603,442 |