sub | title | selftext | upvote_ratio | id | created_utc
---|---|---|---|---|---
LanguageTechnology | Searching participants for art project about AI | Hi,
I’m part of an art group from Switzerland currently studying at HSLU Design & Arts ([https://www.hslu.ch/de-ch/design-kunst/studium/bachelor/camera-arts/](https://www.hslu.ch/de-ch/design-kunst/studium/bachelor/camera-arts/)).
The group consists of:
Karim Beji ([https://www.instagram.com/karimbeji_/](https://www.instagram.com/karimbeji_/) [https://karimbeji.ch/](https://karimbeji.ch/))
Emanuel Bohnenblust ([https://www.instagram.com/e.bohnenblust/](https://www.instagram.com/e.bohnenblust/))
Lea Karabash ([https://www.instagram.com/leakarabashian/](https://www.instagram.com/leakarabashian/))
Yen Shih-hsuan ([https://www.instagram.com/shixuan.yan/](https://www.instagram.com/shixuan.yan/) [http://syen.hfk-bremen.de/](http://syen.hfk-bremen.de/))
At the moment, we are working on a project exploring whether AI can augment human happiness. To answer this question, we are mainly working with chatbots. The end result will be an exhibition at the end of March.
For that exhibition, we want to conduct a trial in which people from all over the world chat with a chatbot, to find out if and how it improves the participants' mood.
We would give you access to a GPT-3 (OpenAI) chatbot and ask you to a) record yourself through a webcam (laptop) while you are chatting and b) simultaneously screen record the chat window.
In the exhibition we would have a) a book with all the chats and b) small videos with your faces (webcam) to assess your mood.
We would have a Zoom meeting beforehand to discuss everything.
Looking forward to your message! | 1 | t3_sgypv1 | 1,643,626,972 |
LanguageTechnology | Structure identified Named Entities in a meaningful manner | I have trained a spaCy NER model which can identify Name, Address, Institute, Degree, Skill, Company, Designation, School, Society and Location in a resume. Now I want to structure the recognized entities so that the CV owner's name, address, skills and other details are grouped together, and the referees' names, addresses and designations are grouped separately. Is there a way to do it? | 1 | t3_sgxqg3 | 1,643,623,169 |
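One possible approach, sketched here under the assumption that the referee details sit under a heading such as "Referees" or "References": split the text at that heading and run the trained NER model over each part separately (the model name below is hypothetical).

```python
import re
import spacy

nlp = spacy.load("my_resume_ner_model")  # hypothetical trained NER pipeline

def split_owner_and_referees(resume_text):
    # Assumes referee details follow a heading like "Referees"/"References"
    parts = re.split(r"(?i)\breferees?\b|\breferences?\b", resume_text, maxsplit=1)
    owner_ents = [(e.text, e.label_) for e in nlp(parts[0]).ents]
    referee_ents = ([(e.text, e.label_) for e in nlp(parts[1]).ents]
                    if len(parts) > 1 else [])
    return owner_ents, referee_ents
```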
LanguageTechnology | academic ethics issues in NLP | Hi all. I was wondering what are the most interesting academic ethics issues we have to account for while researching in this area? | 0.92 | t3_sg4teu | 1,643,531,903 |
LanguageTechnology | Using fasttext to load word embeddings | I want to use a pre-trained fastText model to load word embeddings for all words (including out-of-vocabulary words) in English. I have tried to read up on how to do this, and so far haven't been able to get it to work.
I would appreciate it if someone could give me the code for it or point me to some tutorial that would help me do it.
I am from a statistics background and am a new Python user, trying to navigate through everything to get things done! | 1 | t3_sg20z5 | 1,643,521,198 |
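A minimal sketch with gensim, assuming one of the official pre-trained binaries from fasttext.cc (e.g. cc.en.300.bin): loading the native .bin keeps the subword n-grams, which is what lets fastText build vectors for out-of-vocabulary words.

```python
from gensim.models.fasttext import load_facebook_vectors

# Path assumes the official English model downloaded from fasttext.cc
wv = load_facebook_vectors("cc.en.300.bin")

print(wv["language"].shape)      # (300,) -- in-vocabulary word
print(wv["blargleflarp"].shape)  # (300,) -- OOV word, composed from subwords
```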
LanguageTechnology | Using embedding model in C++ app | I am looking for a good way to use pre-trained embedding models (from huggingface or tensorflow hub) in my C++ application running on a local CPU.
My solution so far: I am using a compiled Tensorflow C DLL in combination with cppflow (https://github.com/serizba/cppflow). However, I get problems when I take models which use operations from the tensorflow_text python module since I don’t know how to get their C++ API.
Has somebody experience in doing so? Or in general, has someone used local embedding models in a C++ app before? | 1 | t3_sff0hf | 1,643,450,476 |
LanguageTechnology | Replicate WebNLG 2017 challenge with OpenNMT-tf | Hello guys, I'm a data science student trying to replicate the WebNLG 2017 challenge with OpenNMT-tf.
I have already performed the same challenge with OpenNMT-py and everything went well.
When using the TensorFlow version, some doubts arose:
* how to build vocabularies from the webnlg_baseline_input.py output files: ['train-webnlg-all-delex.triple', 'train-webnlg-all-delex.lex', 'dev-webnlg-all-delex.triple', 'dev-webnlg-all-delex.lex'], since in the TensorFlow version a transformation step into a BPE file is required;
* how to build, in OpenNMT-tf, the default model of OpenNMT-py (an LSTM with 2 layers of 500 units).
I tried to follow this notebook but, given the doubts expressed above, the results were not the same.
([https://github.com/Parkchanjun/OpenNMT-Colab-Tutorial/blob/master/OpenNMT_Tensorflow_Tutorial.ipynb](https://github.com/Parkchanjun/OpenNMT-Colab-Tutorial/blob/master/OpenNMT_Tensorflow_Tutorial.ipynb))
How can I do this? Thanks all. | 1 | t3_sesv7z | 1,643,382,418 |
LanguageTechnology | Confusion about BERT masking? | I am trying to understand the masking in BERT model.
I am confused by the following line from the paper:
>The training data generator chooses 15% of the token positions at random for prediction. If the i-th token is chosen, we replace the i-th token with (1) the [MASK] token 80% of the time (2) a random token 10% of the time (3) the unchanged i-th token 10% of the time
Point 3 says the token is left unchanged (i.e. unmasked) 10% of the time. If we have to use the original token for 10% of the 15% selected tokens, then why do we need to "mask" them at all?
This is made more concrete in the **Attempt 4: Masked LM with Random Words and Unmasked Words** section of [this guide](https://mlwhiz.com/blog/2021/07/24/bert-sketches/).
The guide says:
>So if we have a sequence of length 500, we will mask 75 tokens (15% of 500), and of those 75 tokens, 7 tokens (10% of 75) would be replaced by random words, and 7 tokens (10% of 75) will be used as is.
So if we have to use 7 tokens as they are, then why were they masked in the first place? | 0.92 | t3_ses9wt | 1,643,380,762 |
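For concreteness, a rough sketch of the 80/10/10 rule (not BERT's exact implementation). The key point: all 15% of selected positions are prediction targets, including the unchanged 10%; those exist so the model cannot learn that the observed token at a target position is always wrong, which keeps its representations useful for real tokens too.

```python
import random

def mask_for_mlm(tokens, vocab, select_prob=0.15):
    """BERT-style masking: 15% of positions become prediction targets;
    of those, 80% -> [MASK], 10% -> random token, 10% -> left unchanged.
    The unchanged ones are still predicted, so they still contribute loss."""
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < select_prob:
            labels[i] = tok                       # the model must predict this
            r = random.random()
            if r < 0.8:
                inputs[i] = "[MASK]"
            elif r < 0.9:
                inputs[i] = random.choice(vocab)  # random replacement
            # else: keep the original token unchanged
    return inputs, labels
```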
LanguageTechnology | Has anyone ever used spacy with a fit predict structure | Spacy is an industry standard, and I have been using it ever since I've been in the field. One thing I have always wanted is for spacy to have fit and predict methods, much like sklearn. I understand spacy has its own forms, like the evaluate method. Of course, it probably won't be hard to build a fit predict method based wrapper around spacy, but **I am wondering if anyone has ever come across any such wrapper?**
Benefit of such a wrapper would be that when building retraining pipelines with models from various libraries, most of which use fit and predict, being able to call fit and predict on a spacy model would simplify things. | 0.75 | t3_se6nzq | 1,643,312,255 |
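A minimal sketch of such a wrapper, assuming a trained spaCy textcat pipeline (the model name is hypothetical); fit() would have to wrap spaCy's own training loop or config-driven CLI, which is the part that doesn't map cleanly onto sklearn:

```python
import spacy
from sklearn.base import BaseEstimator, ClassifierMixin

class SpacyTextcat(BaseEstimator, ClassifierMixin):
    """Predict-side sklearn adapter around a trained spaCy textcat model."""

    def __init__(self, model="my_textcat_model"):  # hypothetical package/path
        self.model = model
        self.nlp = spacy.load(model)

    def predict(self, texts):
        # Pick the highest-scoring category for each document
        return [max(doc.cats, key=doc.cats.get)
                for doc in self.nlp.pipe(texts)]
```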
LanguageTechnology | Extracting information about entities automatically | I came across an interesting problem. Given the two sentences "Meet Harry and David and take them to London and Athens. These two cities are worth exploring.", the second sentence tells us that the two entities are cities. Is there any method to assign these two entities to the category *city*? I am mainly interested in the range of possible approaches, from rule-based methods to deep learning based methods. | 1 | t3_se48zj | 1,643,305,999 |
LanguageTechnology | Benchmarking NLP Datasets | Hello Everyone,
I am a newbie in NLP research. My question is: how should we benchmark a new language dataset/corpus (e.g. a dialogue dataset or Q/A dataset) when there is no publicly available dataset for that particular language? Also, what are the possible directions for performing evaluation on the newly prepared dataset? Suggestions needed, please. | 0.76 | t3_sdurp5 | 1,643,276,829 |
LanguageTechnology | [R] ML & NLP Research Highlights of 2021 - by Sebastian Ruder | [ML and NLP Research Highlights of 2021](https://ruder.io/ml-highlights-2021) by Sebastian Ruder, currently a research scientist at Google in London, formerly DeepMind: Universal Models, Massive Multi-task Learning, Beyond the Transformer (aka cross-attention), Prompting, Efficient Methods, Benchmarking, Conditional Image Generation, ML for Science, Code Synthesis, Bias, Retrieval Augmentation, Token-free Models, Temporal Adaptation, Importance of Data and Meta-learning. | 0.98 | t3_sds8q8 | 1,643,266,290 |
LanguageTechnology | OpenAI Releases Three Embedding Model Families To Optimize Text Search, Code Search and Text Similarity | In the last few decades, neural networks have been used for a wide range of tasks, including image segmentation, natural language processing, and time-series forecasting.
One promising use of deep neural networks is embedding, a method for representing discrete variables as continuous vectors. An embedding is a low-dimensional space into which high-dimensional vectors can be translated, making it easy for computers to understand the relationships between those concepts. Embeddings that are numerically similar are also semantically similar. Word embeddings for machine translation and entity embeddings for categorical data are two applications of this approach. [**Continue Reading**](https://www.marktechpost.com/2022/01/26/openai-releases-three-embedding-model-families-to-optimize-text-search-code-search-and-text-similarity/)
Paper: https://arxiv.org/abs/2201.10005
Documentation: https://beta.openai.com/docs/guides/embeddings | 0.9 | t3_sdbq8w | 1,643,219,157 |
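A hedged sketch of calling one of these models through the OpenAI Python client; the engine name follows the naming pattern in the linked docs at the time and should be treated as an assumption:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Embedding.create(
    input="The food was delicious and the waiter was friendly.",
    engine="text-similarity-babbage-001",  # assumed embedding-model name
)
vector = response["data"][0]["embedding"]
print(len(vector))  # dimensionality of the returned embedding
```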
LanguageTechnology | Dependency parser from scratch in Python | I’d like to challenge myself to write my own dependency parser for natural language (English) in Python.
I’m picturing taking in the sentence one word at a time and somehow working backwards to figure out the tree structure of the sentence.
Of course, it should start with tokenization. Perhaps part-of-speech tagging is a necessary next step, to group certain parts of speech that never dislocate from their complements, like "the", or other words with predictable behavior, like "and" and other conjunctions?
Like the game "Mastermind", one can work backward layer upon layer to deduce the original structure of the sentence.
However, maybe I need effective sentence segmentation for this to work? I'm wondering how periods will affect this procedure. I could ignore them and hope the dependency parse still works. For example, "and" never appears at the end of a sentence.
Another idea is to design a neural network myself rather than using a library like spaCy. I just need to know what architecture is optimal and to practice training it a bit.
Anyone have any recommendations on this?
Thanks very much | 1 | t3_sd3sh6 | 1,643,196,266 |
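For the working-backwards idea, a transition-based (arc-standard) skeleton is one common starting point; this is only a sketch, with the decision function left open so it can be rules first and a small neural network later:

```python
def arc_standard_parse(words, choose_action):
    """choose_action maps the current (stack, buffer, words) state to one
    of 'SHIFT', 'LEFT' or 'RIGHT'; rules or a classifier both fit here."""
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        action = choose_action(stack, buffer, words)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif action == "LEFT" and len(stack) >= 2:
            arcs.append((stack[-1], stack.pop(-2)))  # (head, dependent)
        elif action == "RIGHT" and len(stack) >= 2:
            dep = stack.pop()
            arcs.append((stack[-1], dep))
        else:
            break  # illegal move proposed; bail out of the sketch
    return arcs  # list of (head index, dependent index) edges
```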
LanguageTechnology | Import part of Spacy library | Does anybody know if you can just import part of the Spacy library you need?
I find “import spacy” to be the slowest part of using spacy.
What takes so long to load? All the pipeline scripts or something? Because loading the language model with spacy.load() and constructing the Doc object with nlp(text) are both faster than the initial import.
I’d like to just load the pipelines I need or something from the module, like “from spacy import …”
Does anyone know of this is possible?
Thanks very much | 1 | t3_sd2kmw | 1,643,191,489 |
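As far as I know there is no supported way to import only part of the spacy package, but you can skip loading pipeline components you don't need, which is usually the slow part after the import itself; a sketch using spaCy 3's exclude parameter:

```python
import spacy

# Load only what you need; excluded components are never constructed
nlp = spacy.load("en_core_web_sm", exclude=["parser", "ner", "lemmatizer"])
doc = nlp("Loading fewer components makes spacy.load() noticeably faster.")
print([t.tag_ for t in doc])
```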
LanguageTechnology | Visual Intro to Basic Semantic Search | nan | 1 | t3_sd1gbu | 1,643,186,757 |
LanguageTechnology | Search query suggestion/autocomplete | I’m planning to add this feature to a search bar. Any starting package/model/tool suggestion? | 0.76 | t3_sczm2z | 1,643,179,398 |
LanguageTechnology | Question regarding the denominators in Kneser-Ney Smoothing | Hi everyone!
I am currently studying smoothing techniques, specifically Kneser-Ney smoothing. I understand that it helps handle the case where the next word hasn't appeared in the given context before. For example, the corpus could have a non-zero trigram count for 'This is a', but no occurrence of the 4-gram 'This is a car'.
The count C(This is a) is captured in the denominator of the lambda term as well, and this lambda term is multiplied by the recursion term. My question is: what if that particular count is actually zero? I hope the following mini example makes my question clearer -
Corpus:
<s> <s> You are my friend </s> </s>
<s> <s> They are my enemies </s> </s>
<s> <s> I have friends and enemies </s> </s>
Say we would like to find the probability of the trigram 'are you friend', or P(friend | are you). As per the formulation given on page 9 of the document:
[https://u.cs.biu.ac.il/~yogo/courses/mt2013/papers/chen-goodman-99.pdf](https://u.cs.biu.ac.il/~yogo/courses/mt2013/papers/chen-goodman-99.pdf)
The denominator of two terms consists of the count of the bigram 'are you'. But from the corpus this is zero, and at the same time, each of the individual words 'are' and 'you' aren't UNK, as their unigram counts are 1 and 2 respectively. So how does the recursion proceed now, since we cannot divide by zero?
Thank you! | 0.81 | t3_scyef9 | 1,643,175,048 |
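For reference, the interpolated Kneser-Ney recursion for the trigram case, reconstructed here from the paper's formulation (double-check against the original):

```latex
p_{KN}(w_i \mid w_{i-2} w_{i-1}) =
  \frac{\max\{c(w_{i-2} w_{i-1} w_i) - D,\ 0\}}{c(w_{i-2} w_{i-1})}
  + \frac{D \cdot N_{1+}(w_{i-2} w_{i-1}\,\bullet)}{c(w_{i-2} w_{i-1})}
  \, p_{KN}(w_i \mid w_{i-1})
```

Both denominators are the context count c(are you), which is exactly the zero in question. Implementations typically treat an unseen context as an immediate backoff: if c(w_{i-2} w_{i-1}) = 0, they return the lower-order estimate p_KN(w_i | w_{i-1}) directly instead of evaluating the 0/0 terms.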
LanguageTechnology | Interesting NLP project ideas | Hello, I'm a postgraduate student and I have a NLP project that I have to come up with and do (as part of a course, not a MSc thesis). What are some really interesting ideas that you could recommend?
An example of an interesting and good project is: [https://www.ucl.ac.uk/computer-science/news/2019/oct/msc-machine-learning-paper-receives-international-acclaim](https://www.ucl.ac.uk/computer-science/news/2019/oct/msc-machine-learning-paper-receives-international-acclaim) . | 0.62 | t3_scfo4t | 1,643,122,856 |
LanguageTechnology | How to create a question answering model that can trigger specific actions? | I want to create a model that can trigger specific actions based on the input given to the model. For example, if the user asks where the nearest petrol pump is, the model will call the Google Maps API and calculate the distance. | 0.92 | t3_sccj0j | 1,643,113,459 |
LanguageTechnology | How to tackle the problem of Dangling Modifier (Help Needed) | Hello geeks, I am trying to solve the problem of dangling modifiers and have not been able to come up with a solution; to put it better, I don't know where to start. If you have any solutions or pointers to share, it would be really helpful.
Dangling Modifier Examples -
1. Orig - Fumbling in her purse, the keys could not be found.
2. Modified - Fumbling in her purse, she could not find the keys.
3. Orig - Having injured his dominant hand, it was difficult to write the exam.
4. Modified - Having injured his dominant hand, John had difficulty writing the exam.
Thank you. | 0.63 | t3_sc54sq | 1,643,085,110 |
LanguageTechnology | Is there a way I can see connections between two senses in Wordnet? | For example, let's say I have the first sense of the verb "go" and the first sense of the verb "walk". I want to see the connection between these two words. In other words, if I start with go#1 (v), how can I arrive at walk#1 (v)? | 0.72 | t3_sc2u7d | 1,643,078,132 |
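A sketch with NLTK's WordNet interface: you can inspect the shared ancestor and the shortest connecting path between the two synsets (the go.v.01 / walk.v.01 names follow NLTK's sense-numbering convention):

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

go = wn.synset("go.v.01")
walk = wn.synset("walk.v.01")

print(go.lowest_common_hypernyms(walk))  # shared ancestor synset(s)
print(go.shortest_path_distance(walk))   # edges between the senses (or None)
print(go.path_similarity(walk))          # 1 / (distance + 1), or None
```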
LanguageTechnology | How Alexa learned Arabic | nan | 1 | t3_sbs4lj | 1,643,047,091 |
LanguageTechnology | Is there no standard train/dev/test split for the Quora Question Pairs dataset of duplicate questions, *with labels for all splits*? | QQP is a dataset of duplicate and non-duplicate question pairs from Quora. I think it was originally developed as part of a Kaggle competition:
https://www.kaggle.com/c/quora-question-pairs
The competition released a training set with labels, and a test set without labels. QQP has subsequently been used in many papers developing new architectures for document similarity and duplicate detection, but unfortunately, as far as I can tell, there is no standard train/dev/test split of the dataset *that has labels for each of train, dev, and test*, and therefore people make up their own splits on the Kaggle training set. Am I mistaken? Is there a widely agreed on train/dev/split of this dataset with labels for each split? | 0.76 | t3_sbpmnj | 1,643,040,605 |
LanguageTechnology | I made a tutorial on how to do Speech Recognition with Kaldi! | Hey everyone,
Kaldi is a really powerful toolkit for ASR and related NLP tasks, but I've found that the learning curve is a bit steep.
[**Here's**](https://www.assemblyai.com/blog/kaldi-speech-recognition-for-beginners-a-simple-tutorial/) **a tutorial I made that takes you through installation and transcription using pre-trained models**, but the cool part is that you can decide how advanced you want it to be!
Included are Python scripts to automate the entire process, so you can generate transcriptions in just a few lines of code, but I also dive into the code itself to explain what's going on under the hood!
I'd love to hear any thoughts and feedback, or future topics you want to see covered! | 1 | t3_sbnic5 | 1,643,034,696 |
LanguageTechnology | How to test statistical significance on text data? | So, I was in an interview and I was asked so many questions about statistical details on text data. For example
1. How would you sample million sentences from billions of sentences? What strategies will you use for sampling?
2. Having sampled, how would you determine that the sampled data follows the actual data distribution? (In a nutshell, how would you determine whether two text data distributions are similar or not?)
A follow-up to these questions was: when would you decide to re-train your model? (Yet again, how would you determine whether the data distribution has changed?)
Now I am confused about how to perform such statistical analysis over text data. I have an understanding of DL approaches within NLP, but stats is something that bugs me a lot during interviews. (I have worked much less in stats than in actual model building.)
Please advise me on how to solve the above-mentioned questions, as well as where I should start learning stats for such questions. It would be very helpful. | 0.96 | t3_sbhz36 | 1,643,015,375 |
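For question 2, one simple hedged recipe is to compare unigram distributions of the two samples with a symmetric divergence; a sketch using the Jensen-Shannon distance from SciPy (0 means identical distributions, larger means more drift):

```python
from collections import Counter

import numpy as np
from scipy.spatial.distance import jensenshannon

def unigram_dist(texts, vocab):
    counts = Counter(w for t in texts for w in t.split())
    total = sum(counts[w] for w in vocab) or 1
    return np.array([counts[w] / total for w in vocab])

def distribution_shift(sample_a, sample_b):
    vocab = sorted({w for t in sample_a + sample_b for w in t.split()})
    return jensenshannon(unigram_dist(sample_a, vocab),
                         unigram_dist(sample_b, vocab))
```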
LanguageTechnology | First foray into NLP at work, need feedback on the workflow. | Hi everyone, could use some input/thoughts on an NLP workflow I am helping to design based on a pretty unique(?) situation .
Essentially this is a binary classification problem; we have around 1000 documents and, because of the sheer messiness of the text, a team has provided us with the key phrases from these documents that denote whether a document should be a 'Yes' (as opposed to labeling the entire document itself, if that makes sense).
Based on this, I was thinking that instead of using the entire set of documents as a corpus (standard approach), that we would take the collection of key phrases as a corpus instead. Then vectorize it using the standard TF-IDF approach, then use that as the input to the classifier.
Couple questions on this:
1) Firstly, does the construction of the corpus in this manner even make sense, since we're not using the entirety of the document?
2) Is there a way to incorporate BERT embeddings into this? My supervisor is very keen on the cutting edge stuff and wants to incorporate BERT, but best I can tell there's not really a way to do this based on the workflows I have seen. The standard TF-IDF approaches seem different from the BERT workflow, which typically involves fine tuning on my corpus and then making predictions from there.
Thanks! | 1 | t3_sb6xiq | 1,642,979,852 |
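For question 1, the key-phrase corpus drops straight into a standard sklearn pipeline; a minimal sketch (key_phrases and labels are hypothetical variables holding your phrases and the Yes/No labels):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigrams + bigrams
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(key_phrases, labels)
print(clf.predict(["some new key phrase"]))
```

For question 2, one common pattern is to swap the TF-IDF vectors for sentence embeddings (e.g. from the sentence-transformers library) and feed those to the same downstream classifier, rather than fine-tuning BERT end to end.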
LanguageTechnology | Run your own locally-hosted translation service with a couple lines of code | nan | 0.94 | t3_sat7e6 | 1,642,942,975 |
LanguageTechnology | NLP architecture for paragraph extraction based on fixed question | Hi, I am looking for leads on how to tackle a NLP problem.
In short: I have 5 fixed (unchanging) questions, for which I extracted 160 answers in total from 5 longer text documents (approx. 21 pages of text per doc).
I want to build a sort of question answering model that takes another text document (~20 pages of text input) and suggests the question-answering paragraphs of the document.
During research, the following questions arose:
1. Looking through question answering models, I am not sure whether my text input might be too big?
2. Also, is the problem too "supervised" for classical question answering approaches?
3. Is there a way to transfer-learn the supervised question answers from the documents into existing models? Any experiences with similar approaches?
Thank you all, appreciate your feedback. | 1 | t3_saqjvi | 1,642,932,846 |
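Since the questions are fixed and the answers are labeled paragraphs, one lightweight baseline before any fine-tuning is embedding-based retrieval; a sketch with sentence-transformers (the model name is a common default, and split_into_paragraphs is a hypothetical helper):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

paragraphs = split_into_paragraphs(document_text)  # hypothetical helper
para_emb = model.encode(paragraphs, convert_to_tensor=True)
q_emb = model.encode("fixed question number 1", convert_to_tensor=True)

scores = util.cos_sim(q_emb, para_emb)[0]
best = scores.topk(3)  # suggest the 3 most answer-like paragraphs
for score, idx in zip(best.values, best.indices):
    print(float(score), paragraphs[int(idx)][:80])
```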
LanguageTechnology | Personalization for semantic search using vector db from another organization | I have a question around personalization for semantic search- when using semantic search api through a third party, for instance pinecone or any vector db company, how is personalization possible? Do you also pass the last 10 session actions/ or info about the user through the api? | 1 | t3_saiq21 | 1,642,904,455 |
LanguageTechnology | Authorship Attribution dataset for short texts. | Hi, I'm looking for an Authorship Attribution dataset with small-to-medium texts (mostly social media excerpts, if possible). I've looked everywhere but couldn't find any; I found a great dataset for large texts (blogs) but none for small texts. I'd like to find out whether one exists so I hopefully don't have to scrape it myself. | 0.75 | t3_sac5np | 1,642,885,609 |
LanguageTechnology | [P] Finding and correcting text classification label errors with cleanlab and Rubrix | https://rubrix.readthedocs.io/en/master/tutorials/find_label_errors.html | nan | 0.88 | t3_sac14a | 1,642,885,272 |
LanguageTechnology | Starting to learn NLP, need some directions. (I'm using Catalyst NLP, inspired by spaCy) | I'm currently in the process of learning NLP. I am using Catalyst with C#.
I was able to run the sample programs, and they could determine whether a word is a noun, adjective, etc. But I can't find any sample for what I need.
Here is a summary of what I would like to achieve.
I would like to extract certain information from a sentence. Let's say I have the following texts:
"Sally ate an orange this morning."
Or
"Sally is hiding behind the cabinet and she is eating an orange."
How do I use NLP to extract what Sally ate? | 0.5 | t3_saas64 | 1,642,881,778 |
LanguageTechnology | Problem downloading - wikipedia split used for evaluating DPR | Hi all,
So I've been trying to download the wikipedia split that is used for evaluating [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR)
I just pulled up a Google Colab session and followed their instructions to download the dataset, as shown below.
`python data/download_data.py \`
`--resource {key from download_data.py's RESOURCES_MAP} \`
`[optional --output_dir {your location}]`
I don't know why, but for some reason the Colab cell exits during the download (the output shows "^C"), and all I get is a .tmp file.
I believe I should be getting a .tsv file, as instructed here (this is taken from the repo BPR - binary passage retrieval, an improvement over DPR):
[https://github.com/studio-ousia/bpr#:~:text=python%20data/download_data.py%20%2D%2Dresource%20data.wikipedia_split.psgs_w100](https://github.com/studio-ousia/bpr#:~:text=python%20data/download_data.py%20%2D%2Dresource%20data.wikipedia_split.psgs_w100)
I am attaching the .ipynb file here too for convenience.
[https://colab.research.google.com/drive/1OFAV9yUO0khCdQCoaa6ukcZC-7WVvvgt?usp=sharing](https://colab.research.google.com/drive/1OFAV9yUO0khCdQCoaa6ukcZC-7WVvvgt?usp=sharing)
Anyone know what I'm doing wrong?
Thanks. | 1 | t3_sa9n1f | 1,642,878,603 |
LanguageTechnology | Google AI Introduces a Method Called Task-Level Mixture-of-Experts (TaskMoE), that Takes Advantage of the Quality Gains of Model Scaling While Still Being Efficient to Serve | Large-scale language model scaling has resulted in considerable quality gains in natural language understanding (T5), generation (GPT-3), and multilingual neural machine translation (M4). One typical method for creating a more extensive model is to increase the depth (number of layers) and breadth (layer dimensionality), essentially expanding the network’s existing dimensions. Such dense models take an input sequence (split into smaller components known as tokens) and route each token through the whole network, activating every layer and parameter. While these big, dense models have shown cutting-edge outcomes on various natural language processing (NLP) applications, their training costs rise linearly with model size.
Building sparsely activated models based on a mixture of experts (MoE) (e.g., GShard-M4 or GLaM), where each token supplied to the network follows a distinct subnetwork by bypassing some of the model parameters, is an alternative and more common technique. Small router networks, trained along with the rest of the model, decide how to distribute input tokens to each subnetwork (the "experts"). This enables researchers to increase the model size (and hence performance) without increasing training costs proportionally. [***Continue Reading***](https://www.marktechpost.com/2022/01/21/google-ai-introduces-a-method-called-task-level-mixture-of-experts-taskmoe-that-takes-advantage-of-the-quality-gains-of-model-scaling-while-still-being-efficient-to-serve/)
Paper: [https://arxiv.org/pdf/2110.03742.pdf](https://arxiv.org/pdf/2110.03742.pdf) | 0.92 | t3_s9sjqz | 1,642,820,797 |
LanguageTechnology | Questions from a clueless student | Hi everybody,
I've become interested in NLP and would like to get started on preparing for some master's courses. My background is modern languages, and I've been teaching myself Python alongside my internship. I'd love to work in machine translation, or even build my own machine translation engine. What advice do you all have for a clueless boi on getting started in NLP? Thank you! | 0.72 | t3_s99cke | 1,642,766,992 |
LanguageTechnology | spacy ner introduction and usage ( english) | nan | 0.67 | t3_s8s3mx | 1,642,710,478 |
LanguageTechnology | Item2Vec - Word2Vec from gensim wrapped as sklearn estimator for GridSearchCV | Prod2Vec or Item2Vec produces embedding for items in a latent space. The method is capable of inferring item-item relations even when user information is not available. It's based on NLP model Word2Vec. [Click here](https://arxiv.org/pdf/1603.04259.pdf#:~:text=Inspired%20by%20SGNS%2C%20we%20describe,user%20information%20is%20not%20available.) to know more
This project provide a class that encapsulates Item2Vec model ([word2vec](https://radimrehurek.com/gensim/models/word2vec.html) gensim model) as a [sklearn estimator](https://scikit-learn.org/stable/developers/develop.html).
It allows the simple and efficient use of the Item2Vec model by providing :
* metric to measure the performance of the model ([Precision@K](https://arxiv.org/pdf/0704.3359.pdf))
* compatibility with [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.html) to find the optimal hyperparameters
[https://github.com/MathieuCayssol/Item2Vec](https://github.com/MathieuCayssol/Item2Vec) | 0.93 | t3_s8o288 | 1,642,699,387 |
LanguageTechnology | Questions about BigBird | Hello, people. I still have some questions after reading the BigBird paper ([https://arxiv.org/pdf/2007.14062v2.pdf](https://arxiv.org/pdf/2007.14062v2.pdf)) and would be happy if some BigBird specialists could help me understand this model better.
1. Is distribution of random attention (Figure 1 (a)) fixed from advance for all inputs, or it somehow can be different for different inputs even on the same head?
2. In BIGBIRD-ETC, do they add some additional global tokens, aside from [CLS]?
3. In BIGBIRD-ITC, how is the subset of tokens for global attention chosen?
4. Why is infinite precision necessary for sparse transformer to be Turing complete?
Thank you. | 0.86 | t3_s8lxk7 | 1,642,693,719 |
LanguageTechnology | Detecting the Presence of an Object in a Sentence | Is there any way to differentiate between the absence or inclusion of an object in a sentence? For example: there is not a cat in my yard, the cat disappeared from my yard, a cat does not exist in my yard, etc. versus there is a cat in my yard, the cat appeared in my yard, a cat exists in my yard.
Any help would be greatly appreciated! | 0.57 | t3_s8j2qk | 1,642,685,503 |
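A partial rule-based sketch with spaCy: look for the noun, then check whether its governing verb carries an explicit negation; inherently negative verbs like "disappeared" still need a hand-made list, started here as a tiny example:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
NEGATIVE_VERBS = {"disappear", "vanish"}  # extend as needed

def object_negated(text, lemma):
    """True if `lemma` is governed by a negated or inherently negative verb."""
    for tok in nlp(text):
        if tok.lemma_ == lemma:
            head = tok.head
            if head.lemma_ in NEGATIVE_VERBS:
                return True
            if any(child.dep_ == "neg" for child in head.children):
                return True
    return False

print(object_negated("There is not a cat in my yard.", "cat"))  # True
print(object_negated("The cat appeared in my yard.", "cat"))    # False
```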
LanguageTechnology | The Structure of Language | Noam Chomsky | nan | 0.4 | t3_s85d57 | 1,642,638,974 |
LanguageTechnology | [D] Did you also feel that Snorkel's LabelModel is really slow? | Has anyone here used Snorkel AI's LabelModel for automatically labeling text? Have you found it to be super slow? | 0.9 | t3_s7vype | 1,642,614,770 |
LanguageTechnology | Seeking string "Readability" metric. | Hello fellow enthusiasts,
I have a corpus of 150k documents, and their respective OCR outputs.
I'd like to assign a Readability score to each document, is there a metric out there for something like that?
In retrospect, during my OCR extraction, which took almost a month of runtime, I *could* have extracted an OCR-accuracy score along with my strings. I'd like to find an alternative solution instead of re-running it. Knowledge for next time, anyway...
I'm open to all thoughts and considerations. | 1 | t3_s7r2pj | 1,642,601,826 |
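Two hedged options: classic readability formulas (via the textstat package) measure reading difficulty rather than OCR garbledness, so a plain dictionary hit rate may track OCR quality more directly; a sketch of both, where the word-list path is an assumption:

```python
import re

import textstat  # pip install textstat

english_words = {w.strip().lower() for w in open("words.txt")}  # any word list

def scores(text):
    tokens = re.findall(r"[A-Za-z]+", text)
    hit_rate = sum(t.lower() in english_words for t in tokens) / max(len(tokens), 1)
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "dictionary_hit_rate": hit_rate,  # rough proxy for OCR accuracy
    }
```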
LanguageTechnology | Simplest keyword extractor | I’ve been trying to do some basic keyword extraction and finding it harder than expected.
KeyBERT seems good, but it requires a powerful GPU to be usably fast. That's possible with AWS, but it means more setup.
I just tried PyTextRank, and I was surprised at the quality of the output - I wouldn’t say it was perfect either. Maybe I should set a threshold, like choose the top 200 ranked keywords? It’s fine if we exclude potential good keywords just to have a smaller list of good ones.
Here’s a good article about 7 different methods, which is helpful -
https://towardsdatascience.com/keyword-extraction-a-benchmark-of-7-algorithms-in-python-8a905326d93f
In theory, Spacy and BERT seem like the best options but they’re both a little complex.
I think KW extraction really only needs a few layers, or as spaCy would call them, pipeline components.
1. accurate tokenization of words and punctuation symbols
2. accurate recognition of multi-word expressions (phrases) - think of it as “chunking”
3. Strong assessment of keyword “candidacy” for each MWE (could be purely rule-based, corpus based, or machine learning based)
Of course, a good algorithm can often skip steps. Like BERT is so smart it doesn’t need anything but the input text.
Does anyone know of a simplest way to run a fast, effective keyword extraction?
I’m talking 200 keywords in one second on a fast CPU.
Thanks very much | 0.91 | t3_s7qml8 | 1,642,600,495 |
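For the PyTextRank route with a threshold, a minimal sketch (spaCy 3 plus pytextrank 3, which registers itself as a pipeline component on import):

```python
import spacy
import pytextrank  # pip install pytextrank; registers the "textrank" factory

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("textrank")

doc = nlp(text)  # `text` is your input document
top_keywords = [(p.text, p.rank) for p in doc._.phrases[:200]]  # top-200 cut
```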
LanguageTechnology | Create a good prompt for an OpenAI text-generation model | Hi guys, I'm a data science student and I'm trying to build a new dataset using the OpenAI framework.
My idea is to use 5000 phrases and triples to train a data-to-text OpenAI model which, given a triple, generates text.
Reading the OpenAI documentation, I found this script for reference:
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
response = openai.Completion.create(
    engine="davinci",
    prompt="English: I do not speak French.\nFrench: Je ne parle pas français.\n\nEnglish: See you later!\nFrench: À tout à l'heure!\n\nEnglish: Where is a good restaurant?\nFrench: Où est un bon restaurant?\n\nEnglish: What rooms do you have available?\nFrench: Quelles chambres avez-vous de disponible?\n\nEnglish: What time is breakfast?\nFrench:",
    temperature=0.5,
    max_tokens=100,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n"]
)
I would like the above script to follow this structure:
prompt = "Triples: subject, predicate, object.\nEnglish: generated text"
import os
import openai

openai.api_key = "mykey"
response = openai.Completion.create(
    engine="ada",
    prompt="Triples: Aarhus_Airport, cityServed, Aarhus_Denmark.\nEnglish: The Aarhus is the airport of Aarhus, Denmark.\n"
           "Triples: Alan_Bean, almaMater, UT_Austin_B.S._1955.\nEnglish: The Alma Mater of Alan Bean is UT Austin, B.S. 1955.\n"
           "Triples: 103_Colmore_Row, architecturalStyle, Brutalist_architecture.\nEnglish: The architecture style of 103 Colmore Row falls under Brutalist architecture.\n"
           "Triples: The_Phoenix, Fast_food, riverside.\nEnglish:",
    temperature=0.5,
    max_tokens=100,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n"]
)
Expected output from the last triple (The_Phoenix, Fast_food, riverside):
The Phoenix is a fast food place in the riverside area.
For example, given the WebNLG dataset (an XML file) with this structure:
<entry category="Airport" eid="Id1" size="1">
  <originaltripleset>
    <otriple>Aarhus_Airport | cityServed | "Aarhus, Denmark"@en</otriple>
  </originaltripleset>
  <modifiedtripleset>
    <mtriple>Aarhus_Airport | cityServed | "Aarhus, Denmark"</mtriple>
  </modifiedtripleset>
  <lex comment="good" lid="Id1">The Aarhus is the airport of Aarhus, Denmark.</lex>
  <lex comment="good" lid="Id2">Aarhus Airport serves the city of Aarhus, Denmark.</lex>
</entry>
I would like to randomly extract sentences and their triples from this dataset and use them for training. How can I insert them into the "prompt" variable automatically, without writing them by hand?
Thanks! | 0.81 | t3_s7niat | 1,642,589,457 |
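A sketch of one way to do it, using the element names from the XML snippet above (adjust if your file differs): parse the file with ElementTree, collect (triple, text) pairs, and sample a few as the few-shot prompt:

```python
import random
import xml.etree.ElementTree as ET

def build_prompt(xml_path, n_examples=3):
    root = ET.parse(xml_path).getroot()
    pairs = []
    for entry in root.iter("entry"):
        triple = entry.findtext(".//mtriple")  # first modified triple
        text = entry.findtext("lex")           # first lexicalisation
        if triple and text:
            pairs.append((triple.replace(" | ", ", "), text))
    shots = random.sample(pairs, n_examples)
    return "".join(f"Triples: {t}.\nEnglish: {x}\n" for t, x in shots)

prompt = build_prompt("webnlg_train.xml") + "Triples: The_Phoenix, Fast_food, riverside.\nEnglish:"
```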
LanguageTechnology | Rewriting to Fit Author's Style | Hello all,
I'm trying to get smarter about mimicking writing style based on sample input text. The hope is a system like this:
**Inputs:**
* Tagged sample writing/letters/emails/dialogue from the desired author.
* A basic sentence to be rewritten (assuming some word embedding/transformer to abstract its meaning).
**Output:**
* The sentence rewritten in the sample author's writing style.
I'm assuming this is a bit of an ambitious lift and may require some training of my own. Curious if anyone has any insights on stylometry papers. Even something on generative text meant to replicate an author's style could be a helpful starting point.
Thanks! | 1 | t3_s7a0rx | 1,642,545,475 |
LanguageTechnology | Pyarabic: a python package for the Arabic language ( brief description with basic simplified in english) | nan | 0.67 | t3_s79qhb | 1,642,544,731 |
LanguageTechnology | Fine-tuning reader models for Question-Answering | Hi all, I put together some material on [fine-tuning reader models for open-domain question-answering](https://www.pinecone.io/learn/reader-models) (ODQA). ODQA is an increasingly popular approach to building more human/natural language information retrieval tools. Allowing users to store massive amounts of text data, and then search using natural language questions, it is one of the technologies that powers Google search. Reader models are the final step in an ODQA pipeline, allowing us to extract very specific answers to questions.
Let me know what you think, I hope it's useful, thanks! | 1 | t3_s726uh | 1,642,525,369 |
LanguageTechnology | Sentence-transformer BERT model performs worse after fine-tuning | I'm using [symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli](https://huggingface.co/symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli) from HuggingFace. After multiple tries with different batch sizes, epochs and learning rates, and even different unsupervised learning methods such as [this](https://www.sbert.net/examples/unsupervised_learning/TSDAE/README.html), I couldn't get my sentence transformer to perform better than the raw model straight from HuggingFace. I'm not sure what I'm doing wrong. I'm fairly sure there are no bugs in my code since I followed the sentence-transformers documentation almost verbatim.
Background on my task: my dataset consists of a list of sentences (legal articles; around ~300 short sentences), and a person enters a query of say 3-5 sentences, for which I'm supposed to find the "correct" matches in the dataset.
Currently, the base model isn't amazing, but it's also not too bad. Hence, I expected better performance once I fine-tuned it. However, upon fine-tuning, cosine similarity scores all drop, and the fine-tuned model has never made a better prediction (mapping a query to the correct sentence in the dataset) than the original model with no fine-tuning.
I'd like to know why that might be the case and whether that's something that usually happens with NLP models. My dataset is very small, so my guess is that my training parameters were bad? Or is my training data so insignificant that my fine-tuning simply doesn't matter? | 0.94 | t3_s6y0ah | 1,642,513,906 |
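For reference, one common recipe for retrieval-style data is MultipleNegativesRankingLoss over (query, relevant sentence) pairs; a hedged sketch (labeled_pairs is hypothetical). Note that this loss leans on large batches and many pairs, so with roughly 300 sentences it can plausibly underperform the base model, which may be part of what's happening here:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli")
train = [InputExample(texts=[query, matching_article])
         for query, matching_article in labeled_pairs]  # hypothetical pairs
loader = DataLoader(train, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
```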
LanguageTechnology | Unicode error in vectorizing text |
I am trying to vectorize the 20 Newsgroups data using the TensorFlow TextVectorization layer. If I limit the vocab size in the TextVectorization layer to some number, say 10000, it works fine. However, if I preprocess the data or do not set the vocab size, I get UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfe in position 2257: invalid start byte.
My question is: have I done something wrong in preprocessing? Even if I set the vocab size to 10000, with the preprocessing below it won't work. Also, do I need to set the vocab size in `TextVectorization`? The docs say it can have unlimited size.
Here is what I did:
i. Get the list of files:
train_dir_list = []
for i in os.listdir(train_dir):
    f = os.path.join(train_dir, i)
    for j in os.listdir(f):
        train_dir_list.append(os.path.join(f, j))
ii. Create tensorflow data
train_data = tf.data.TextLineDataset(train_dir_list)
iii. Preprocess data
def preprocess(text):
    lower = tf.strings.lower(text)
    # remove emails
    email_removed = tf.strings.regex_replace(lower, "\S*@\S*\s?", "")
    # remove numbers
    number_removed = tf.strings.regex_replace(email_removed, "[0-9]", " ")
    # remove punctuation
    punctuation_removed = tf.strings.regex_replace(number_removed, '[%s]' % re.escape(string.punctuation), " ")
    # remove multiple blank spaces
    multiple_space_removed = tf.strings.reduce_join(tf.strings.split(punctuation_removed), separator=" ")
    return multiple_space_removed
iv. Create the vectorizer. If I remove `standardize=preprocess` and keep `vocab_size` and `sequence_length`, it works fine. But if I use `standardize=preprocess`, either with the same `vocab_size` and `sequence_length` or with the default parameters, it gives the UnicodeDecodeError:
vocab_size = 10000
sequence_length = 300
This works fine:
vectorize_layer = layers.TextVectorization(
    # removed preprocess
    max_tokens=vocab_size,
    output_mode='int',
    output_sequence_length=sequence_length)

This will throw an error:

vectorize_layer = layers.TextVectorization(
    standardize=preprocess,
    max_tokens=vocab_size,
    output_mode='int',
    output_sequence_length=sequence_length)
This will also give an error:
vectorize_layer = layers.TextVectorization()  # default parameters
v. Calling adapt works fine:
vectorize_layer.adapt(train_data.batch(1024))
vi. This is where the error is thrown:
vectorize_layer.get_vocabulary()
Also, when the error occurs, the vocab size is only around 30-40. | 0.67 | t3_s6wvu7 | 1,642,510,480 |
LanguageTechnology | Vocab size for word2vec implementation |
I am trying to implement word2vec for a large corpus, possibly billions of words. I am following the [Word2vec TensorFlow tutorial](https://www.tensorflow.org/tutorials/text/word2vec) as a reference, where they use a vocab size of 4096 and a sequence length of 10. My question is: if I use another corpus with billions of words, should I limit the vocab size to some number like 10000 (and likewise the sequence length), or create vocab entries for all the unique words present in the corpus?
I want to know how gensim and other libraries trained their models on large corpora. Did they limit the size of the vocabulary or train on all the unique words present in the corpus? | 1 | t3_s6qaog | 1,642,485,582 |
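For the gensim side: it does not keep every unique word by default; min_count prunes rare words and max_final_vocab caps the total, so billion-word corpora stay tractable. A sketch with gensim 4 (the corpus path is an assumption):

```python
from gensim.models import Word2Vec

model = Word2Vec(
    corpus_file="corpus.txt",   # assumed path; one sentence per line
    vector_size=300,
    min_count=5,                # drop words seen fewer than 5 times
    max_final_vocab=1_000_000,  # hard cap on vocabulary size
    workers=8,
)
```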
LanguageTechnology | searching for free available document for language processing | Hey, I'm searching the web for a document with the following criteria, for use with an NLP model:
* 30+ pages
* contains: knowledge, rules, instructions
* no speculation or unclear content, as in research papers
* mostly text, no relevant pictures or formulas
I would greatly appreciate any help and tips | 0.76 | t3_s6c319 | 1,642,446,741 |
LanguageTechnology | What LSTM Baseline To Use? | Suppose you are writing a paper with some new transformer-variant that does well on classification tasks. You want to have a LSTM baseline. How would you go about choosing that LSTM architecture? What about training hyperparameters? Is there a standard, should it be grounded with respect to another paper, does it not matter as long as one explains what the architecture is?
Thanks! | 1 | t3_s66i5v | 1,642,433,105 |
LanguageTechnology | I'm building a neural search plugin for Elastic/Opensearch | Hey Everyone,
I'm building a plugin for end-to-end neural search (eg SBERT, Hugging Face, Doc2Vec etc) in Opensearch and I'd love to hear any suggestions from the NLP community of what you might find useful or about issues you've had with Opensearch/neural search in the past.
Despite the Elasticsearch website claiming in many places that you can do "machine learning" with Elasticsearch, I've found that it's not straightforward at all to use neural search algos with ES/Opensearch. In most cases (putting to one side some specific cases like anomaly detection), you have to implement the ML algorithm yourself and you only get to use ES as a storage layer. Some 3rd party plug and play frameworks that support ES seem quite promising but also lack functionality in terms of data retrieval - for example, Cherche seems to enforce that users first retrieve documents using an algorithm like tf-idf before putting them through neural search.
I want to build a plugin/service that will allow users to more readily take advantage of the vector functionality and neural search in general. I've found the following issues:
1. Opensearch/ES have added a great deal in terms of functionality to allow for vector search (eg approximate knn algo), but it seems entirely up to the user to encode the word embeddings. Therefore, users must add code to manually encode any documents and queries into their chosen embeddings before searching/adding data. I think users having their embeddings is generally a good idea if they want a high level of optimisation, but for many use cases, pretrained embeddings should be a "good enough" solution.
2. If the data is in text format, it cannot easily be converted into a format to be used with algorithms like SBERT etc without reindexing the entire index and running it through a custom script to change the data into a vector format.
3. I'd suspect that for many users who aren't NLP experts, navigating all of the potential options for embeddings/neural search architecture could be quite overwhelming. Having a configurable plugin where they can try different options would likely help them get started faster.
I think letting users have their own embeddings makes sense from an optimisation perspective, but I also think it would be amazing to have an end-to-end solution where you can connect different algos directly to Opensearch. I'm also exploring extending this to allow users to refresh/update these embeddings to continually improve them.
Let me know what you think, open to any suggestions! If you want to keep up to date with this, here is a google form [https://forms.gle/acmGTK1gPkPZbVJm8](https://forms.gle/acmGTK1gPkPZbVJm8) | 0.91 | t3_s5ethj | 1,642,348,913 |
LanguageTechnology | [P] Open-source tool for building NLP training sets with weak supervision and search queries | nan | 0.71 | t3_s5dmsy | 1,642,345,368 |
LanguageTechnology | Metric for text summarization | Recently, I was reading some literature about text summarization and came across its evaluation metric, the "ROUGE" score. From what I understood from preliminary reading, the ROUGE score only measures n-gram overlap between candidate summary and reference summary which wrongly penalizes abstractive summaries containing different n-grams but conveying the same meaning. There's also a **BERTScore** metric (arXiv'19, ICLR'20) that does not suffer from these issues of ROUGE and computes contextual similarity rather than just n-gram overlap. How can I assess if BERTScore is a better evaluation metric compared to ROUGE? (consider me a beginner in NLP) | 0.88 | t3_s5ao3z | 1,642,335,236 |
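One hedged way to see the difference is to score a paraphrase with both metrics; a sketch with the rouge-score and bert-score packages:

```python
# pip install rouge-score bert-score
from rouge_score import rouge_scorer
from bert_score import score

ref = "The cat sat on the mat."
cand = "A feline rested on the rug."  # same meaning, little n-gram overlap

rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"]).score(ref, cand)
P, R, F1 = score([cand], [ref], lang="en")

print(rouge["rouge1"].fmeasure)  # low: few shared unigrams
print(F1.item())                 # higher: contextual embeddings still match
```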
LanguageTechnology | How is a multidimensional scaling plot supposed to look for word embeddings? | I plotted an MDS plot for word embeddings obtained from BERT. The [CLS] token lands in the middle of the figure and the other word embeddings are scattered around it. Is it supposed to be in the middle? Is there any significance to that? The MDS plot was computed on the basis of pairwise cosine similarity. | 0.71 | t3_s51i8l | 1,642,301,236 |
LanguageTechnology | I'm conducting research in NLP with data pulled from multiple sources, primarily Reddit, Twitter, and Facebook. The data contains different categories which are mentioned in the description below. Is anyone familiar with the ethics or the problems that came up using data like this? | The different categories include:
1. Posts that were deleted by the user themselves.
2. Posts that were banned by the Community moderators.
3. Posts that were banned by the Platform moderators.
4. Pages or communities containing posts that were banned by the Platform moderators.
I'm fairly uncertain whether all the data that was pulled contains the reasons for the ban. In the case of deleted posts, there's no such label available.
Any idea how to go about this? Any links to cited papers that have faced and dealt with similar problems would be great. Links to or mentions of authors who might have faced this issue would also help, as I can try reaching out to them. I'm having some trouble finding sources.
Even similar datasets links would be great as I can do a comparison study on this.
Thanks. :D | 0.85 | t3_s4cjy3 | 1,642,223,400 |
LanguageTechnology | What is an example of a problem specific to 2022 that NLP can solve? | For example, a project similar to scanning for cyberbullying comments, but one that hasn't been done already | 0.27 | t3_s47pz3 | 1,642,208,404 |
LanguageTechnology | Farsi > English Translation Model | I’m just wondering if anybody knows of any good Farsi (Persian) > English translation models? I’ve tried a few of the multilingual ones from Huggingface but the quality isn’t the best | 0.76 | t3_s41hmv | 1,642,191,320 |
LanguageTechnology | Scientific Literature Review Generation v1.0 | Hello ,
After my PhD, I developed a first version of an algorithm to automatically generate a literature review: [https://www.naimai.fr](https://www.naimai.fr/), and received many remarks. I just deployed a new version with many more papers, and I'd be thankful for any remarks about it :)
More about the new version here : [https://yaassinekaddi.medium.com/scientific-literature-generation-ii-73628aebd4fb](https://yaassinekaddi.medium.com/scientific-literature-generation-ii-73628aebd4fb)
Hopefully it can be useful for PhDs (and non-PhDs)!
&#x200B;
Cheers, | 0.9 | t3_s3whz6 | 1,642,177,989 |
LanguageTechnology | Help needed. How to predict profession from short bio ? | Hi NLP community,
Probably a bit of a newbie here, in need of some guidance. I am doing a personal project that aims to predict a person's industry from their short biography.
For example:
" I am a retired engineer and company manager. I do not have a financial background or offer financial advice. blah blah " => **Prediction:** ENGINEERING
and
" Damon makes his living as a gap trader, an earnings trader, and an interday trader. In his free time, he writes for ABC, where he focuses on seasonal investing, market timing, and earnings analyses. " => **Prediction:** FINANCE
I wanted to ask what approach I should take to make such predictions, and what kind of public dataset would be useful for training an ML model for this task?
Thank you so much! | 0.72 | t3_s3iw6i | 1,642,133,379 |
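One quick baseline that needs no training data at all is zero-shot classification with an NLI model; a sketch with the transformers pipeline:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

bio = ("Damon makes his living as a gap trader, an earnings trader, "
       "and an interday trader.")
labels = ["finance", "engineering", "healthcare", "education"]
print(classifier(bio, candidate_labels=labels))  # scores per label
```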
LanguageTechnology | Using Stanford NLP, want to get sentiment for the full context, with bias | I'm using Stanford NLP to help with a personal project. Last week I couldn't have told you what a verb or adverb was, to be quite honest. I've been feeding my C# program titles/comments from stock investing subs to get the "Sentiment". Someone posted:
To r/ALL: 1 Year ago, the most unprecedented move in the history of the stock market happened. [Positive]
The Buy Button was turned off for a specific stock. [Negative]
1 year later and there have been NO CONSEQUENCES. [Neutral]
No one went to jail. [Neutral]
Was there even a fine? [Neutral]
Why? [Negative]
How is the answer not going to sound like a conspiracy [Neutral]
I understand the algorithm is evaluating each sentence. How would one go about determining the sentiment of the full context? Clearly this example is neutral/negative overall.
Furthermore, bias: if I have a left- or right-leaning comment, and I could tell the program I'm in favour of L/R, then the algorithm would deem it positive relative to my bias. | 0.89 | t3_s392e0 | 1,642,106,307 |
LanguageTechnology | NLP Bias & (un)Fairness Recognizer App with Spacy | Day 2 of #8daysofstreamlit | nan | 0.76 | t3_s38zn9 | 1,642,106,108 |
LanguageTechnology | Remote company looking for an NLP DS! | [This](https://apply.workable.com/bunnystudio/j/8E938022F3/) position is currently open and I wanted to share with you! | 0.84 | t3_s30ccv | 1,642,083,152 |
LanguageTechnology | Pretrained models for multi-label classification (transformer based) | Hi,
I am looking at multi-label classification for Twitter-like data. Does anybody know if any open source project (or the Huggingface model hub) has pre-trained models ready to use?
I am looking at classification taxonomy such as [IAB v2 categories](https://www.iab.com/guidelines/content-taxonomy/) or Wikipedia categories.
Thanks! | 0.8 | t3_s300v8 | 1,642,082,261 |
LanguageTechnology | Making sentences of a dialogue simple and clear | I have a task where I need to come up with a system that takes sentences from dialogues and makes them more simple and clear. Sentences during dialogues are often filled with filler words such as uhm, uh and repetitions and my goal is to extract the information of the sentence and present it in a clear form.
E.g:
Original: "And then I drew it on the board uhm white board because that's easier to understand for the student uh i mean students"
Simplified: "I drew it on the whiteboard because it's easier to understand for the students"
I just took an introductory NLP course in my undergraduate degree so I'm still relatively new to the subject. Any learning resources and advice on how to do this would be greatly appreciated. | 1 | t3_s2t02e | 1,642,056,598 |
LanguageTechnology | Understanding BLEU metric | In the BLEU paper [https://aclanthology.org/P02-1040.pdf#page=2](https://aclanthology.org/P02-1040.pdf#page=2) , why **In Example 1, Candidate 1 achieves a modified unigram precision of 17/18; whereas Candidate 2 achieves a modified unigram precision of 8/14.** ? | 0.76 | t3_s2neoo | 1,642,039,194 |
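The answer is in the clipping: each candidate word's count is clipped at the maximum number of times it appears in any single reference, and the clipped counts are divided by the candidate length. Candidate 1 has 18 unigrams, 17 of which survive clipping; Candidate 2 has 14 unigrams, only 8 of which do. A minimal sketch:

```python
from collections import Counter

def modified_unigram_precision(candidate, references):
    """Clip each word's count at its max count in any one reference,
    then divide by the candidate length (returned as a pair here)."""
    cand_counts = Counter(candidate.split())
    clipped = sum(
        min(count, max(Counter(ref.split())[word] for ref in references))
        for word, count in cand_counts.items()
    )
    return clipped, sum(cand_counts.values())
```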
LanguageTechnology | Alternatives to Google Cloud Translate | Most cloud computing services have many competing vendors, but are there other APIs for machine translation besides Google Translate? DeepL or any others?
Thanks very much | 0.86 | t3_s2gg6u | 1,642,020,337 |
LanguageTechnology | What do think of doing empirical study on open source model for my master thesis? | The problem with my university is we have to first write proposal and only after approved we get a supervisor on thesis domain. But the proposal take lot of effort and time. Even choosing topic takes lot of time. So reddit is the only option I have.
I wanted my master thesis to be moderatly hard, so choosed to do empirical study instead and was finding hard to choose the topic. I found some research papers do empirical study, for eg: [https://aclanthology.org/P19-1493/](https://aclanthology.org/P19-1493/). What do you guys think? Is doing empirical study on open source model like BERT is appropriate for master thesis? | 0.83 | t3_s2gd3g | 1,642,020,113 |
LanguageTechnology | Suggestions for Simple Natural Language Projects that haven't been done yet? | Ideally something doable in the Python programming language | 0.5 | t3_s2egjq | 1,642,015,224 |
LanguageTechnology | Fine-tuning and implementing retrievers in open-domain Q&A | Hi all, I've just published another article focusing on [fine-tuning a retriever model for open-domain question-answering](https://www.pinecone.io/learn/retriever-models/). The retriever is a big component of the open-domain QA pipeline, allowing us to retrieve relevant *contexts* from a vector database, which then help us answer a query (article includes vector db <> retriever setup too)
Let me know if you have any questions or feedback, thanks! | 1 | t3_s26r9f | 1,641,995,581 |
LanguageTechnology | txtai 4.0 released - semantic search with SQL, content storage, object storage, reindexing and more | txtai 4.0 has been released with a number of new features.
* **Content Storage** \- Content can now be stored alongside embeddings vectors. No longer required to have external data storage.
* **Query with SQL** \- txtai supports both natural language queries and SQL queries
`embeddings.search("feel good story")`
`SELECT id, text, score FROM txtai WHERE similar('feel good story') AND score >= 0.15`
* **Object Storage** \- Store binary objects alongside embeddings vectors
* **Reindex** \- Indexes can be rebuilt using stored content, no need to resend data to txtai
* **Index Compression** \- Indexes can be compressed using GZ/XZ/BZ2/ZIP
* **External Vectors** \- Use external vector models from an API or an external library. Centralize building vectors on GPU servers leaving index servers to be powered by more modest hardware.
More information can be found in following links.
* [GitHub Project](https://github.com/neuml/txtai)
* [4.0 Release Notes](https://github.com/neuml/txtai/releases/tag/v4.0.0)
* [What's new in txtai 4.0](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/24_Whats_new_in_txtai_4_0.ipynb)
* [Documentation](https://neuml.github.io/txtai)
* [Examples](https://neuml.github.io/txtai/examples/) | 1 | t3_s25i2x | 1,641,991,728 |
LanguageTechnology | Choosing a LT program | Hi everyone! I am a BA graduate with a background in linguistics and a self-taught programmer. I have been looking at two programs in language technology: the MSc in Speech and Language Processing in Edinburgh ([https://www.ed.ac.uk/ppls/linguistics-and-english-language/prospective/postgraduate/msc/speech-language-processing](https://www.ed.ac.uk/ppls/linguistics-and-english-language/prospective/postgraduate/msc/speech-language-processing)) and Human Language Technology ([https://linguistics.arizona.edu/master-science-human-language-technology-hlt](https://linguistics.arizona.edu/master-science-human-language-technology-hlt)) in the States, and I was wondering if anyone has any advice on which would be a better fit? I am hoping to start working in industry after graduating (open to doing a PhD, but maybe further in the future), but I am worried about which one would leave me better prepared. The one in Edinburgh is a one-year program while the one in AZ is a two-year program. Would appreciate any help with your viewpoints on this :) | 0.81 | t3_s1wvwl | 1,641,960,572 |
LanguageTechnology | A dataset of parse trees generated from abstracts of arXiv articles | [https://github.com/l74d/scholarly-trees](https://github.com/l74d/scholarly-trees)
I have put up quite a few parse trees online as a dataset. Not something as substantial as the Penn Treebank, since the trees have NOT been human-edited, but still far more parse trees than Penn provides to feed into your later-stage NLP algorithms, free of charge or hassle.
The current format is straight from where they were generated. Suggestions of alternative formats based on ease of use would be heavily appreciated!
LanguageTechnology | Twitter topics | Does anyone know how Twitter generates their “Topics”?
It seems like they could be machine generated (but human reviewed)? It would be a lot of labor simply brainstorming a huge ontology of trending concepts in the Twittersphere.
They must have some algorithms for analysing and clustering tweet topics.
And possibly even for automatically suggesting the name of the cluster (the topic / label).
Anyone have any guesses how they do it?
Thanks very much | 0.67 | t3_s103c7 | 1,641,863,189 |
LanguageTechnology | Webinar: NLU Project Showcase | Stanford Online students will present original projects developed in the Natural Language Understanding professional course. Q&A to follow. [Register here](https://learn.stanford.edu/WBN-AI-NLU-Project-Showcase.html). | 0.67 | t3_s0vkki | 1,641,851,323 |
LanguageTechnology | Research topics for a master’s thesis? | I am doing a Master’s conversion course in Computer Science. Previously I did Linguistics for my undergrad. Due to my background, I am interested in doing my thesis on Natural Language Processing. However, as I’ve never written a thesis, I don’t know where to start.
What are some good research topics I could read through to get a good idea? Any suggestions on where to start for a beginner in this field would be very much appreciated. | 0.88 | t3_s0uwco | 1,641,849,650 |
LanguageTechnology | UC San Diego Researchers Propose A Controllable Voice Cloning Method That Allows Fine-Grained Control Over Various Style Aspects Of The Synthesized Speech For An Unseen Speaker | Current voice cloning methods achieve Text-to-Speech (TTS) synthesis for a new voice, but they do not allow control over the expressiveness of the synthesized speech. Voice cloning is the task of learning to synthesize the speech of an unseen speaker with minimal training data.
UC San Diego researchers propose a Controllable voice cloning method that offers fine-grained control over many style features of synthetic speech for an unseen speaker. The voice synthesis model is explicitly conditioned on a speaker encoding, pitch contour, and latent style tokens during training. [***Continue Reading***](https://www.marktechpost.com/2022/01/10/uc-sandiego-researchers-propose-a-controllable-voice-cloning-method-that-allows-fine-grained-control-over-various-style-aspects-of-the-synthesized-speech-for-an-unseen-speaker/)
Paper: https://arxiv.org/pdf/2102.00151.pdf | 0.81 | t3_s0qfes | 1,641,838,309 |
LanguageTechnology | Where do you search for your state-of-the-art NLP papers? | Hi,
I'm looking for the latest papers on different subjects (explicit language detection, keyword/keyphrase detection, etc.).
Searching on Google is not helping; I find only commercial solutions.
Thank you | 0.93 | t3_s0q1m4 | 1,641,837,372 |
LanguageTechnology | Using pre-trained BERT embeddings for multi-class text classification | What would be the steps involved in doing such a thing? Basically, I have data from a research study in the form of a dataframe including: participant ID, response ID, a single-word object, and a response, which consists of a description of the function of that object. Some participants have given more than one response for the same object. The object is the same across the whole dataset, and category codings have been done by someone else. Basically, responses need to be categorized into these coded categories, which denote similar responses. I want to construct embeddings of the responses and feed them into a CNN. How can this be done quickest? I don't have much knowledge, and all the information online is very overwhelming.
Basically, I want to use the BERT uncased model (so no S-BERT or anything of that sort), but I guess I do need to average the word embeddings. Also, how do I tokenize the whole column of responses, and how do I add the object into the mix, since the response and object are connected? Don't expect someone to give me a tutorial in the comments, but a list of general steps to take in this context would be incredibly helpful. You will save my life and my graduation. | 0.94 | t3_s0p3yw | 1,641,835,002 |
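In case it helps, here is a hedged sketch of the general steps: tokenize the response column in batches, run plain `bert-base-uncased`, and mean-pool the token vectors. The column contents are made up, and prepending the object to the response is just one simple way to connect the two.

```python
# Hedged sketch: averaging BERT word embeddings for a dataframe column.
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical data: pair each response with its object by simple concatenation
objects = ["brick", "brick"]
responses = ["use it to build a wall", "hold a door open"]
texts = [f"{obj}: {resp}" for obj, resp in zip(objects, responses)]

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state       # (batch, tokens, 768)

# Mean-pool over real tokens only, ignoring padding
mask = enc["attention_mask"].unsqueeze(-1)         # (batch, tokens, 1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)  # (batch, 768)
print(embeddings.shape)  # feed these vectors to the CNN/classifier
```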
LanguageTechnology | Is there a way to detect torn up words? | Example: Qual ity -> quality
I'm using pytesseract to transcribe PDFs, and unfortunately one of the issues is that PDFs often split up words at the end of a column into two parts. I'm trying to figure out a way to detect when words don't make sense separately but form a normal word when combined (using Python). | 0.92 | t3_s0l796 | 1,641,824,657 |
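One hedged heuristic: merge two adjacent tokens when their concatenation is a common word but at least one fragment is not. A sketch using the `wordfreq` package (the thresholds are guesses to tune on your own OCR output):

```python
# Heuristic sketch for re-joining words split by OCR/column breaks.
# pip install wordfreq
from wordfreq import zipf_frequency

def repair(tokens, lang="en", common=3.0, rare=1.5):
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens):
            merged = tokens[i] + tokens[i + 1]
            # Merge if the joined form is common but a fragment is rare/unknown
            if (zipf_frequency(merged, lang) >= common and
                    min(zipf_frequency(tokens[i], lang),
                        zipf_frequency(tokens[i + 1], lang)) < rare):
                out.append(merged)
                i += 2
                continue
        out.append(tokens[i])
        i += 1
    return out

print(repair(["Qual", "ity", "control", "is", "key"]))
# -> ['Quality', 'control', 'is', 'key'] (expected with these thresholds)
```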
LanguageTechnology | How to speed up inference of your Transformer-based NLP models? | Hello all,
Many of us are having a hard time speeding up our Transformer-based NLP models for inference in production.
So I thought it would be worth writing an article that summarizes the options one should consider (GPU, batch inference, export to ONNX or Torchscript, using TensorRT, Deepspeed, Triton Inference Server... etc.):
[https://nlpcloud.io/how-to-speed-up-deep-learning-nlp-transformers-inference.html](https://nlpcloud.io/how-to-speed-up-deep-learning-nlp-transformers-inference.html?utm_source=reddit&utm_campaign=ehyiek56-ed8e-11eb-ba80-5242ac130007)
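As a taste of one option from the article, here is a rough sketch of exporting a Hugging Face model to ONNX with plain `torch.onnx.export` (the model choice and axis names are arbitrary, and newer `optimum` tooling can also handle this for you):

```python
# Hedged sketch: exporting a Transformer to ONNX for faster inference.
# pip install torch transformers onnx
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
# torchscript=True makes the model return plain tuples, which tracing needs
model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True)
model.eval()

sample = tokenizer("ONNX export example", return_tensors="pt")
torch.onnx.export(
    model,
    (sample["input_ids"], sample["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"},
                  "logits": {0: "batch"}},
    opset_version=13,
)
# The exported file can then be served with onnxruntime's InferenceSession.
```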
I hope you'll find it useful. If you can think of additional options, please let me know and I'll add them to the article!
Julien | 1 | t3_s0h5nv | 1,641,811,182 |
LanguageTechnology | Why is fine-tuning a text model so influential on the results? | I'm a newbie to this field, but nonetheless: BERT was trained on 3.3+ billion words, and when I do a masked language modeling task it is fairly successful on my healthcare dataset without fine-tuning. However, when I fine-tune on my dataset, adding maybe only 1 million additional words (only ~0.03% more words), the same task is suddenly significantly more accurate.
I understand that fine-tuning is training the model on my specific task, so of course results will improve, but it is almost as if the model weights the fine-tuned words more heavily than its original training data.
Why can such a small number of additional words improve the model so drastically? | 0.81 | t3_rzuyqo | 1,641,744,709 |
LanguageTechnology | AI that can write advanced explicit segmentation rules | Usually you move from rule-based segmentation to just machine learning when it gets too complicated.
But if we consider all rule-based methods at our disposal - not just regular expressions but also having a corpus of words and symbols loaded as identifiers, plus various statistical methods that were used before machine learning, like N-grams and so on -
I think it’s conceivable that if we give an AI certain parameters to play with from the gamut of rule-based segmentation methods, it could think of extraordinarily complex yet relatively effective explicit segmentation scripts.
Sort of like how Ramanujan could think of extremely complicated formulae in number theory that produced perfect results, and DeepMind recently tried to emulate a complex mind like that to discover new theorems in mathematics.
Does anyone know of any projects like this?
Thank you | 0.67 | t3_rzuiy5 | 1,641,743,486 |
LanguageTechnology | HMM tagger from scratch | Hey there! I'm trying to build an HMM tagger from scratch in Python, but I'm not so sure of how to go about it. Do you know of any good resources or guides that could be useful to me? | 0.67 | t3_rzs4ah | 1,641,736,244 |
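Not a full guide, but a bare-bones sketch of the core pieces (count-based transition/emission probabilities with crude add-one smoothing, plus Viterbi decoding on a toy corpus) may help you get started:

```python
# Bare-bones HMM tagger sketch: counts + add-one smoothing + Viterbi.
from collections import defaultdict

train = [[("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
         [("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]]

trans = defaultdict(lambda: defaultdict(int))  # tag -> next-tag counts
emit = defaultdict(lambda: defaultdict(int))   # tag -> word counts
for sent in train:
    prev = "<s>"
    for word, tag in sent:
        trans[prev][tag] += 1
        emit[tag][word] += 1
        prev = tag

tags = list(emit)
vocab = {w for t in emit for w in emit[t]}

def p(table, a, b, size):
    # Add-one smoothed conditional probability P(b | a)
    return (table[a][b] + 1) / (sum(table[a].values()) + size)

def viterbi(words):
    # best maps each tag to (probability, best path ending in that tag)
    best = {t: (p(trans, "<s>", t, len(tags)) *
                p(emit, t, words[0], len(vocab) + 1), [t]) for t in tags}
    for w in words[1:]:
        best = {t: max(((score * p(trans, pt, t, len(tags)) *
                         p(emit, t, w, len(vocab) + 1), path + [t])
                        for pt, (score, path) in best.items()),
                       key=lambda x: x[0])
                for t in tags}
    return max(best.values(), key=lambda x: x[0])[1]

print(viterbi(["the", "dog", "sleeps"]))  # -> ['DET', 'NOUN', 'VERB']
```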
LanguageTechnology | Best tools for Multi-GPU Model training? | nan | 0.76 | t3_rzpmd0 | 1,641,727,290 |
LanguageTechnology | Are there any languages in the world which break some NLP fundamental assumptions? | So, for context, I've only just started learning NLP, and I've just encountered the word2vec algorithm for the first time. This algorithm calculates the probability of a word appearing at a position in a sentence as a function of what its surrounding words are, weighted by the distance from that central word, learned from a large corpus of language. So for instance, if you fed it an incomplete sentence: "the cat jumped over the ... ", it would assign high probabilities to words like "table", "mat", "bed", and assign low probabilities to words like "blue", "boil", "running".
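You can actually poke at this behavior yourself: gensim's Word2Vec exposes a `predict_output_word` method that ranks candidate words for a given context. A toy sketch (a corpus this small gives rough, unstable outputs; real experiments need far more text):

```python
# Toy sketch of word2vec context -> word prediction with gensim.
# pip install gensim
from gensim.models import Word2Vec

corpus = [["the", "cat", "jumped", "over", "the", "mat"],
          ["the", "cat", "sat", "on", "the", "bed"],
          ["the", "dog", "jumped", "over", "the", "fence"]] * 50

# Negative sampling (the default) is required for predict_output_word
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=20, seed=1)

# Rank candidate words for the blank in "the cat jumped over the ..."
print(model.predict_output_word(["the", "cat", "jumped", "over", "the"], topn=5))
```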
Are there any human languages in the world for which the assumptions the algorithm is built on break? For example, any languages for which the context of a word is *inversely* proportional to its semantic meaning, rather than proportional, as this algorithm assumes?
Are there any other interesting concepts in NLP which work for some languages, but not others? | 0.98 | t3_rzp9h6 | 1,641,725,921 |
LanguageTechnology | Coqui Introduces ‘YourTTS’: A Zero-Shot Text-to-Speech Model With State-of-The-Art (SOTA) Results | Recent advancements in end-to-end deep learning models have enabled new and intriguing Text-to-Speech (TTS) use cases with excellent natural-sounding results. However, the majority of these models are trained on large datasets recorded with a single speaker in a professional setting. Expanding such solutions to numerous languages and speakers is not viable for everyone. It is even more challenging for low-resource languages not often studied by mainstream research.
Coqui’s team has designed ‘[YourTTS](https://coqui.ai/blog/tts/yourtts-zero-shot-text-synthesis-low-resource-languages)‘ to overcome these limits and bring zero-shot TTS to low-resource languages. It can synthesize voices in various languages and drastically reduce data requirements by transferring knowledge across the languages in its training set. [***Continue Reading***](https://www.marktechpost.com/2022/01/08/coqui-introduces-yourtts-a-zero-sample-text-to-speech-model-with-state-of-the-art-sota-results/)***....***
Paper: https://arxiv.org/pdf/2112.02418.pdf
Github: https://github.com/coqui-ai/TTS | 0.9 | t3_rz8epu | 1,641,671,793 |
LanguageTechnology | Is there an equivalent concept in NLP to what high-level computer languages (e.g. Python) do to manage user error? | Is there an equivalent concept in NLP to what high-level computer languages (e.g. Python) do to manage user error?
That is:
* natural languages may see users making errors (grammar, etc.)
In computer languages, however:
* low-level languages (e.g. ASM, C) see users making errors (e.g. in managing memory)
* therefore people design higher-level languages (e.g. Python) that have features that prevent or catch these errors
LanguageTechnology | How hard are automatic grammar checkers? | How hard are automatic grammar checkers?
Over and over again I find that it'd be much smarter to have a program suggest whether your grammar is sound instead of trusting a person to keep track of all nuances over the course of a large number of sentences.
But I wonder how hard it is to write an automatic grammar checker.
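For a sense of the scale involved, the open-source LanguageTool project is essentially thousands of hand-written rules plus some statistics; a quick hedged sketch of trying it from Python (note the library downloads a Java backend on first run):

```python
# Sketch: checking grammar with the open-source LanguageTool rules engine.
# pip install language-tool-python (downloads LanguageTool on first use)
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")
for match in tool.check("She go to school every days."):
    # Each match carries the rule that fired, an explanation, and fixes
    print(match.ruleId, "-", match.message, "->", match.replacements[:3])
tool.close()
```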
Is it easier for languages like Lojban? | 0.78 | t3_rywwfl | 1,641,637,563 |
LanguageTechnology | Just found out Google Translate doesn't provide romanizations of Farsi, and doesn't even have Tibetan at all... | This feels intentional... no?
Also: [https://www.k-international.com/blog/8-surprising-languages-you-wont-find-on-google-translate/](https://www.k-international.com/blog/8-surprising-languages-you-wont-find-on-google-translate/)
Found some others that are sorely lacking too—what a joke. | 0.27 | t3_rynylf | 1,641,605,752 |
LanguageTechnology | Possible interview questions for Research Assistant. | Hello all,
I am trying to pursue my MS in Computer Science, and for funding I have been talking to a professor. He is asking me to brush up my knowledge of Deep Learning and NLP. I am scared to death. I have 2 years of work experience, but working in a company is different from research. In a company we just use libraries with a surface-level understanding of how they work to get things done.
So, what might a professor expect of a research assistant they are hiring? What questions might professors ask about Deep Learning and NLP?
Thanks in advance. Your help and support mean a lot to me. | 1 | t3_rxwrax | 1,641,524,304 |
LanguageTechnology | Clone your voice and speak a foreign language | nan | 1 | t3_ry08s3 | 1,641,534,910 |
LanguageTechnology | How can you do efficient text preprocessing? | Hello,
I am trying to do some basic preprocessing on 2.5GB of text. More specifically, I want to do tokenization, lowercasing, stop-word removal, and removal of the top-k most frequent words. I need to use spaCy because the dataset is in Greek, and I think other libraries can't support it.
However, when I try to apply what the spaCy documentation or most guides/resources suggest, it takes forever to complete even half of the techniques mentioned above, and I stop the execution every time.
Could you provide me with some resources that I might have missed, in order to make this procedure run faster?
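For reference, the standard speed-ups are to disable the pipeline components you don't need, stream the corpus instead of loading it, and batch with `nlp.pipe`. A hedged sketch (the file layout and process count are assumptions):

```python
# Hedged sketch: streaming preprocessing of a large Greek corpus with spaCy.
# python -m spacy download el_core_news_sm
from collections import Counter
import spacy

# Parser and NER are the slow parts; keep only what you truly need
nlp = spacy.load("el_core_news_sm", disable=["parser", "ner"])

def lines(path):
    # Lazily yield one line at a time instead of reading 2.5GB into memory
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.strip()

counts = Counter()
for doc in nlp.pipe(lines("corpus_el.txt"), batch_size=1000, n_process=4):
    tokens = [t.text.lower() for t in doc
              if not (t.is_stop or t.is_punct or t.is_space)]
    counts.update(tokens)

# Top-k most frequent words, to filter out in a second pass
top_k = {w for w, _ in counts.most_common(100)}
```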
Thanks in advance | 0.88 | t3_rxtn0v | 1,641,515,584 |
LanguageTechnology | How could I go about turning a bunch of text into a series of questions with answers? | nan | 0.67 | t3_rxt5kp | 1,641,514,264 |
LanguageTechnology | Named Entity Recognition (NER) Ensemble Method | Hi, I am trying to get into NLP with a background in CV. In CV, tasks like object detection have SOTA ensemble methods like Weighted Box Fusion (WBF). I was wondering if NER has an equivalent to WBF for ensembling Transformer models trained on different folds. | 0.78 | t3_rxiklt | 1,641,486,534 |
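I'm not aware of a standard named method like WBF for NER, but a common hand-rolled equivalent is majority voting over predicted entity spans across folds. A rough sketch:

```python
# Rough sketch: majority-vote ensembling of NER spans across k folds.
from collections import Counter

def ensemble(fold_preds, min_votes=None):
    """fold_preds: list (one per fold) of sets of (start, end, label) spans."""
    min_votes = min_votes or (len(fold_preds) // 2 + 1)
    votes = Counter(span for preds in fold_preds for span in preds)
    # Keep only spans predicted by a majority of the folds
    return {span for span, n in votes.items() if n >= min_votes}

fold_preds = [
    {(0, 5, "PER"), (10, 16, "ORG")},
    {(0, 5, "PER")},
    {(0, 5, "PER"), (10, 16, "ORG"), (20, 25, "LOC")},
]
print(ensemble(fold_preds))  # {(0, 5, 'PER'), (10, 16, 'ORG')}
```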
LanguageTechnology | corpus visualization tools | Hello all,
Which are your favorite tools to visualize a corpus?
Do you prefer a desktop or web solution? And which kinds of analyses can you perform with such tools (n-grams, POS tagging, lemmatization, etc.)?
Thanks in advance | 0.83 | t3_rxc8t8 | 1,641,467,912 |
LanguageTechnology | Text Classification using Unsupervised Learning. | hi r/LanguageTechnology,
I'm working on a text classification problem, where I want to classify the textual data into different domains/categories.
We have tried a couple of different approaches, like topic modelling and BERT. Topic modelling didn't give the expected results, and in the case of BERT, the accuracy was not at the desired level.
What other methodologies can I look into for this particular task?
Are there any ways we can improve the accuracy of these models? | 1 | t3_rx9rmp | 1,641,458,299 |
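One avenue worth a look when labels exist but training data is thin is zero-shot classification with an NLI model. A quick hedged sketch (the labels here are placeholders for your actual domains):

```python
# Sketch: zero-shot text classification into domains via an NLI model.
# pip install transformers torch
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The central bank raised interest rates by 50 basis points.",
    candidate_labels=["finance", "sports", "healthcare", "technology"],
)
print(result["labels"][0], result["scores"][0])  # highest-scoring domain
```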
LanguageTechnology | Language, Mind and Infinite Use of Finite Means | Noam Chomsky | nan | 0.77 | t3_rx3svh | 1,641,438,510 |
LanguageTechnology | Is there a tokenizer that is able to classify contractions as one word? | As opposed to multiple tokens. My tokenizer currently splits “can’t” into “can”, “‘“, and “t”. I’m using Python. | 1 | t3_rx30rj | 1,641,434,473 |
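Two quick options that keep contractions intact, sketched below: NLTK's TweetTokenizer, or a small regex. If your text uses curly apostrophes (’), normalize them to straight quotes first.

```python
# Sketch: tokenizers that keep "can't" as a single token.
import re
from nltk.tokenize import TweetTokenizer  # pip install nltk

text = "I can't believe it won't work."

print(TweetTokenizer().tokenize(text))
# -> ['I', "can't", 'believe', 'it', "won't", 'work', '.']

# Plain-regex alternative: letters optionally joined by apostrophes,
# with any other non-space character kept as its own token
print(re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)*|[^\sA-Za-z]", text))
```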