sub (stringclasses, 4 values) | title (stringlengths, 3-304) | selftext (stringlengths, 3-30k) | upvote_ratio (float64, 0.07-1) | id (stringlengths, 9-9) | created_utc (float64, 1.6B-1.65B) |
---|---|---|---|---|---|
LanguageTechnology | 🎈 Build a custom Q&A app with Streamlit and Pinecone to revolutionize your search systems! Join this week's webinar to learn how. 👇 | nan | 0.5 | t3_szam6i | 1,645,598,085 |
LanguageTechnology | Any pretrained models that can detect categories? | I'm looking at Google's NLP and they can detect categories based on text like:
/Games/Computer & Video Games/Sandbox Games
/Games/Computer & Video Games/Shooter Games
/Games/Computer & Video Games/Simulation Games
/Games/Computer & Video Games/Sports Games
/Games/Computer & Video Games/Strategy Games
/Games/Computer & Video Games/Video Game Emulation
Do any free models or libraries exist that I can play around with?
Google NLP is too expensive for me.
Thank you. | 0.86 | t3_sz4v47 | 1,645,580,980 |
LanguageTechnology | [N] Who Is Behind QAnon? Linguistic Detectives Find Fingerprints using statistics and machine learning | According to the [New York Times,](https://nyti.ms/36s3szj) using machine learning, stylometry, and statistics on Q texts, two separate teams of NLP researchers from [France](https://zenodo.org/record/6164620#.YhWRPHX0mCV) and [Switzerland](https://www.orphanalytics.com/en/news/whitepaper202201) have identified the same two men as likely authors of the messages that fueled the QAnon movement. First came the initiator, Paul Furber, a South African software developer; then Ron Watkins took over. Watkins operated the 8chan website where the Q messages began appearing in 2018 and is now running for office as a Republican in Arizona. | 0.97 | t3_sz4qyt | 1,645,580,652 |
LanguageTechnology | Top American Universities on Reddit - a corpus of posts and comments to date on the subreddits of Forbes' top 10 US colleges | nan | 0.67 | t3_syy6vw | 1,645,563,610 |
LanguageTechnology | CoNLL-U token annotations | I would like to access the annotations of individual tokens in a CoNLL-U file.
The code in the file I am running (I followed the documentation found in [github](https://github.com/EmilStenstrom/conllu)):
import conllu
f = open('en_gum-ud-test.conllu', 'r', encoding='utf-8')
annotations = f.read()
sentences = conllu.parse(annotations)
sentence = sentences[0]
print(sentence) # prints TokenList of first sentence in file
# TokenList<The, prevalence, of, discrimination, across, racial, groups, in, contemporary, America, :>
token = sentence[0]
print(token) # prints the first token of the TokenList above, The
Instead of the actual token (The) I would like to get the ordered dictionary:
{
'id': 1,
'form': The,
'lemma': the,
...
}
I get this only if I run the code line by line in the console without the print statements, i.e. by evaluating just the expression inside the print call.
How can I get the ordered dictionary by running the file where the code is? | 1 | t3_syiqa9 | 1,645,519,526 |
LanguageTechnology | Best textbook on coreference stuff? | What is a good textbook for someone writing coreference code? I need sources I can cite in my dissertation. Currently I am using [Stanford's Speech and Language Processing](https://web.stanford.edu/~jurafsky/slp3/), which has been very useful so far. | 0.81 | t3_sy23h8 | 1,645,471,825 |
LanguageTechnology | Does this count as a mention? | I am doing some coreference stuff and I am uncertain if the noun phrase in a noun phrase possessive constitutes a mention, eg is "Alex" in
> Alex's
a mention? I am leaning towards no, as I think it will be part of a larger noun phrase that constitutes the actual mention. | 1 | t3_sy1y6k | 1,645,471,444 |
LanguageTechnology | Introduction to Sentence-BERT (SBERT) | nan | 0.93 | t3_sxt6or | 1,645,449,037 |
LanguageTechnology | How do CAT tools write segments to an output file | When you import text to a CAT tool, it gets converted to something like .xlf, where all the recognised sentences are marked up.
Then I assume that, when using the tool, the program loads the text from that markup file into a list and somehow writes each segment back to the output file in the XML.
But how does that work? You could overwrite the file when you save the project in the program. But most CAT tools immediately save each segment. How do they connect the segment to its location in the XML to auto-write there every time you translate another segment?
Thank you very much | 1 | t3_sxhtzp | 1,645,409,255 |
LanguageTechnology | ASAPP AI Researchers Propose a Family of Pre-Trained Models (SEW) for Automatic Speech Recognition (ASR) That Could be Significantly Better in Performance-Efficiency Than the Existing Wav2Vec 2.0 Architecture | Recent research in natural language processing and computer vision has aimed to increase the efficiency of pre-trained models to reduce the financial and environmental expenses of training and fine-tuning them. We haven’t seen comparable attempts in speech for whatever reason. Efficiency advances in speech might mean better performance for similar inference times, in addition to cost savings related to more efficient training of pre-trained models.
Due to a self-supervised training paradigm, Wav2Vec 2.0 (W2V2) is one of the current state-of-the-art models for Automatic Speech Recognition. This training method enables us to pre-train a model using unlabeled data, which is always more readily available. The model may then be fine-tuned for a specific purpose on a given dataset. It has attracted a lot of interest and follow-up work for using pre-trained W2V2 models in various downstream applications, such as speech-to-text translation (Wang et al., 2021) and named entity recognition (Shon et al., 2021). However, researchers believe that the model architecture has several sub-optimal design decisions that render it inefficient. To back up this claim, researchers ran a series of tests on various components of the W2V2 model architecture, revealing the performance-efficiency tradeoff in the W2V2 model design space. Higher performance (lower ASR word mistake rate) necessitates a bigger pre-trained model and poorer efficiency (inference speed). [**CONTINUE READING**](https://www.marktechpost.com/2022/02/20/asapp-ai-researchers-propose-a-family-of-pre-trained-models-sew-for-automatic-speech-recognition-asr-that-could-be-significantly-better-in-performance-efficiency-than-the-existing-wav2vec-2-0-arch/)
Paper: https://arxiv.org/pdf/2109.06870.pdf
Github: https://github.com/asappresearch/sew | 0.92 | t3_sx7l1o | 1,645,381,204 |
LanguageTechnology | I trained the GPT-2 model on the tao te ching. It made some interesting samples. | nan | 0.82 | t3_swjkre | 1,645,303,868 |
LanguageTechnology | 5 Scholarships for corise Applied NLP course | We are giving away 5 scholarships to the upcoming (starts March 14) Applied NLP courses taught by Sourabh Bajaj, former Google Brain and Coursera. Pls ping me if interested! | 0.95 | t3_svyegh | 1,645,235,358 |
LanguageTechnology | Why does Zero-Shot-Classification not work in this simple use-case? | I'm trying to classify **adjectives** to see if they apply to **humans** or **things**.
I tried several pretrained models for this task on huggingface, such as [typeform/distilbert-base-uncased-mnli](https://huggingface.co/typeform/distilbert-base-uncased-mnli?candidateLabels=personality+trait%2C+object&multiClass=true&text=physical) (and many of [these](https://huggingface.co/models?pipeline_tag=zero-shot-classification&sort=downloads)), but the classification is very unreliable when it comes to single adjectives. I tried all possible types of candidate labels, even aggregated scores for clusters, but cannot seem to solve this reliably. I have about 6 000 adjectives that should be classified. So far I haven't bothered to calculate validity, but intuitively I'd say "slightly better than chance".
However, the examples provided on the model cards seem to work really well. Any ideas on why this doesn't work and how to fix it? | 0.92 | t3_svqw4t | 1,645,214,275 |
LanguageTechnology | Top 10 Language Detection APIs | nan | 0.5 | t3_svhrnz | 1,645,190,308 |
LanguageTechnology | Dodiom - Telegram bot to collect idiom corpus | Hi everyone,
I've developed a Telegram bot to help you learn English idioms as a multiplayer competitive game while collecting MWE (multi-word expressions) corpus.
Bot link: https://t.me/dodiom_en_bot
You can see the results for Turkish version of the bot in this [journal article](https://www.cambridge.org/core/journals/natural-language-engineering/article/gamified-crowdsourcing-for-idiom-corpora-construction/A69DC2EC025689C5495A3859387468A3). You can also find the source code and collected corpus on [Github](https://github.com/Dodiom/dodiom/). Right now we are collecting corpus for English language and our aim is to compare it to existing human annotated corpora. I'd be very grateful if you try it.
All feedback is welcome and encouraged. I hope this post does not break subreddit rules. | 1 | t3_svctbx | 1,645,172,187 |
LanguageTechnology | Dutch Lexical Simplification with RobBERT | Hi all,
I'm working on a lexical simplification system for Dutch. I've found the following for English:
[https://github.com/qiang2100/BERT-LS](https://github.com/qiang2100/BERT-LS)
Would it be doable to adapt this to Dutch? I would use RobBERT (pretrained Dutch BERT), and would have to apply some changes so that the output would be grammatical in Dutch, I suppose?
Seems like it may be more efficient to create a "handcrafted" pipeline approach for Dutch, and add in trigram verification of the output using RobBERT or similar. The output may not be as good, though.
Please let me know what you think! | 0.9 | t3_suo2ln | 1,645,102,265 |
LanguageTechnology | Warning: Invalid line when computing TER metric | Hi guys, I'm trying to compute the TER metric using this repository: [https://github.com/jhclark/tercom](https://github.com/jhclark/tercom)
I have two txt files for the reference and hypothesis; with these two I have already computed BLEU ([https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/generic/multi-bleu.perl](https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/generic/multi-bleu.perl)), METEOR ([https://www.cs.cmu.edu/\~alavie/METEOR/README.html#about](https://www.cs.cmu.edu/~alavie/METEOR/README.html#about)) and ROUGE ([https://github.com/pltrdy/rouge.git](https://github.com/pltrdy/rouge.git)).
To compute the TER metric I use this code:
!java -jar tercom.7.25.jar -h /content/pred.txt -r /content/tgt-val.txt
But it gives me the error `Warning: Invalid line` for every line of the txt file, and the final results are:
Total TER: NaN (0.0/0.0)
Number of calls to beam search: 0
Number of segments scored: 0
Number of shifts tried: 0
How can I solve this problem?
Thanks all | 1 | t3_sumgy0 | 1,645,096,689 |
LanguageTechnology | Deploying GPT-NeoX 20B: lessons learned and a focus on Deepspeed | Hello all,
Deploying and using GPT-NeoX 20B reliably in production has been quite a challenge. You basically have 2 choices: have it run on a single huge GPU, or on multiple smaller GPUs. Here are a couple of lessons I learned during this interesting journey:
[https://nlpcloud.io/deploying-gpt-neox-20-production-focus-deepspeed.html](https://nlpcloud.io/deploying-gpt-neox-20-production-focus-deepspeed.html?utm_source=reddit&utm_campaign=ehyiek56-ed8e-11eb-ba80-5242ac13d5jv)
If some of you used a different strategy, I'd love to hear about it. Also, if you have an idea about how to perform batch inference on GPT-NeoX 20B, I'm interested in hearing your thoughts.
Thanks to EleutherAI for their amazing work. Can't wait to see what's next! | 0.97 | t3_sujgxo | 1,645,084,772 |
LanguageTechnology | Relative Position Representation/Encoding for Transformer | nan | 1 | t3_sucfg7 | 1,645,061,880 |
LanguageTechnology | How to handle text data for sentence boundary detection with LSTMs? | First of all, i am complete new to the topic of NLP and in general new to machine learning.
I want to build a sentence boundary detection model for texts using Keras LSTMs. The goal is that I can give the model a text and then I get a list of labels representing each token in the given text. I have the following training data: 6 texts of different lengths, each of these texts are annotated at word level with one of the following labels: ["B-SEN", "E-SEN", "O"]. Example data:
X = [["This", "is", "text", "one", ".", ...], ["This", "is", "text", "two", ".", ...], ... ["This", "is", "text", "six", ".", ...]]
y = [["B-SEN", "O", "O", "O", "E-SEN", ...], ["B-SEN", "O", "O", "O", "E-SEN", ...], ... ["B-SEN", "O", "O", "O", "E-SEN", ...]]
I have now converted my training data X so that each token/word becomes a feature vector of 8 features. I have also encoded my labels y as integers, starting at 1 for B-SEN. Since the texts have different lengths, I have padded the training data based on the longest of my samples. The longest sample/text of these 6 texts is 80451 tokens long. The shorter texts were padded with 0-vectors in X and with 0 entries in the y arrays.
Since I am completely new to this topic, I wanted to start with a simple model consisting of one LSTM layer and then extend it later on. Now I have the question of how to model this problem/approach best? The most trivial approach in my opinion would be to say I choose my input_shape = (6, 80451, 8)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM
model = Sequential()
model.add(LSTM(128, input_shape=(6, 80451, 8), return_sequences=True))
That would mean that I would look at a whole sample at one timestep, right? Does this make sense for this problem? Most sentences have a length of < 100 tokens. For example, if I wanted my model to look at only 100 tokens per timestep per sample, how could I do that, would I have to restructure my training data?
Many thanks in advance! | 1 | t3_su8hsa | 1,645,051,010 |
LanguageTechnology | How to assess and statistically test the difference between two embedding models (applied on the same dataset)? | I am working on a supervised multi-class text classification problem. A number of phrases/responses are classified into one or more categories (with the following labels: 1 noting that it is in that category, 0 noting that it is not).
I used BERT + NN, then tried the Universal Sentence Encoder to embed again + NN. How is the difference between models going to be assessed and statistically tested?
Until now, I've seen the following metrics: Accuracy, F1, Precision and Recall. Do I just put the numbers side by side? Because the difference doesn't seem grand that way. Or is another test needed?
Finally, as another evaluator I can use an inter-rater reliability between human experts and the algorithm. I was thinking of calculating the hamming-loss of the model predictions and the humans. How can I compare both models though? Looking at both hamming loss scores?
There are a lot of sub-questions in one, but I would love to hear your ideas. | 0.91 | t3_su4au1 | 1,645,040,172 |
LanguageTechnology | Applications of DeBERTa for Sentiment Analysis | I was wondering if anyone had any cool projects that applied the DeBERTa model for a sentiment analysis task; I'm starting a new project in the field, and I'm just trying to get a good scope of the terrain. Thanks! | 1 | t3_su2y7c | 1,645,036,691 |
LanguageTechnology | Similarity of answers across groups to open-ended survey questions | Say I have some data in which vegetarians, vegans and omnivores all responded to the same set of open ended questions (presumably about eating habits) and I wanted to determine the strength / difference in their answers.
Of course I could simply look at relative word frequencies, topic models etc. But if i wanted to determine the similarity between the answers, would vectorizing each group's answers and comparing them with cosine similarity make sense?
Conceptually, I think this is interesting as I would like to see, with some quantitative value, how 'similar' vegans are to vegetarians to omnivores? Does this approach make sense? | 1 | t3_su15x3 | 1,645,031,742 |
LanguageTechnology | data format to fine tune gpt-2 for code generation |
I'm following this [https://github.com/nshepperd/gpt-2](https://github.com/nshepperd/gpt-2) repo to fine-tune the GPT-2 355M model. I've collected (comment, code) pairs from GitHub into a text file where the data have the following format:
#comment
code
<|endoftext|>
Is this the correct format for fine-tuning the GPT-2 model? | 1 | t3_sttkig | 1,645,010,493 |
LanguageTechnology | Dialects in Q&A models | Hey everyone, I have been trying to implement a Q&A model for a low resource language and I've read up a few papers for the same. I want to know if different dialects across a country might affect the model. And how I can avoid this. Since I'm going to be crowdsourcing my data for this!
I thought it would be an issue with GottBERT but they haven't mentioned anything with regard to dialects.
Any insights would be appreciated! | 1 | t3_stret9 | 1,645,002,075 |
LanguageTechnology | 8 Completely FREE Courses to Learn AI (Artificial Intelligence) | nan | 0.25 | t3_stqc3t | 1,644,997,851 |
LanguageTechnology | Collaborate with an up-and-coming NLG project! | Hey NLP community! We've just launched a project called "Friday", a GPT-3 based content generator. We are looking for users to help collaborate and provide feedback on the project. Direct message me for a 3-month trial account of Friday service, we hope our work together can create a stronger generator for the whole community to use. Thanks! | 1 | t3_stkhcp | 1,644,978,399 |
LanguageTechnology | Which vector database to use for storing embeddings? A benchmark | If you are wondering which vector databases to use, hope these blog posts serve as a reference for you:
[https://farfetchtechblog.com/en/blog/post/powering-ai-with-vector-databases-a-benchmark-part-i/](https://farfetchtechblog.com/en/blog/post/powering-ai-with-vector-databases-a-benchmark-part-i/)
[https://www.farfetchtechblog.com/en/blog/post/powering-ai-with-vector-databases-a-benchmark-part-ii/](https://www.farfetchtechblog.com/en/blog/post/powering-ai-with-vector-databases-a-benchmark-part-ii/)
The analysis is conducted by Farfetch, an e-commerce company, which surveyed the most recent, popular, and reportedly robust large-scale vector databases that can sustain conversational AI systems including Vespa, Milvus, Qdrant, Weaviate, Vald, and Pinecone. | 0.92 | t3_stp26x | 1,644,993,014 |
LanguageTechnology | precision recall calculation for key phrase extraction | Suppose I have 5 documents and I ranked the extracted keyphrases. How do I calculate precision and recall @ k? The number of keyphrases varies across documents. How does it work?
1. Is precision@k the average precision at k, i.e. 1/k (precision@1 + ... + precision@k), or is it just precision@k?
2. What if the number of extracted keyphrases varies? For example, doc 1 has 2 keyphrases and doc 2 has 3 keyphrases. How do I calculate precision@3? Is it 1/3 ((doc1 p@1 + doc2 p@1)/2 + ... + (doc1 p@3 (= 0) + doc2 p@3)/2)? Please bear with the somewhat unclear example. | 1 | t3_stccso | 1,644,956,531 |
LanguageTechnology | XLMRoberta validation loss increases with mcc score. | I am trying to fine-tune the XLMRoberta model on a somewhat complex problem, using the parameters prescribed in the paper (lr of 2e-5, etc.). However, the validation loss (cross-entropy) keeps increasing with epochs, along with other metrics like the F1 score and MCC score! Although everyone suggests using just 2-4 epochs of training, when I try early stopping (on MCC score) I notice that the accuracy keeps improving with epochs. Is this a valid scenario? As per machine learning logic it shouldn't be the case. And what is expected if I fine-tune a BERT model for many epochs? | 1 | t3_stc0d7 | 1,644,955,621 |
LanguageTechnology | Embedding 3 Segments, Each With Its Own [CLS] Token in BERT | My goal is to build a model that takes embeddings of 3 segments, concatenates them, and passes them through a single linear layer for some classification.
My strategy right now is to surround each segment with a \[CLS\] and \[SEP\] token, so my input to BERT looks like:
\[CLS\] segment 1 \[SEP\] \[CLS\] segment 2 \[SEP\] \[CLS\] segment 3 \[SEP\]
I then grab the final hidden state from each \[CLS\] token and concatenate them before passing through the final layer. My question is, does this strategy make sense, i.e.:
\-will the first \[CLS\] token encode something meaningful about segment 1?
\-will the second \[CLS\] token encode something meaningful about segment 2?
\--will the third \[CLS\] token encode something meaningful about segment 3?
I understand BERT was trained with two segments, but I've also seen work where people are inserting a bunch of CLS tokens.
Thanks in advance, and if anyone has any suggestions for how this can be done better that'd be greatly appreciated as well! | 1 | t3_st8ey2 | 1,644,946,519 |
LanguageTechnology | Named Entity Recognition - Label-Label Transitions | In NER, what are label-label transitions? And what do the weights refer to?
Am I right in thinking that this is the relationship strength between the labels (e.g. I-LOC and B-LOC)?
Thanks | 0.5 | t3_st7nbq | 1,644,944,523 |
LanguageTechnology | can someone point me to research of a minimal language word set that can be used to describe most other words? | Has anyone done research like this?
For example "to run" can be described as move fast.
move is a base word used in many other word definitions like:
drive "to move in fast object with wheels"
I've been trying to find this research, but can't find anything good, can anyone help point me to some good research? | 1 | t3_st3rng | 1,644,934,096 |
LanguageTechnology | Using Transformer for Topic Modeling - what are the options? | I need to generate labels that semantically describe groups of words as fittingly as possible, so I've been looking into topic modeling and want to know what the options are.
What are the state-of-the-art libraries/models out there? So far I've looked into BERTopic ([https://github.com/MaartenGr/BERTopic](https://github.com/MaartenGr/BERTopic)).
Thanks
**EDIT:** I'm especially interested in finding **words/sequences that best describe** topics (i.e., finding the word "animal" for "cat", "dog", "bear"). | 0.85 | t3_st2zh7 | 1,644,931,819 |
LanguageTechnology | Averaging sentence embeddings to create multi-sentence embeddings? | Hey everyone.
So, I'm getting started on a project where I'm trying to extract information from short texts with an unsupervised approach. The plan is for me to cluster these texts. I know of Topic Modeling but I'm looking into other options too.
While the texts are short, they can contain multiple sentences. I'm basically wondering if it would be considered fine to use a sentence transformer like Sentence-BERT to create sentence embeddings, and for each text with multiple sentences, average the embeddings to create a single one?
Feel free to suggest other ideas to create a single embedding for something that contains multiple sentences. | 0.94 | t3_st1si5 | 1,644,927,902 |
LanguageTechnology | System for Semantic Role Labeling | Anyone know of a decent SRL parser? I'd prefer one that produces output based on the Propbank v3 frameset, not the v2 used in the original OntoNotes release but I'm not sure I'm going to get to be picky. If you know of a decent one, let me know. "Papers with Code" has some links but they require training. I was hoping to find something in a little more of a released state. It doesn't look like Spacy or Stanford's NLP suites offer this functionality. | 0.88 | t3_ssumex | 1,644,900,186 |
LanguageTechnology | Are there any datasets/models that address the connotation of a word? | For example, the word "blood", with the same meaning, can connote:
1. Family: "blood" is thicker than water
2. Violence: there was "blood" in the streets
3. Passion: he got my "blood" racing
4. Biological substance: "blood" pressure
The language models I know have a single meaning for a word, and attention models don't address meaning, but rather usage and prediction.
Before I go down this research path, I'd like to know if there is already literature surrounding this problem. | 1 | t3_ssatsr | 1,644,845,901 |
LanguageTechnology | Dependency Parsing (python) | I have predicted the dependency relation for each word in a sentence. How do I find the syntactic head of each of these words, given that my data is now of the form list(deprel) for a sentence, in order to construct the dependency tree? | 0.81 | t3_ss898f | 1,644,837,261 |
LanguageTechnology | Knowledge Base for Portuguese | Hi,
I want to do Entity Linking in text written in Portuguese and just found out that spaCy has an [EntityLinker](https://spacy.io/api/entitylinker) component in its pipeline. I only need a Knowledge Base database.
Do you know about Knowledge Base repositories? I'm looking for one more targeted to Portuguese, in particular European Portuguese (most of the entities I will find in the texts are from Portugal). | 1 | t3_ss744n | 1,644,832,852 |
LanguageTechnology | Using LDA to categorize my blog posts | My problem is that the lda/js algorithm I'm using returns terms with probabilities. I would like to feed the algorithm a list of allowed categories and have it return N categories with probabilities, not terms.
I'm using this : [https://github.com/primaryobjects/lda](https://github.com/primaryobjects/lda) . I don't know if Gibbs sampling is the answer.
Example of what I have at the moment: categories should be like "Life", "Society", "Art", "Business" etc... Right now the categories I have are "Traffic", "Music" and "Money".
[
[
{ term: 'traffic', probability: 0.072 },
{ term: 'people', probability: 0.07 },
{ term: 'reasons', probability: 0.056 },
{ term: 'website', probability: 0.048 },
{ term: 'readers', probability: 0.044 }
],
[
{ term: 'music', probability: 0.047 },
{ term: 'feel', probability: 0.035 },
{ term: 'problem', probability: 0.027 },
{ term: 'partners', probability: 0.027 },
{ term: 'jobs', probability: 0.025 }
],
[
{ term: 'money', probability: 0.055 },
{ term: 'blog', probability: 0.045 },
{ term: 'started', probability: 0.034 },
{ term: 'give', probability: 0.028 },
{ term: 'business', probability: 0.028 }
]
] | 0.84 | t3_srqflu | 1,644,779,743 |
LanguageTechnology | Noob question: how hard is it to pair quotes and names from the text of a news article? | I was reading an article on a political website that was describing how a particular politician had said something that more or less contradicted something they had said a year or two ago. I thought, wouldn't it be nice to have a public website where you could look up quotes that appeared in news articles to see a person's public statements over time, but also with the context of the time. As a web developer, building a site to scrape articles and a database to hold the person, quote, and original source article is something I understand, but to automate the process of pairing a quote in the article to the speaker, and do it *reliably*, is more difficult (at least to me). Any thoughts on how hard that would be with NLP or a similar approach? Thanks in advance. | 0.93 | t3_srmrqt | 1,644,770,461 |
LanguageTechnology | What are the State of the art Approaches for Long Text Similarity? | Hi everyone,
What are the state of the art approaches for measuring long text similarity? I've been working on this subject and noticed that measuring semantic similarity between short texts is somewhat easier, but I haven't achieved good results for longer texts. Do you guys have any suggestions on what I could try for this task? It would be awesome if you could even share some code.
I tried sentence BERT for example, but even for BERT I am not getting very good results and I thought that I'd get good results since BERT is the state of the art for most of the other tasks. (I sentence tokenized the texts, took the mean of the sentence BERT representations for all the sentences in a single long text and finally took the cosine similarity between the resulting vectors) | 0.95 | t3_sr0z3i | 1,644,699,231 |
LanguageTechnology | The Best Machine Learning Courses on Udemy (2022) | nan | 0.4 | t3_sqo123 | 1,644,659,336 |
LanguageTechnology | 3Blue1Brown Solving Wordle using information theory - YouTube | nan | 0.98 | t3_sqgr1y | 1,644,633,542 |
LanguageTechnology | NLP: Evaluation of Summarization Task | ROUGE score is heavily used for the evaluation of summarization task in NLP but it only considers n-gram overlap rather than contextual similarity.
We want to check the feasibility of BertScore(arXiv:1904.09675) as an evaluation metric.
We collected some samples from our in-house dataset (with human-generated ground-truth summaries) and generated summaries for them using abstractive models like Pegasus and BART. We then had people rate those summaries between 1 and 5.
Query: Now, to check if BertScore is a better evaluation metric, we plan to compare the correlation of human ratings with both rouge score and BertScore. Is it the right way to proceed? Any suggestions?
(we are focused on financial news data, and we cater to an audience with finance background mainly) | 0.67 | t3_sq6cgy | 1,644,604,786 |
LanguageTechnology | Deepmind’s Latest Machine Learning Research Automatically Find Inputs That Elicit Harmful Text From Language Models | Large generative language models (LMs) such as GPT-3 and Gopher have proven the ability to generate high-quality text. However, these models run the danger of producing destructive text. Therefore, they are challenging to deploy due to their potential to hurt people in ways that are nearly impossible to forecast.
So many different inputs can lead to a model producing harmful text. As a result, it’s challenging to identify all scenarios in which a model fails before it’s used in the actual world. [**Continue Reading**](https://www.marktechpost.com/2022/02/11/deepminds-latest-machine-learning-research-automatically-find-inputs-that-elicit-harmful-text-from-language-models/)
Paper: https://arxiv.org/pdf/2202.03286.pdf | 1 | t3_sq4wbd | 1,644,600,913 |
LanguageTechnology | Document type reckognition. | I am dealing with documents, 100 different documents. Some are standard (1040, W2) and some are not (Insurance Policy, bank statement....)
I need to develop a system that I would feed a document and it will spit out which doc type it is.
Not sure how to go about it. Should I vectorize all words, so that my input layer has (let's say) 2000 words and the output layer has 100 softmax neurons showing the probability of those 2000 words belonging to a specific doc type?
PS: Sorry, misspelled the title, should be "recognition" | 1 | t3_sq2sb1 | 1,644,595,399 |
LanguageTechnology | NER - How to determine covariance of entities in named entity recognition? | How would I go about deriving the correlation or covariance between entities within text? Relation extraction seems to be more focussed on providing more context to the entities, whereas I am looking to calculate the correlation of entities.
The entities I am looking to use in building the model are likely to have significant overlap, which I would be interested in identifying. For example, would I look for overlap in identified text for different entities, or look at word distance between co-occurring entities?
For example, I have a sample sentence of "*Communication \[entity1\]* skills are key for *interpersonal \[entity2\]* development, *groupwork \[entity3\]* and *presentations \[entity1\]*." I would like to determine the strength of entity1, entity2 and entity3, but I am uncertain how best to approach this problem. Would I look to e.g. calculate the average word distance between entities, or something else?
The datasets I am looking to run this on will be large (multiple webpages of text, rather than sentences), so I don't think simply collecting counts would be appropriate; the approach needs to take into account how close together the entities appear.
Thanks! | 1 | t3_spv4y1 | 1,644,570,633 |
LanguageTechnology | How to add a global attribute to an input sentence of a pre-trained language model? | I want to add a global attribute (say text style) to the input sentence of a pre-trained model like BERT for downstream tasks. Should I
1. replace the \[CLS\] token with the style embedding
2. add the style embedding before or after the \[CLS\] token
3. add the style embedding to each of the tokens in the sentence
4. other methods
Which one is the recommended way? or it depends? | 0.93 | t3_spv25a | 1,644,570,313 |
LanguageTechnology | Attempting to rewrite BERT codebase into RoBERTA, running into shape issue. | nan | 1 | t3_spkuk1 | 1,644,538,373 |
LanguageTechnology | Corpus of news articles about Politicians? | HI!
I've been looking around for something I could use but nothing has jumped out at me. I'm looking for a corpus of news articles about politicians. Specifically, I'm looking for a database I can use to feed a neural network the article, and the subject's sex (male/female).
If I can find a corpus about politicians, I can do the manual labor of storing it as male/female myself.
These articles could be something with like "Biden repeats earlier statement regarding....", or "Washington halts Greene's progress on...", etc.
Any sort of guidance is helpful.
Thanks! | 0.86 | t3_spaigr | 1,644,510,448 |
LanguageTechnology | The computational cost of Text Classification with Universal Embeddings? | Hi. I am not experienced with AI but I am trying to use a simple model for one of my personal projects.
The question is: what would be the computational complexity (just an estimate of it) of text classification with universal embeddings for about 1000 data inputs? For example, I have 1000 sentences, and some of them may mean the same thing, so I want to create a list of groups of sentences with the same meaning. It would look like this -> Input: 1000 sentences. Output: Group 1: 500 sentences mean this thing (showing the sentences), group 2: 300 sentences mean this thing, and group 3: 200 mean another thing (whatever the sentences might be). I am interested in how computationally expensive that would be. Would it run on a phone, assuming the input is 1000 sentences?
Probably using this: [https://www.npmjs.com/package/@tensorflow-models/universal-sentence-encoder](https://www.npmjs.com/package/@tensorflow-models/universal-sentence-encoder) but I am not sure if that's the best way. Also, does this model make requests to google to do the heavy lifting?
Also, please don't hesitate to correct me if I got it all wrong... Most likely I got it all wrong.
Thanks. | 1 | t3_sp8tra | 1,644,505,998 |
LanguageTechnology | Is there a database of books for processing? | Hello everyone!
I want to know whether there exists a repository of books for NLP processing. Here in Brazil, we have one database with most Brazilian literature: [https://www.literaturabrasileira.ufsc.br/?locale=pt\_BR](https://www.literaturabrasileira.ufsc.br/?locale=pt_BR). | 1 | t3_sp6asn | 1,644,498,673 |
LanguageTechnology | [P] What we learned by accelerating by 5X Hugging Face generative language models | nan | 1 | t3_sp2oy5 | 1,644,485,600 |
LanguageTechnology | 9 Best Courses to Learn to TensorFlow | nan | 1 | t3_sp1ery | 1,644,480,591 |
LanguageTechnology | Are there models like "punctuation_en_bert" from Nvidia for other languages, in particular German? | Hi guys, I want to perform some analysis on transcripts that lack punctuation. The lack of it distorts the results, so I found that there is "punctuation_en_bert" from Nvidia that inserts punctuation back. It does a great job for English. I need something like this for German as well. I wonder if that exists, though. Can you point me in the right direction? | 0.94 | t3_soqap4 | 1,644,446,949 |
LanguageTechnology | What Keeps You Going ? | Noam Chomsky | nan | 1 | t3_sofq8n | 1,644,419,133 |
LanguageTechnology | Transformer model comparison for binary sentiment classification | Hi everyone,
On two independent datasets, I am comparing XLNet and BERT models with binary sentiment classification tasks: the Twitter dataset, where sentences are short, and the IMDB review dataset, where sentences are long.
On the Twitter dataset, BERT matches and slightly outperforms XLNet, but XLNet outperforms BERT on the IMDB dataset. I understand that XLNet captures longer dependencies due to the Transformer XL architecture and so outperforms BERT; but, what additional reasons may exist for one to outperform the other for a certain dataset? Why is BERT more successful, or at least comparable to XLNet, in classifying social media sentiment? | 1 | t3_soe7gc | 1,644,415,039 |
LanguageTechnology | Brainstorming an Approach to Label Specific Narrative Text | Hi! I'm currently working on a project that involves labeling specific narrative text - for example, finding descriptions of COVID-19 hospital situations throughout Facebook posts from March 2020 to current. I'm not too well-versed with the neural models (know how they work, but haven't played around with transformers or anything) - currently thinking of employing something along the lines of a random forest using various engineered features (such as n-grams, whether certain phrases of interest occur, etc.) but believe that it will not be expressive enough to fit large text data.
Any ideas on technologies and models to look into for problems like this? Thanks! | 0.76 | t3_so49dg | 1,644,379,983 |
LanguageTechnology | For those who get overwhelmed by the maths for machine learning 👇 | nan | 0.5 | t3_so2xzs | 1,644,375,995 |
LanguageTechnology | Master's thesis topic? - Policy/Government related | Hi everyone,
I'm struggling to frame the topic of my master's thesis, so I'm calling reddit to the rescue!
Ideally, I'd like to work on something related to policy/government/democracy. I have a few government surveys (50k+ answers) with open ended questions, which were poorly analyzed. I would like to do something a bit more interesting than a simple clustering of the answers, any idea?
Of course, any other suggestions are welcome!
Thanks! | 1 | t3_snux8b | 1,644,354,526 |
LanguageTechnology | I would like to put ancient Greek texts through a neural network, in order to individuate multiple authors within them. Where should I get started? | Disclaimer: I know absolutely nothing about NLP.
I study ancient Greek texts and would love to analyse parts of a single text in order to test hypotheses on its multiple authors. I suspect the text, which is said to have been written by a single author, was actually written by multiple authors.
Is there some online resource you would recommend, or some tips you might have as for testing such a hypothesis through NLP? | 1 | t3_snuwjz | 1,644,354,479 |
LanguageTechnology | What tools do you want when transforming raw text to dataset | Hi,
I always feel it's a headache to transform raw text (like wiki data) into a structured dataset for training, so I am thinking about contributing some open-source utility functions to ease NLP work. One thing I am thinking about is Masked Language Modeling: it would be helpful to have a function `mask_tokens(text)` that randomly masks tokens for us.
My knowledge is very limited...so I am posting to ask for your opinions: are people aware of any existing NLP data processing tools? Also what other tools you think maybe helpful?
Many thanks!! | 0.92 | t3_snrfc8 | 1,644,345,637 |
LanguageTechnology | Alternative to sentence semantic similarity? | How do we model the relationship between two sentences that are connected but not semantically related? Can BERT be used for this kind of modeling?
EDIT: 'connected' can mean any relationship that may exist between two sentences 1 & 2 (not necessarily semantics or meaning based). The task is around feeding sentence 1 as input to a bert-based model so that it outputs sentence 2. | 1 | t3_snr64z | 1,644,344,989 |
LanguageTechnology | AI-modified short story study | Hi all, I am doing a PhD on personalisation and narratives, and for this, I created a user study where a short story of about 4000-5000 words has been modified by AI. The participants should also do a very short personality test and answer a few questions on what they thought of the story. There are also Amazon vouchers worth 5 GBP available for those doing this now! It's at [https://cci.arts.ac.uk/\~wnybom/cloak.html](https://cci.arts.ac.uk/~wnybom/cloak.html) | 0.67 | t3_snj92r | 1,644,323,997 |
LanguageTechnology | Multi-Class Classification NLP | I am trying to build a multi-class text classification model with 90 classes. The data is quite imbalanced, with some of the classes having fewer than 100 samples while others have over 1200 samples. Currently I am using BERT Base with cross-entropy as the loss. However, I am seeing very low accuracy for some of the classes. The classes with low accuracy are not necessarily those with a low number of samples.
I have already tried using Focal Loss, Dice Loss and Weighted Cross entropy as Loss functions.
What else can be tried ? | 1 | t3_sncs76 | 1,644,299,984 |
LanguageTechnology | Are translation models useful for generating synthetic data? | I am working on a project which requires data in low resource languages (mainly Indic). Would it be acceptable to use translated data for something like this? I understand there are some issues about validating the translation quality of the model. Would that be required even if we were using something like the Google Translate API? Any leads to resources/papers which have done this before will be much appreciated. | 0.84 | t3_sn6ar0 | 1,644,281,373 |
LanguageTechnology | Unix utility for machine translation | Is there any Unix utility which performs machine translation from the command line without connecting to some company’s API online?
It should be something that just works out of the box; it’s ok if it only works in limited ways.
Thanks very much | 1 | t3_smz9n2 | 1,644,263,925 |
LanguageTechnology | Any pointers as to creating a bot that generates silly quotes? | Title pretty much sums it up. I am looking for any pointers you might have to help me build a bot that generates silly quotes. | 0.78 | t3_smydor | 1,644,261,693 |
LanguageTechnology | Extracting useful information from product reviews | I'm trying to extract useful facts from product reviews, to make the buying decision easier.
I do that by letting humans annotate the phrases that contain the most common word n-groups, and then trying to find other n-groups, that come up in similar contexts as the annotated ones.
At the moment, what I'm generating looks like this: [https://www.rantyu.com/handcreams/productinfo.php?pc=B004RRH90Q](https://www.rantyu.com/handcreams/productinfo.php?pc=B004RRH90Q)
When a fact seems to be common, I also use it as a search filter:
[https://www.rantyu.com/handcreams/search.php](https://www.rantyu.com/handcreams/search.php)
1. How would you approach this problem?
2. What other useful info would you add on the product page, besides the extracted facts, product info, frequent nouns and frequent adjectives? | 1 | t3_sms0pk | 1,644,245,488 |
LanguageTechnology | How to create a broad/representative sample from millions of records? | Hey all.
I have millions and millions of small text documents. Within the project, I’m looking to generate a relatively small sample to use as a training dataset.
I haven’t been able to find anything really about this… which makes me think I’m just searching the wrong terms. The one [stackoverflow comment](https://datascience.stackexchange.com/questions/81005/sampling-methods-for-text-datasets-nlp) I found goes unanswered…
I do have some ideas if I have to push on blindly. Maybe do some doc2vec and then work on creating a uniform subset from the vectors? Haven’t given this much thought yet, as you can tell.
Tldr is I’m looking for blogs/papers/packages to sample a large text dataset, resulting in a (relatively) small sample to use. | 1 | t3_smd223 | 1,644,195,833 |
LanguageTechnology | BERT: Understanding the Null response threshold in QA systems | nan | 0.92 | t3_smar5h | 1,644,189,542 |
LanguageTechnology | [R] PromptBERT: Improving BERT Sentence Embeddings with Prompts. tl/dr For sentence embeddings, an input text prompt out performs average pooling and the CLS token. Anyone else confused by this? | nan | 1 | t3_slkn3o | 1,644,105,704 |
LanguageTechnology | OpenAI Team Introduces ‘InstructGPT’ Model Developed With Reinforcement Learning From Human Feedback (RLHF) To Make Models Safer, Helpful, And Aligned | A system can theoretically learn anything from a set of data. In practice, however, it is little more than a model dependent on a few cases. Although pretrained language models such as Open AI’s GPT-3 have excelled at a wide range of natural language processing (NLP) tasks, there are times when unintended outputs, or those not following the user’s instructions, are generated. Not only that, but their outcomes have been observed to be prejudiced, untruthful, or poisonous, potentially having harmful societal consequences.
OpenAI researchers have made substantial progress in better aligning big language models with users’ goals using reinforcement learning from human feedback (RLHF) methodologies. The team proposed [InstructGPT](https://openai.com/blog/instruction-following/) models that have been demonstrated to produce more accurate and less harmful results in tests.
[**Continue Reading**](https://www.marktechpost.com/2022/02/05/openai-team-introduces-instructgpt-model-developed-with-reinforcement-learning-from-human-feedback-rlhf-to-make-models-safer-helpful-and-aligned/)
OpenAI Blog: [https://openai.com/blog/instruction-following/](https://openai.com/blog/instruction-following/)
Paper: [https://cdn.openai.com/papers/Training\_language\_models\_to\_follow\_instructions\_with\_human\_feedback.pdf](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf) | 0.94 | t3_sler0f | 1,644,089,478 |
LanguageTechnology | Locality-sensitive hashing in Reformer model | nan | 1 | t3_slci7r | 1,644,082,664 |
LanguageTechnology | Using Physics to Teach Computers to Speak! Unity3D + GPT Project Overview | nan | 0.67 | t3_skho3l | 1,643,992,372 |
LanguageTechnology | How to obtain probability for entire sequence (Huggingface transformers) |
I want to encode certain, predetermined sentences, such as e.g.
s1 = "He is a good-hearted person."
s2 = "He is a blockheaded person."
and compare the **overall probabilities** for each of these sequences, to see which is more likely.
How do I obtain such probabilities using any of the huggingface pretrained transformers? | 1 | t3_skdpoj | 1,643,982,522 |
LanguageTechnology | Top 10 Sentiment Analysis APIs | nan | 0.54 | t3_skafa3 | 1,643,971,665 |
LanguageTechnology | wav2vec2 | How do we calculate the receptive field from the convolution configuration?
eg. wav2vec2 paper says
The feature encoder contains seven blocks and the temporal convolutions in each block have 512 channels with strides (5,2,2,2,2,2,2) and kernel widths (10,3,3,3,3,2,2). This results in an encoder output frequency of 49 hz with a stride of about 20ms between each sample, and a receptive field of 400 input samples or 25ms of audio. | 0.72 | t3_sk8au5 | 1,643,963,311 |
LanguageTechnology | auto-regressive decoder in Transformer | nan | 1 | t3_sk86yr | 1,643,962,885 |
LanguageTechnology | txtai 4.1 released - semantic search workflows with scheduling | nan | 0.9 | t3_sk0xqg | 1,643,939,411 |
LanguageTechnology | [Urgent] Training mean-teacher for token level classification | I am planning to apply mean-teacher to my token classification problem. Since adding different noise for the teacher and the student is really important for the approach, I am confused about how to calculate the consistency cost, as the length of the active logits would differ. E.g., if I use synonym noise, it can happen that the sentence becomes longer (some tokens may be replaced by a synonym of length 2) when given to the teacher model, while the same augmentation/noise may generate a sentence of a different length when given to the student model. Can anyone please help, as I am stuck. Paper link for ref.: [https://arxiv.org/abs/1703.01780](https://arxiv.org/abs/1703.01780) | 0.76 | t3_sjys2g | 1,643,933,517 |
LanguageTechnology | Holy $#!t: Are popular toxicity models simply profanity detectors? [OC] | nan | 0.96 | t3_sjxsem | 1,643,930,842 |
LanguageTechnology | How to represent sequential triples in an ontology? | I have the sentence "The leaf was green before it was brown." and I would like to represent it using an ontology.
Obviously, we have two triples:
:leaf :was "green", "brown" .
However, how do we represent that `:leaf :was "green"` comes before `:leaf :was "brown"`?
I'm thinking there has to be an event like:
_:a a :event ; :changes :leaf ; :traitChanged :color ; :from "green" ; :to "brown" .
What are people's thoughts on this?
I tried comparing to Stanford CoreNLP annotation, but it just returns this:
1.0 leaf was green before brown
1.0 leaf was green
1.0 leaf was green brown
1.0 it was brown
1.0 it was before brown | 1 | t3_sjxsba | 1,643,930,834 |
LanguageTechnology | Need advice for a domain-specific QA system. | I am trying to build a question answering system on instruction manuals (the kind that comes with electrical appliances). I am still at a very initial phase and I have some doubts.
1. The manuals are in PDF format. They are highly unstructured with tables, images and text in no specific format (different appliance manufacturers have different formats). I have tried various techniques and ways to extract text and split the pdf into smaller sub-documents (like page-wise split, paragraph-wise split, section-wise split, etc.) but none of them seem to be "the solution" as each approach has its own drawbacks. Are there any better ways to extract text with any suggestions for pre-processing of data and building a document store?
2. I have tried various pre-trained models on small paragraphs of text. They give near accurate answers to most of the short answer type questions but struggle if the queries are more technical. Moreover the models aren't fit for LFQA. I've been looking into Extractive QA but I am a little lost with so much of information. What adds to the problem is the lack of domain specific dataset that could've helped in fine-tuning the model.
Can you please suggest any resources, code-implementations or projects I could refer to that might help. Any sort of suggestion or thought will be immensely appreciated. Thank you for taking time to read this. | 1 | t3_sjoct8 | 1,643,907,927 |
LanguageTechnology | Looking for partners for Kaggle competition | Hello! I'm looking for partners for [an NLP Kaggle competition](https://www.kaggle.com/c/feedback-prize-2021/overview). I have a BA in linguistics and experience developing classical NLP models (ontologies and decision trees based on SEMs). I also have a good understanding of NNs and the mathematics that underlie the models, but I don't have any experience developing them.
I would love to team up with someone who's available to put in the work to have a great portfolio piece and maybe even win some money. | 0.87 | t3_sjoaf0 | 1,643,907,761 |
LanguageTechnology | [Project] Refining the Natural language processing course - Feedback v2 and thank you | I’m Sourabh, I lead one of the core Tensorflow teams at Google Brain and worked on data products at Coursera with Andrew Ng. Kaushik Rangadurai, ML Engineer at Facebook and I are leading a live, cohort based course on NLP starting March 14th. [https://corise.com/course/natural-language-processing](https://corise.com/course/natural-language-processing).
This is the second run of the class and we learned a lot from the feedback of the reddit community from the first run in November. Some of the changes we're making from the previous iteration:
1/ More focus on transformers and less on RNN/LSTM, as Hugging Face is becoming the de facto standard for this content.
2/ PyTorch Lightning has some really easy-to-use interfaces, so we're better organizing the boilerplate code.
3/ OpenAI has opened up the GPT-3 API, so we'll take a deeper dive into current possibilities.
Would love to continue getting feedback and build this to be a great resource. The plan is to open the content after we refine it to a degree we're happy with. You can join the course (capped at about 30 students) at the link above. If you’re open to giving feedback on the class on how we can do better, happy to give a discount. | 0.87 | t3_sjnp6o | 1,643,906,372 |
LanguageTechnology | References for multi-lingual digital dictionaries? | I see e-readers have very nice digital dictionaries. Where can we find digital dictionaries for multiple languages? Any note about digital dictionaries that can be consumed with code would be appreciated. It would be perfect if they included POS tags and lemmas. | 1 | t3_sjmnwv | 1,643,903,843 |
LanguageTechnology | Local text generation (InferKit alternative) | [This Radiohead post](https://old.reddit.com/r/radiohead/comments/sbg4is/i_used_an_ai_program_to_predict_the_rest_of/) is a good example of what [InferKit's Text Generation](https://inferkit.com/docs/generation) can do with the [demo page](https://app.inferkit.com/demo).
Does anyone here know of any free software programs or frameworks that can do this sort of text generation locally with a free model? | 1 | t3_sjmexo | 1,643,903,192 |
LanguageTechnology | A library for storing and retrieving tag data | Here is a library that allows accessing different nlp taggers (esp. spacy, flair, stanza) from a unified API. Also allows saving / loading the results for larger document collections. Might be useful to other people.
[https://github.com/poke1024/nlabel](https://github.com/poke1024/nlabel) | 0.86 | t3_sjj7xw | 1,643,894,986 |
LanguageTechnology | Reversible layer for Transformer | nan | 0.67 | t3_sjh2l8 | 1,643,888,295 |
LanguageTechnology | 11 Best Natural Language Processing Online Courses | nan | 0.64 | t3_sjf980 | 1,643,881,711 |
LanguageTechnology | Really need some guidance with my dissertation - information extraction on political text | I am aiming to do information extraction on Hansard, a record of what is said in UK Parliament. I have 3 months and I am not even sure where to start. I did almost nothing last semester between a really shitty unit sucking up my time and depression.
My supervisor does not seem very in to the field and has other focuses.
I have tried applying Stanford's OpenIE to some of the text. It outputs trash. I had intended to aim to make improvements on it, eg improved NER, coreference resolution, filtering input based on sentiment analysis, then maybe some kind of filtration on the output to reduce false positives. A major issue is that it is very poor at NER, outputting relations with fragmented boundaries. I don't know if attacking these problems is feasible in 3 months.
I have 3 months. I am losing my fucking mind. Any pointers are appreciated.
edit
maybe just scrapping all of that and instead doing some sentiment analysis to investigate MPs' views on other MPs and parties over time would be more feasible. I would have a much smaller named entity domain. | 1 | t3_sj00by | 1,643,836,257 |
LanguageTechnology | Is it a good idea to combine encoders and decoders from different models? | I was looking at a problem where I only need the decoder of a specific architecture. So would it be a good idea to train my encoder and use a pre-trained decoder from that model? | 1 | t3_siars7 | 1,643,762,955 |
LanguageTechnology | Stand-alone sentence segmenter | Does anyone know a good standalone sentence segmentation tool / method that isn’t part of a broader NLP module like NLTK or Spacy?
Ideally, just a single standalone function. I could pip install it or maybe even just copy and paste in some code into a file.
Thanks very much | 1 | t3_si7cs8 | 1,643,753,886 |
LanguageTechnology | Espial - NLP tool to automate discovery and organization of personal knowledge | nan | 0.76 | t3_si604g | 1,643,750,525 |
LanguageTechnology | Treebanks with PTB style bracketing | Hi everyone!
Are there any corpora or treebanks that are labeled with Penn Treebank-style constituency trees (other than the PTB itself)?
I'm investigating the usage of a certain syntactic construction in English. I'm using the PTB constituency trees, but it would be nice to have as much data as possible. I found the Georgetown University Multilayer Corpus (GUM), and I'm wondering if there are others.
Thanks in advance! | 1 | t3_shq4rt | 1,643,704,333 |
LanguageTechnology | Where to find an up-to-date list of the top-k most common words in English | I need to identify the \~ 10 000 most common words in the English language as of some reliable / well-established corpus. I intended to use this repo [https://github.com/first20hours/google-10000-english](https://github.com/first20hours/google-10000-english)
but I realized that it is somewhat outdated. Any ideas where I could find a comprehensive, up-to-date list of the most common words in English? Thanks :) | 0.92 | t3_sh7h2x | 1,643,651,868 |
LanguageTechnology | CKY algorithm |
Hey,
Could someone please simply explain the CKY algorithm, how it works?
let the input be a string I consisting of n characters: a1 ... an.
So the input is just characters, not tokenized words?
let the grammar contain r nonterminal symbols R1 ... Rr, with start symbol R1.
What are non terminal symbols?
let P[n,n,r] be an array of booleans.
This is a three dimensional array? Why does n occur twice?
Initialize all elements of P to false.
for each s = 1 to n
for each unit production Rv → as
set P[1,s,v] = true
What is unit production?
for each l = 2 to n -- Length of span
for each s = 1 to n-l+1 -- Start of span
for each p = 1 to l-1 -- Partition of span
for each production Ra → Rb Rc
if P[p,s,b] and P[l-p,s+p,c] then set P[l,s,a] = true
if P[n,1,1] is true then
I is member of language
else
I is not member of language
https://en.m.wikipedia.org/wiki/CYK_algorithm
Thanks very much | 1 | t3_sh0cip | 1,643,632,847 |
LanguageTechnology | Current book about extracting text from structured and unstructured documents (PDFs, Word, Excel)? | I am quite used to the Machine learning aspects of NLP, but I am lacking knowledge on how to make raw texts accessible and how to handle meta data. Is there a good book on this - preferably in Python? | 1 | t3_sh08de | 1,643,632,435 |