Dataset schema (one record per Reddit post):

sub: string (4 distinct values)
title: string (3 to 304 chars)
selftext: string (3 to 30k chars)
upvote_ratio: float64 (0.07 to 1)
id: string (9 chars)
created_utc: float64 (1.6B to 1.65B)
Python
I broke Math. I generated a quantum random number in Python. Split the decimal part of it into groups of 5 digits. Found every single group in the decimal expansion of PI (just up to a million digits). Somebody tell me I'm wrong or give me a Nobel prize. I won't sleep tonight.
    import quantumrandom as qr

    with open("piDigits.txt", 'r') as f:
        pidecimals = f.read()
    f.close()

    maxrange = int(input("Enter Maxmimum of Range "))
    count = 0
    for i in range(0, maxrange):
        myx = qr.randint()
        print(f"Quantum Number {myx}")
        decimalstring = str(myx)
        substring1 = decimalstring[2:7]
        substring2 = decimalstring[7:12]
        substring3 = decimalstring[12:17]
        if substring1 in pidecimals:
            count += 1
            print(f"Quantum digit sequence {substring1} found in Pi")
        if substring2 in pidecimals:
            count += 1
            print(f"Quantum digit sequence {substring2} found in Pi")
        if substring3 in pidecimals:
            count += 1
            print(f"Quantum digit sequence {substring3} found in Pi")
    print(f"Quantum digit sequences found in Pi {count} times out of {3*maxrange}")

Sample Output: (Ran the code many times to look for an exception. None found. Yet.)

`Enter Maxmimum of Range 20`
`Quantum Number 4.434424353398947`
`Quantum digit sequence 43442 found in Pi`
`Quantum digit sequence 43533 found in Pi`
`Quantum digit sequence 98947 found in Pi`
`Quantum Number 9.409781033035783`
`Quantum digit sequence 40978 found in Pi`
`Quantum digit sequence 10330 found in Pi`
`Quantum digit sequence 35783 found in Pi`
`Quantum Number 9.769588769359885`
`Quantum digit sequence 76958 found in Pi`
`Quantum digit sequence 87693 found in Pi`
`Quantum digit sequence 59885 found in Pi`
`Quantum Number 7.9896238651102465`
`Quantum digit sequence 98962 found in Pi`
`Quantum digit sequence 38651 found in Pi`
`Quantum digit sequence 10246 found in Pi`
`Quantum Number 1.8486305027847716`
`Quantum digit sequence 84863 found in Pi`
`Quantum digit sequence 05027 found in Pi`
`Quantum digit sequence 84771 found in Pi`
`Quantum Number 5.2133974212253`
`Quantum digit sequence 21339 found in Pi`
`Quantum digit sequence 74212 found in Pi`
`Quantum digit sequence 253 found in Pi`
`Quantum Number 3.3063248645761805`
`Quantum digit sequence 30632 found in Pi`
`Quantum digit sequence 48645 found in Pi`
`Quantum digit sequence 76180 found in Pi`
`Quantum Number 6.982528419928283`
`Quantum digit sequence 98252 found in Pi`
`Quantum digit sequence 84199 found in Pi`
`Quantum digit sequence 28283 found in Pi`
`Quantum Number 4.838788433661402`
`Quantum digit sequence 83878 found in Pi`
`Quantum digit sequence 84336 found in Pi`
`Quantum digit sequence 61402 found in Pi`
`Quantum Number 0.5404745555809872`
`Quantum digit sequence 54047 found in Pi`
`Quantum digit sequence 45555 found in Pi`
`Quantum digit sequence 80987 found in Pi`
`Quantum Number 5.2051575494010835`
`Quantum digit sequence 20515 found in Pi`
`Quantum digit sequence 75494 found in Pi`
`Quantum digit sequence 01083 found in Pi`
`Quantum Number 2.928511482413977`
`Quantum digit sequence 92851 found in Pi`
`Quantum digit sequence 14824 found in Pi`
`Quantum digit sequence 13977 found in Pi`
`Quantum Number 8.577248798352025`
`Quantum digit sequence 57724 found in Pi`
`Quantum digit sequence 87983 found in Pi`
`Quantum digit sequence 52025 found in Pi`
`Quantum Number 0.699015793087663`
`Quantum digit sequence 69901 found in Pi`
`Quantum digit sequence 57930 found in Pi`
`Quantum digit sequence 87663 found in Pi`
`Quantum Number 6.524299992370489`
`Quantum digit sequence 52429 found in Pi`
`Quantum digit sequence 99923 found in Pi`
`Quantum digit sequence 70489 found in Pi`
`Quantum Number 5.860379949645227`
`Quantum digit sequence 86037 found in Pi`
`Quantum digit sequence 99496 found in Pi`
`Quantum digit sequence 45227 found in Pi`
`Quantum Number 6.33020523384451`
`Quantum digit sequence 33020 found in Pi`
`Quantum digit sequence 52338 found in Pi`
`Quantum digit sequence 4451 found in Pi`
`Quantum Number 4.562294956893263`
`Quantum digit sequence 56229 found in Pi`
`Quantum digit sequence 49568 found in Pi`
`Quantum digit sequence 93263 found in Pi`
`Quantum Number 2.0715648126955064`
`Quantum digit sequence 07156 found in Pi`
`Quantum digit sequence 48126 found in Pi`
`Quantum digit sequence 95506 found in Pi`
`Quantum Number 8.903639276722362`
`Quantum digit sequence 90363 found in Pi`
`Quantum digit sequence 92767 found in Pi`
`Quantum digit sequence 22362 found in Pi`
`Quantum digit sequences found in Pi 60 times out of 60`
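For what it's worth, a quick back-of-the-envelope check (a sketch assuming pi's digits behave like uniform random digits, which is the standard heuristic) shows that finding every group is the expected outcome, not broken Math:

    digits = 10**6                   # digits of pi searched
    group = 5                        # length of each digit group
    positions = digits - group + 1   # 5-digit windows in the expansion

    # chance a specific 5-digit group never appears, treating pi's digits
    # as i.i.d. uniform random digits (a heuristic, not a proven property)
    p_miss = (1 - 10**-group) ** positions
    print(f"P(a given group is absent) ~ {p_miss:.2e}")       # ~4.5e-05

    # chance that all 60 sampled groups are found somewhere
    p_all_found = (1 - p_miss) ** 60
    print(f"P(all 60 groups are found) ~ {p_all_found:.4f}")  # ~0.9973

In a million digits, a given 5-digit group is missing with probability around 0.00005, so a clean sweep of 60 groups happens roughly 99.7% of the time. (Note also the `253` match above: that group is only 3 digits because the float's string was shorter than 17 characters, making a hit even more likely.)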
0.2
t3_ti584e
1,647,724,969
Python
PyMacApp: Build, Package, and Code-Sign Python Projects on MacOS in just 10-lines of Python Code!
Hi All: I recently posted about a project I was working on called PyMacApp. I just released a ton of updates that bring full support for building, packaging, and code-signing in as little as 10 lines of code (2 import statements, 4 for the app, and 4 for the package; it supports function chaining, so you could even accomplish this in fewer than 10 lines)! Get started with ```pip3 install pymacapp``` GH: https://github.com/The-Nicholas-R-Barrow-Company-LLC/PyMacApp A more-detailed starter is available below.

```
# build.py
from pymacapp import App, Package
from pymacapp.helpers import get_first_application_hash, get_first_installer_hash

# Apple Account Information
# You can get rid of the input(...) functions and instead enter the strings
# directly so you do not have to enter them each time.
APPLE_DEVELOPER_ACCOUNT_EMAIL = input("Apple Developer ID Email (str): ")
APPLE_DEVELOPER_ACCOUNT_APP_SPECIFIC_PASSWORD = input("Apple Developer ID App-Specific Password (str): ")

app = App("My New App", "com.identifier")
app.setup("./app/main.py", overwrite=True)
app.build()
app.sign(get_first_application_hash())

package = Package(app, "0.0.1", "com.identifier.pkg")
package.build(get_first_installer_hash())
package.sign(get_first_installer_hash())
package.notorize(APPLE_DEVELOPER_ACCOUNT_EMAIL, APPLE_DEVELOPER_ACCOUNT_APP_SPECIFIC_PASSWORD).wait()
```
0.66
t3_ti4ztb
1,647,724,308
Python
Small Line Counter Script
I made a command line tool that lets you specify a file/directory path and will count the lines of code in that file, or in all files within a directory (recursively). I plan to add more features. I am a systems dev by trade and have worked with Python in the past, but not very much and that was a couple of years ago, so I didn't know whether to flag this as beginner or intermediate; let me know if I should change it. Please feel free to contribute. If you do read the code (it's short) I would love feedback. [https://www.github.com/carterdugan/LineCounter](https://www.github.com/carterdugan/LineCounter)
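A minimal sketch of the core idea (not the actual code from the repo; written here just to illustrate the recursive walk):

    import sys
    from pathlib import Path

    def count_lines(path: Path) -> int:
        """Count lines in a file, or in all files under a directory."""
        if path.is_file():
            try:
                return sum(1 for _ in path.open(errors="ignore"))
            except OSError:
                return 0  # unreadable file (permissions, etc.)
        return sum(count_lines(child) for child in path.iterdir())

    if __name__ == "__main__":
        for arg in sys.argv[1:]:
            print(f"{arg}: {count_lines(Path(arg))} lines")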
0.81
t3_ti4zcz
1,647,724,271
Python
Is pygame still worth it in 2022??
nan
0.85
t3_thzhqp
1,647,709,018
Python
What do you think is the most valuable Python package?
In the spirit of March Madness, we created a tournament to choose the MVP Python package. This is just meant to be fun, but it might jumpstart an interesting discussion about your choices and the kind of work you're doing. [https://deephaven.io/community/experiments/python-bracket/](https://deephaven.io/community/experiments/python-bracket/)
0.15
t3_thw2h9
1,647,699,262
Python
Build a Hash Table in Python With TDD – Real Python
nan
0.75
t3_thvrq5
1,647,698,366
Python
How to Shortlist the Best Python Development Company?
nan
0.33
t3_thvj46
1,647,697,596
Python
DeepForSpeed: A self driving car in Need For Speed Most Wanted built with python + pytorch
[video here](https://youtu.be/t0iqfM36mRc) [code here](https://github.com/edilgin/DeepForSpeed) So i built a self driving car with python in Need for Speed: Most Wanted (2005). I was really impressed when i saw nvidia build their own self driving car with just a single algorithm (a CNN), so i decided to try it myself. Basically i record training data while i'm playing the game (i played around 2 hours i think): my key presses associated with every frame are recorded. Later i process this training data and train the algorithm (which is almost the same as nvidia's). The last step is just running the algo. Important things i've used are: numpy, opencv, matplotlib and pytorch. Please take a look at the code, i tried to document everything and i would appreciate any pull requests and advice in general :)
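Not the repo's actual code, but the frame-plus-keypress recording step described above might look roughly like this sketch (`PIL.ImageGrab` and the `keyboard` package are stand-ins for whatever the project really uses):

    import numpy as np
    from PIL import ImageGrab  # screen capture
    import keyboard            # pip install keyboard; polls key state

    KEYS = ["w", "a", "s", "d"]
    frames, labels = [], []

    while len(frames) < 1000:                  # record ~1000 samples
        frame = np.array(ImageGrab.grab())     # current screen as an array
        pressed = [int(keyboard.is_pressed(k)) for k in KEYS]
        frames.append(frame)                   # training input
        labels.append(pressed)                 # key state paired with the frame

    np.savez_compressed("training_data.npz",
                        x=np.array(frames), y=np.array(labels))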
0.97
t3_thsp8c
1,647,686,990
Python
Python for beginners from Harvard CS50x. Starting April 1st. Free course, awesome teacher, explains how a computer thinks in terms of programming. 11 weeks, 3-9 hours per week! Sign up now, highly recommended.
nan
0.94
t3_ths5wg
1,647,684,560
Python
Red Mail: All you need from an email sender
Hi all, I have made a couple of posts about Red Mail in the past but I recently received some comments feeling this is underrated compared to how useful it really is. So, I hope you don't mind an update; I also just released a new version. So what is Red Mail? It's an email library that aims for simplicity without compromising features. You will find a bunch of email senders on PyPI but there is nothing quite like this. So what can it do? * [Supports sending HTML, text, attachments](https://red-mail.readthedocs.io/en/latest/tutorials/sending.html) * [Send to regular receivers, cc (carbon copy) or bcc (blind carbon copy)](https://red-mail.readthedocs.io/en/latest/tutorials/sending.html) * [Attachments from various types: paths, bytes, Pandas dataframes, etc.](https://red-mail.readthedocs.io/en/latest/tutorials/attachments.html) * [Emails with embedded images in HTML](https://red-mail.readthedocs.io/en/latest/tutorials/body_content.html#embedding-content) * [Embedded images from various types: Matplotlib plots or PIL images](https://red-mail.readthedocs.io/en/latest/tutorials/body_content.html#embedding-content) * [Emails with embedded (prettified) tables from Pandas dataframes](https://red-mail.readthedocs.io/en/latest/tutorials/body_content.html#embedded-tables) * [Templated emails using Jinja](https://red-mail.readthedocs.io/en/latest/tutorials/jinja_support.html) * [Gmail and Outlook pre-configured](https://red-mail.readthedocs.io/en/latest/tutorials/config.html) * [Logging handlers!](https://red-mail.readthedocs.io/en/latest/extensions/logging.html) * [Flask integration](https://flask-redmail.readthedocs.io/en/stable/index.html)

A minimal example for Gmail users:

    from redmail import gmail

    gmail.username = "[email protected]"
    gmail.password = "<MY PASSWORD>"

    gmail.send(
        sender="[email protected]",
        receivers=["[email protected]"],
        subject="An example email",
        html="""
            <h1>Hi,</h1>
            <p>nice to meet you.</p>
        """,
    )

More advanced features:

    from redmail import EmailSender

    email = EmailSender(host="smtp.myhost.com", port=0)
    email.send(
        sender="[email protected]",
        receivers=["[email protected]"],
        subject="An example email",
        html="""
            <h1>Hi {{ friend }},</h1>
            <p>look at this image:</p>
            {{ nice_image }}
        """,
        body_params={"friend": "Jack"},
        body_images={"nice_image": 'path/to/image.png'},
        attachments={"file.csv": pd.DataFrame({"col": [1, 2, 3]})}
    )

So it's pretty clean and does everything you could wish for from an email sender. There are alternatives for email sending but nothing quite like this. It is also well tested and documented. Resources: * Source code: [https://github.com/Miksus/red-mail](https://github.com/Miksus/red-mail) * Documentation: [https://red-mail.readthedocs.io/en/latest/](https://red-mail.readthedocs.io/en/latest/) * Releases: [https://pypi.org/project/redmail/](https://pypi.org/project/redmail/) If you need to integrate it into a Flask application: * Source code: [https://github.com/Miksus/flask-redmail](https://github.com/Miksus/flask-redmail) * Documentation: [https://flask-redmail.readthedocs.io/en/latest/](https://flask-redmail.readthedocs.io/en/latest/) * Releases: [https://pypi.org/project/Flask-Redmail/](https://pypi.org/project/Flask-Redmail/) So what has changed? The emails are now structured more robustly and are more likely to render correctly across email providers; I fixed a bug related to embedded emails and aliases and improved the documentation. If you found it useful, leave it a star on GitHub. That's the way to get visibility and it lets me know I'm building useful things in my free time. Thanks again for all the support!
0.89
t3_thrdr9
1,647,681,015
Python
I created a super simple customizable desktop clock with python
Have you ever wanted to program a simple clock for your desktop? This [Simple Desktop Clock](https://github.com/underpig1/simplest-desktop-clock) uses python's tkinter to create a desktop clock for cleaner layouts. The best part is, you can customize how it looks and where it appears on your screen really easily, and it will change colors based on your wallpaper colors. I created this program a few weeks back because I was annoyed with the tiny clock in the bottom right of my screen and wanted something like the big clock on the lock screen. I decided to share it when I heard others were encountering a similar problem. The clock functions just as a part of your wallpaper--as in, you can't drag it or click on it, and it by default stays behind your windows. Let me know if you enjoy it or find it useful! https://preview.redd.it/0thaq2fkzao81.png?width=1674&format=png&auto=webp&s=f00ab6723f9de61dc72fe1fc4adfb36fd814301d
0.88
t3_thqxd6
1,647,678,891
Python
3 Things You Might Not Know About Numbers in Python
nan
0.5
t3_thqu6i
1,647,678,482
Python
The Zen Of Python, One-Liners and Being Pythonic
nan
0.8
t3_thqfid
1,647,676,626
Python
What extension is useful when doing Python in VS code?
Hi! I'm new to using VS Code. Before this I was using Spyder and found VS Code to be more complicated (for me). Please do recommend any necessary extensions :D
0.71
t3_thpx8i
1,647,674,284
Python
Solution to Ramanujan equations
[https://todaymylearn.blogspot.com/2022/03/solution-to-ramanujan-equations.html](https://todaymylearn.blogspot.com/2022/03/solution-to-ramanujan-equations.html) I made a small program in python to solve Ramanujan equations.
0.5
t3_thplm6
1,647,672,828
Python
PEP 686: Make UTF-8 mode default
nan
0.97
t3_thnk4l
1,647,664,164
Python
Complete Guide to PyGame Setup in 14 mins!
nan
0.75
t3_thmkk7
1,647,660,554
Python
What Python GitHub repos are good examples of best practices?
I've been writing more packages and recently enjoyed looking through Prefect's GitHub. It felt like a good example of code structuring, and they implemented their CLI well. I even started to look at how they were branching and what they were putting in commit messages. I very frequently work on islands without the oversight of more senior engineers and wanted to know if there are other GitHub repos you guys recommend looking at to learn how to write good software.
0.85
t3_thmbi1
1,647,659,670
Python
Euporie: a terminal app for working with Jupyter notebooks
nan
1
t3_thlcky
1,647,656,373
Python
A Happy Success Story and Python!
Hi everyone! I just wanted to share some exciting events, as well as a good story for those maybe demotivated learning Python. Like many of you, I started learning Python (around 3-4 months ago), and quickly felt overwhelmed by just the huge mass of tutorials, libraries, and posts. However, I pushed through it, and have made some real progress! Today I launched v1 of my AI/startup business [https://finned.tech](https://finned.tech) - selling a (Python) solution for personalized marketing on digital billboards, a solution now patent-pending. In addition to this, I built a licensing solution from the ground up, again using Python, and finally a Python webserver (using FastAPI) to handle auth, license checks, and payments. What I learned from all of this (besides that it's hard to start a business) is that if you keep persevering, watching those tutorials, and building those things you think no one will ever use, then, before you know it, you'll be creating things you never thought possible before! Hopefully this post has inspired some of you to put a bit more effort into learning and building some projects, as I know many posts here did with me. (Ok, now I'm done with my story, go build some cool things and learn more!)
0.85
t3_thlata
1,647,656,210
Python
Elden Ring Open Source API
Good night tarnished guys and gals. Do we have some developers among us? I made this open-source API that contains all kinds of data scraped from Elden Ring that can be used for all sorts of student projects. So if you're new to programming/web development, loves Elden Ring, and want to build a cool app, feel free to use and abuse this API. This is an Open Source project, so feel free to contribute. All the data and media can be found in the GitHub repository. Also, since the game came out not long ago, there are still some missing data and gaps here and there. If you find any issues, feel free to open an issue or open a PR that I will gladly merge into the codebase. The API is available in both REST and GraphQL formats. Get started at [https://eldenring.fanapis.com/](https://eldenring.fanapis.com/) ​ Source code available at: [https://github.com/deliton/eldenring-api](https://github.com/deliton/eldenring-api)
0.93
t3_thl39l
1,647,655,499
Python
Saturday Daily Thread: Resource Request and Sharing! Daily Thread
Found a neat resource related to Python over the past week? Looking for a resource to explain a certain topic? Use this thread to chat about and share Python resources!
1
t3_thiqqa
1,647,648,010
Python
*UPDATED* Random numbers generator list with Mean, Median, Mode and frequency of each number
    import random
    import numpy
    from statistics import mode
    import csv
    import collections

    # create list
    list = []

    ## random numbers
    for i in range(100):
        number = random.randint(0, 10)
        # add to list
        list.append(number)

    print("Random numbers: " + str(list))
    list.sort()
    print("Numbers sorted: " + str(list))

    # Mean, median and mode of list
    mean1 = numpy.mean(list)
    median1 = numpy.median(list)
    mode1 = mode(list)
    print("The mean of the numbers is " + str(mean1))
    print("The median of the numbers is " + str(median1))
    print("The mode of the numbers is " + str(mode1))

    ## frequency method
    ## dictionary
    frequency = {}
    ### using collections.Counter for number frequency
    frequency = collections.Counter(list)
    # printing the frequency
    print(frequency)

    ## Export dictionary to CSV file
    with open('numbers.csv', 'w') as f:
        for key in frequency.keys():
            f.write("%s,%s\n" % (key, frequency[key]))
    #df.to_csv("numbers.csv", header=["Number", "Frequency"], index=False)
0.6
t3_thgvms
1,647,642,439
Python
Random numbers generator list with Mean, Median, Mode and frequency of each number
    import random
    import numpy
    from statistics import mode
    import csv

    # create list
    list = []

    ## random numbers
    for i in range(100):
        number = random.randint(0, 10)
        # add to list
        list.append(number)

    print("Random numbers: " + str(list))
    list.sort()
    print("Numbers sorted: " + str(list))

    # Mean, median and mode of list
    mean1 = numpy.mean(list)
    median1 = numpy.median(list)
    mode1 = mode(list)
    print("The mean of the numbers is " + str(mean1))
    print("The median of the numbers is " + str(median1))
    print("The mode of the numbers is " + str(mode1))

    ## frequency method
    ## dictionary
    frequency = {}
    # iterating over the list
    for item in list:
        # checking the element in dictionary
        if item in frequency:
            # incrementing the count
            frequency[item] += 1
        else:
            # initializing the count
            frequency[item] = 1
    # printing the frequency
    print(frequency)

    ## Export dictionary to CSV file
    with open('numbers.csv', 'w') as f:
        for key in frequency.keys():
            f.write("%s,%s\n" % (key, frequency[key]))
0.33
t3_thdm72
1,647,633,270
Python
I've heard that "if a class is just a constructor and one method, then it should be a function". What is your opinion on this and what are counter examples?
I watched a seminar a few years ago where someone talked about the misuses of OOP in Python, covering extensively the case where people write a class, implement the `__init__()` method and one extra method, and that's it. He provided examples and showed functional code that simplifies the solution to a few lines, and he said: > "If a class is just a constructor and one method, then it shouldn't be a class at all. It should be a function." I can't find the video anymore, but I was just wondering what reddit thinks about this statement and whether you have any counter examples, or in what cases it could be useful. I'm also interested, for the cases where it is true, in what the functional implementation would look like when I need some sort of state to persist. EDIT: Awesome discussion in the comments. Thanks for everyone's input!
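A concrete illustration of the quoted rule (a toy example, not from the seminar): the one-method class below collapses into a closure, with the captured variable playing the role of the persisted state.

    # a class that is just a constructor plus one method...
    class Greeter:
        def __init__(self, greeting):
            self.greeting = greeting

        def greet(self, name):
            return f"{self.greeting}, {name}!"

    # ...is equivalent to a closure; `greeting` is the persisted state
    def make_greeter(greeting):
        def greet(name):
            return f"{greeting}, {name}!"
        return greet

    print(Greeter("Hello").greet("world"))   # Hello, world!
    print(make_greeter("Hello")("world"))    # Hello, world!
    # functools.partial is another common substitute for such classes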
0.97
t3_th6ztt
1,647,623,696
Python
Resume Generation With Python
nan
0.33
t3_th6lvy
1,647,623,183
LanguageTechnology
HuSpaCy: Industrial-strength Hungarian NLP
nan
1
t3_udox1z
1,651,128,559
LanguageTechnology
Hey all! I'm making a pytorch transformer from scratch and I'm wondering if you have any tips.
Here's a post I have open on the pytorch forums: https://discuss.pytorch.org/t/help-simple-generate-predict-inference-function-for-attention-transformer/150251 I can't seem to figure out what's going on in my forward method and I'm not sure how well it's learning. I damn sure don't know how to generate text. If anyone can help me with this problem, I'd appreciate it.
1
t3_udgf7q
1,651,100,509
LanguageTechnology
Any annotation tool for argument mining?
Hi, I've been testing several annotation tools for my research at university. Each one has its particular advantages but none of the ones I've tried seem to be created for annotating argument relations. Inception could probably be the most flexible for my needs, yet it demands too much technical setup to install and use as an admin. And most importantly for now, it is usable but not optimal for argument mining, which requires being able to see relations between arguments that are sometimes far away from each other in the text, in an organised and intuitive way - both in-text and extracted separately from the original text. Arrows that go horizontally through several lines in a text, although they can at least show a relation across paragraphs, tend to accumulate and are very difficult to follow. I've recently seen Tiara, which has better visualisations for AM, but you can't see annotations from different annotators and calculate inter-rater agreement, which is basic unless you're just annotating for yourself... Does anyone know of an annotation app that allows for manually annotating argument units and visualising relations in different texts by more than one annotator?
0.84
t3_udeuwy
1,651,096,155
LanguageTechnology
I just released the first six modules of my free NLP course.
If you're new to NLP, these first six modules will get you up and running with the basics. We'll cover: * what makes NLP challenging and what we'll learn in the course. * various ways to preprocess our text depending on our goals including tokenization, lemmatization, part-of-speech-tagging, and more. * how to turn our text into numbers with basic bag-of-words approaches and performing document similarity and search. * how to use spaCy and scikit-learn to do all of the above. No registration needed but you can sign up for updates. Check it out at [nlpdemystified.org](https://nlpdemystified.org) and let me know what you think!
0.96
t3_ud7hrz
1,651,076,631
LanguageTechnology
BERTScore: Evaluating Text Generation with BERT (Summary)
BERTScore is an automatic evaluation metric for text generation🔥 BERTScore is found to correlate better with human judgments and provides stronger model selection performance than existing metrics like ROUGE, BLEU and its variants 🔥🔥 Watch the paper summary at https://lnkd.in/dPE3g3_p Paper: https://arxiv.org/abs/1904.09675
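For anyone who wants to try it, a minimal usage sketch with the `bert-score` package (`pip install bert-score`); the sentences here are made up:

    from bert_score import score

    candidates = ["The cat sat on the mat."]
    references = ["A cat was sitting on the mat."]

    # returns precision/recall/F1 tensors, one value per candidate-reference pair
    P, R, F1 = score(candidates, references, lang="en", verbose=False)
    print(f"BERTScore F1: {F1.mean().item():.4f}")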
1
t3_ud5pln
1,651,071,938
LanguageTechnology
How to find synonyms in text.
Hi all, I am working on a problem where the input is a piece of text (generally a text summary of 4-5 paragraphs), and the output is the synonym pairs/phrases used in this text. If anyone has worked on this problem before, do help me with any lead. The provided data could be of any domain, so a WordNet kind of approach is not possible. For example, if let's say it's bio-medical data, then synonym terms would be "doctor of cancer" and "oncologist". Any possible lead is most welcome. Currently I am experimenting by extracting all possible NNP pairs and comparing their embeddings for similarity. But this could be very exhaustive if the document is large. And getting good embeddings is also a challenge. If you guys have any better approach in mind, please do share.
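The comparison step described above might be sketched like this; `embed` is a hypothetical stand-in for whatever phrase-embedding model is actually used, and the 0.8 threshold is a tuning choice:

    import numpy as np

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def embed(phrase: str) -> np.ndarray:
        # hypothetical: swap in a real domain-tuned embedding model here
        rng = np.random.default_rng(abs(hash(phrase)) % 2**32)
        return rng.standard_normal(300)

    candidates = [("doctor of cancer", "oncologist"),
                  ("doctor of cancer", "blood sample")]
    for a, b in candidates:
        sim = cosine(embed(a), embed(b))
        if sim > 0.8:
            print(f"possible synonym pair: {a!r} / {b!r} ({sim:.2f})")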
1
t3_ud2kja
1,651,063,040
LanguageTechnology
Is there an evaluation metric for how much a sentence "makes sense"
So I'm doing a project involving fine-tuning a language model on a dataset of text and generating text that resembles text in the dataset. I was wondering if there was a metric that could evaluate how much that generated text "makes sense" as in how much does it resemble a real sentence. Thanks in advance.
0.96
t3_ucucoi
1,651,030,555
LanguageTechnology
Review my basic Sentiment Analysis web app
I'm doing a Computer Science conversion course and going to do my thesis on Sentiment Analysis. For this, I built a very simple web app to analyse public tweets, which I want to use as a tool for my thesis. I would appreciate feedback on my application. Are there any features you can think of that would be interesting to include? Here is the link to it: [https://share.streamlit.io/alexanderverheecke/sentimentanalysisapp/main/main.py](https://share.streamlit.io/alexanderverheecke/sentimentanalysisapp/main/main.py)
0.84
t3_ucdbl1
1,650,981,868
LanguageTechnology
Text Encoder output giving zero values
I have been working on an Out of Domain detection model which takes in two strings, one domain string and one out of domain string, and a list of strings that can be taken as an anchor. Based on this I have trained an encoder model that converts these strings into vectors of 100 dimensions. Each input sentence is converted to tokens and then they are converted to word vectors using GloVe vectors. After that a CNN model is trained on the vectors. Batch norm is done before pooling and then they are passed through a linear layer to get the final vectors. Below is the code for the model.

    import torch
    import torch.nn.functional as F
    from torch import nn

    class Encoder(nn.Module):
        def __init__(self):
            super(Encoder, self).__init__()
            input_size = 40
            dim = 100
            self.batch_size = BATCH_SIZE
            kernel = 2
            self.conv_x_in = nn.Conv1d(dim, 300, kernel_size=kernel)
            self.batch_norm_in = nn.BatchNorm1d(300)
            self.pool_in = nn.AvgPool1d(MAXLEN - kernel + 1)
            self.linear_in = nn.Linear(300, 100)
            self.conv_S_in = nn.Conv1d(dim, 300, kernel_size=kernel)
            self.batch_norm_S_in = nn.BatchNorm1d(300)
            self.pool_S_in = nn.AvgPool1d(MAXLEN - kernel + 1)
            self.linear_S_in = nn.Linear(300, 100)

        def encoded_x_i_in(self, x_i_in):
            x_i_in = x_i_in.transpose(1, 2)  # make batch x seq x dim => batch x dim x seq
            x = F.relu(self.conv_x_in(x_i_in))
            x = self.batch_norm_in(x)
            x = self.pool_in(x)
            x = x.squeeze(2)
            x = F.relu(self.linear_in(x))
            return x

        def encode_S_in(self, x_i_in):
            x_i_in = x_i_in.transpose(1, 2)  # make batch x seq x dim => batch x dim x seq
            x = F.relu(self.conv_S_in(x_i_in))
            x = self.batch_norm_S_in(x)
            x = self.pool_S_in(x)
            x = x.squeeze(2)
            x = F.relu(self.linear_S_in(x))
            return x

        def forward(self, x_i_in, x_j_out, S_in):
            in_emb = self.encoded_x_i_in(x_i_in)
            out_emb = self.encoded_x_i_in(x_j_out)
            overall_emb = [[self.encode_S_in(y) for y in x] for x in S_in]
            overall_emb = [[torch.sum(y, 0) for y in x] for x in overall_emb]
            overall_emb = [torch.stack(x) for x in overall_emb]
            overall_emb = torch.stack(overall_emb)
            return in_emb, out_emb, overall_emb

After training the outputs are coming out mostly zeros, which makes the resulting vectors almost unusable. Not sure what the issue is here. I can try changing the base vectors from GloVe to something else but I think the problem stems from something else. An example of the encoder output for the `in_emb` variable:

    tensor([[0.0000, 0.0413, 0.0174, 0.0000, 0.1767, 0.2171, 0.0037, 0.0000,
             0.0000, 0.0000, 0.0000, 0.0799, 0.0000, 0.0000, 0.3925, 0.0695,
             0.2034, 0.0000, 0.0422, 0.0000, 0.0000, 0.1015, 0.2971, 0.1513,
             0.1093, 0.0368, 0.0000, 0.0000, 0.0000, 0.0000, 0.0767, 0.0000,
             0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0411, 0.0000, 0.0000,
             0.1053, 0.0000, 0.0000, 0.0000, 0.0694, 0.0000, 0.0000, 0.1459,
             0.0000, 0.1779, 0.0464, 0.0333, 0.0000, 0.1581, 0.0000, 0.0032,
             0.0000, 0.0113, 0.0206, 0.0168, 0.0000, 0.0000, 0.0709, 0.0173,
             0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1660, 0.0000,
             0.0000, 0.0000, 0.0000, 0.0000, 0.1795, 0.2580, 0.0000, 0.0000,
             0.2024, 0.0416, 0.0000, 0.5906, 0.0000, 0.0350, 0.2130, 0.0000,
             0.0000, 0.2970, 0.0000, 0.0000, 0.0559, 0.0863, 0.0313, 0.0000,
             0.2553, 0.0000, 0.0858, 0.0279]])
0.67
t3_ubi4qz
1,650,883,199
LanguageTechnology
What made you interested in nlp? What's your motivation for picking it up?
I used to suck at writing English when I was a kid and my teacher used to bully me, but once I was 15, IDK how, my spelling got better (I have ADHD if that matters). I still suck to some extent, but now I have keyboard predictions to my rescue, and text to speech has been a life saver when my senses overload and I can't read, so I listen to the text instead.
1
t3_ubhx15
1,650,882,355
LanguageTechnology
Anybody working on auto speech recognition
nan
0.41
t3_uaovi2
1,650,783,365
LanguageTechnology
sparse sinkhorn transformer
nan
0.86
t3_ua28f7
1,650,708,691
LanguageTechnology
Language Data Researcher @Amazon
Hello!! I had my first call with HR today for the language data researcher role at Amazon and the recruiter mentioned there would be technical questions asked in my follow up interview. I haven’t had any luck finding information online regarding this role, does anyone have advice on how I should prepare for the technical questions or what kinds they may be asking me? Also, if anyone has some salary insight or experience with this role I would appreciate hearing about it, thank you :)
0.95
t3_u9tl3x
1,650,675,819
LanguageTechnology
Hashtag inference - Looking for guidance
I have a dataset of tweets and their corresponding hashtags. I would like to train a model/create an ML pipeline that, given a tweet's text, will provide a list of hashtags that are relevant to that piece of text. Honestly, I'm not sure how to start with this task, or even how to break it down into several ML operations (if needed). I understand the basics of deep learning and NLP, but I would appreciate your thoughts on how you would approach solving this problem. Recommendations for papers, sample projects, etc. are also welcome.
0.76
t3_u9oxgf
1,650,662,063
LanguageTechnology
Has anybody tried using spacy with Cython?
Asking this so that I can speed up my Python code using Cython (namely using static typing of variables like strings and lists)
0.91
t3_u99qj1
1,650,616,133
LanguageTechnology
Which huggingface model is the best for sentence as input and a word from that sentence as the output?
I am trying to build a pun detector and I would appreciate it if you could help me understand what would the best huggingface model to fine-tune be for this type of task: Example input 1: `If there's one person you don't want to interrupt in the middle of a sentence it's a judge.` Example output 1: `sentence` Example input 2: `A good baker will rise to the occasion, it's the yeast he can do.` Example output 2: `yeast`
0.94
t3_u91wzh
1,650,587,925
LanguageTechnology
mGPT model release - multilingual generative transformer for 61 languages
Hi everyone. Today we released the mGPT model: multilingual generative pre-trained transformer. The checkpoints are available on the Huggingface [model page](https://huggingface.co/sberbank-ai/mGPT). Example usage is at the Github repo [https://github.com/ai-forever/mgpt](https://github.com/ai-forever/mgpt)

* The model has 1.3 billion parameters
* The context length is 512 tokens.

The model can generate sequences after the input prompt and can be used for fine-tuning or for zero- and few-shot learning:

    import transformers
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    model_name = "sberbank-ai/mGPT"
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)
    model.cuda()
    model.eval()

    texts = [
        "My favourite holiday is ",
        "Իմ սիրելի տոնն է ",
        "Моє улюблене свято ",
        "mi fiesta favorita es ",
        "मेरी पसंदीदा छुट्टी है",
        "我最喜欢的节日是",
        "Min favorithelg är "
    ]

    transformers.set_seed(1337)
    for text in texts:
        input_ids = tokenizer.encode(text, return_tensors="pt").cuda()
        out = model.generate(
            input_ids,
            min_length=100,
            max_length=100,
            eos_token_id=5,
            pad_token=1,
            do_sample=True,
            top_k=0,
            top_p=0.9,
            no_repeat_ngram_size=4)
        generated_text = list(map(tokenizer.decode, out))[0]

```
My favourite holiday is �Thanksgiving� so, I wanted to share the recipe I made from a recipe I found on the fool, Flockish Street Bakery. The banana bread is delicious and a good way to treat those stained teeth. Everyone loves a chocolate treat, so I thought I would share it with you, hopefully others will like it too. This bread is SO good!!
---
Իմ սիրելի տոնն է շատ լավ եղե՞լ. Քիչ ու պակաս հաղթանակ հարստացրին
---
Моє улюблене свято є Різдво
---
mi fiesta favorita es @marhuval__ La gente queremos fique muy feliz, estoy pensando en celebrarlo el 2 de abril
---
मेरी पसंदीदा छुट्टी है सीधी रात, इंटरनेट से जुड़े बहुत सारे विकल्प हैं और यदि आप वापस सीधे किसी घर बसों में घुसते हैं, तो आपको स्वागत है बैठकें ह
---
我最喜欢的节日是-“保卫国”日!” 澳门论坛 澳门论壇<< 上一篇:点石成金!武磊 下一篇:你还在爱得浑身发抖吗?但婴儿在妈妈身上~~
---
Min favorithelg är ute, og din blog er mødested for så mange som muligt af dem i øjeblikket.
```

Full language list: *Afrikaans, Arabic, Armenian, Azeri, Bashkir, Basque, Belarusian, Bengali, Bulgarian, Burmese, Buryat, Chinese, Chuvash, Danish, Dutch, English, Finnish, French, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kalmyk, Kazakh, Korean, Kyrgyz, Latvian, Lithuanian, Malay, Malayalam, Marathi, Moldovan, Mongolian, Ossetian, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Swahili, Swedish, Tadjik, Tamil, Tatar, Telugu, Thai, Turkish, Turkmen, Tuvan, Ukrainian, Urdu, Uzbek, Vietnamese, Yakut and Yoruba.*
0.76
t3_u8zixf
1,650,580,710
LanguageTechnology
Where did Intent / Slot schema even come from?
hi all, i'm writing some internal documentation at my company about labelset design best practices for NLU models. i'd like this documentation to be approachable by new hires / those maybe broadly familiar with classification tasks, but not NLU. it occurred to me that I don't even know where the idea for intent / slot schemata originated. it feels like trying to find a citation for the invention of the wheel. any pointers here? what's the earliest paper we can find that references intent / slot ontologies or classification?
0.76
t3_u8xytg
1,650,576,193
LanguageTechnology
Leetcode for NLP positions?
Location: Germany, Baden-Württemberg I recently finished my dual-major MA in English and Computational Linguistics in Croatia, got my Goethe B2 certificate, and built a GitHub portfolio with some NLP/ML projects. I need to tweak my CV a bit, and I plan to start applying for positions in Germany within a week or two. I was wondering if I should focus my prep time for the interviews on Leetcode style problems or actually relevant coding exercises with Spacy/NLTK/TensorFlow etc. What are your experiences? Any additional tips are welcome, of course, especially if someone wants to provide feedback on my portfolio, and/or my CV when I'm done. Here's the GitHub link: [https://github.com/SkarletXx](https://github.com/SkarletXx) Note: if relevant, I already live in Germany.
0.8
t3_u8t1t2
1,650,562,607
LanguageTechnology
Building Dense Passage Retrievers(DPR)
Hi, I made a video explaining the ideas behind building a **Dense Passage Retriever(DPR)**. Whenever we talk about retrievers, we mostly refer to the DPR formulation which appeared [in this paper](https://arxiv.org/pdf/2004.04906.pdf). A lot of publicly available implementations also use this formulation. In a previous video, we discussed how to use the DPR End-to-End QA system which uses DPR with a QA model. **In this video, we solely focus on retrievers and the ideas behind building them.** The implementation is quite similar to retrievers pre-trained with [Inverse Close Task](https://www.youtube.com/watch?v=JUJD6So-2Lo). This video is part 8 of a 9 video series on Open-domain question answering using Dense retrievers. Thanks for the support and I will appreciate any feedback. [https://www.youtube.com/watch?v=w61p0HLo7gc](https://www.youtube.com/watch?v=w61p0HLo7gc)
1
t3_u8nudu
1,650,548,215
LanguageTechnology
Amazon releases 51-language dataset for language understanding
nan
1
t3_u8n2qn
1,650,545,944
LanguageTechnology
Cross Validation using Simple Transformers for NER
While implementing the code below for an IOB dataset with columns of word, labels and sentence_id for NER (retrieved from [https://www.philschmid.de/k-fold-as-cross-validation-with-a-bert-text-classification-example](https://www.philschmid.de/k-fold-as-cross-validation-with-a-bert-text-classification-example)):

    # prepare cross validation
    n = 5
    kf = KFold(n_splits=n, random_state=42, shuffle=True)

    results = []

    for train_index, val_index in kf.split(train_data):
        # splitting Dataframe (dataset not included)
        train_df = train_data.iloc[train_index]
        val_df = train_data.iloc[val_index]
        # Defining Model
        model = ClassificationModel('bert', 'bert-base-uncased')
        # train the model
        model.train_model(train_df)
        # validate the model
        result, model_outputs, wrong_predictions = model.eval_model(val_df, f1=f1_score)
        print(result['f1'])
        # append model score
        results.append(result['f1'])

I received this error message: "ValueError: You appear to be using a legacy multi-label data representation. Sequence of sequences are no longer supported; use a binary array or sparse matrix instead - the MultiLabelBinarizer transformer can convert to this format." Any help is very much appreciated!
1
t3_u8jdku
1,650,532,473
LanguageTechnology
Why does it say "Import "transformers" could not be resolved" (Pylance) when I try to import this specific package
nan
0.73
t3_u8islf
1,650,529,940
LanguageTechnology
NLP techniques related to distinguishing scenes in a story
Say I have a story or novel with multiple scenes. Are there any NLP work/techniques that allow me to know when the author switches from one scene to another? Searching things like "context", "scene" in google does not yield good results. Assume that I do not have a list of characters' names to begin with and there may be nameless NPCs.
0.81
t3_u8ajqf
1,650,500,354
LanguageTechnology
Using subwords for POS predictions
Hi everyone, I was given a problem: build a classifier to predict POS using an embedding layer that encodes every word as the sum of its encoding, its 3-character prefix and its 3-character suffix. My question is what happens when I take a word like "is" which has no prefix/suffix of size 3? Ignore the subwords? Treat them as special tokens?
0.8
t3_u8410w
1,650,481,929
LanguageTechnology
txtai 4.4 released - explainable semantic search, build your own query language
nan
0.93
t3_u80160
1,650,471,065
LanguageTechnology
Most Suitable library for large data NLP sentiment analysis in Python?
Hi All, I'm trying to do a sentiment analysis of some stocks in Farsi. I am using the googletrans library to translate to English and then running a sentiment analysis using NLTK/BERT. BERT seems to be more accurate but it doesn't seem suitable for big datasets. I have 16gb of RAM but my PC still cannot process the data. I also admit I'm a rookie with it (and also pytorch/tensorflow). I've also tried NLTK but it doesn't seem as accurate. I have watched many YouTube tutorials, but I'm still not sure what to do - I could do with some personalised advice. **The project**: Extract Data from Telegram/Twitter -> Clean in Pandas -> Analyse sentiment using NLTK/BERT Thanks!
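A compressed sketch of the translate-then-score path described above, using googletrans 3.x's synchronous API and NLTK's VADER (treat it as a starting point, not a verdict on NLTK vs. BERT accuracy):

    import nltk
    from googletrans import Translator
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    translator = Translator()
    sia = SentimentIntensityAnalyzer()

    def farsi_sentiment(text: str) -> dict:
        english = translator.translate(text, src="fa", dest="en").text
        return sia.polarity_scores(english)  # neg/neu/pos/compound scores

    # toy example; batch the translations for large datasets
    print(farsi_sentiment("این سهام عالی است"))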
1
t3_u7wc5a
1,650,460,728
LanguageTechnology
what are some practical applications of language technology one would like to see in use?
nan
1
t3_u7uhno
1,650,454,751
LanguageTechnology
Unable to get any TF-IDF values when analyzing Twitter data using R
I am analyzing (using Tidytext) about 13,000 tweets that I downloaded using the AcademicTwitteR package. As far as I can tell, the problem is that when transforming the data to the Tidytext format, it for some reason assumes I only have 2 documents. As I understand it, each Tweet should be its own document. Can someone tell me what is wrong?

    tweet_words_total <- tweets_total %>%
      select(text, id) %>%
      unnest_tokens(word, text)

    my_stop_words <- stop_words %>%
      select(-lexicon) %>%
      bind_rows(data.frame(word = c("https", "t.co", "rt", "amp", "der", "20")))

    tweet_clean_total <- tweet_words_total %>%
      filter(!(word %in% stopwords(source = "snowball"))) %>%
      anti_join(my_stop_words)

    total_corpus <- Corpus(VectorSource(tweet_clean_total))

    total_tdm <- TermDocumentMatrix(total_corpus,
                                    control = list(weighting = weightTfIdf,
                                                   removePunctuation = T,
                                                   removeNumbers = T,
                                                   stemming = T,
                                                   stripWhitespace = T))

    > total_tdm
    <<TermDocumentMatrix (terms: 16860, documents: 2)>>
    Non-/sparse entries: 16860/16860
    Sparsity           : 50%
    Maximal term length: 36
    Weighting          : term frequency - inverse document frequency (normalized) (tf-idf)
1
t3_u7tvzb
1,650,452,552
LanguageTechnology
Text Entailment approach for Zero-shot Text Classification (Research Paper Walkthrough)
nan
0.81
t3_u7ty0s
1,650,452,743
LanguageTechnology
Hey people, I'm aspiring to publish a conference paper in NLP, specifically in NLG, but finding it difficult to frame a problem statement. Anybody like to guide and collab!? Interested ones kindly DM for working together, and kindly advise how to frame the problem statement
nan
1
t3_u7pi0w
1,650,433,715
LanguageTechnology
NLP on Bugzilla issue tracking system
Hello guys, I haven't had any experience in artificial intelligence yet, but I think that Natural Language Processing could be helpful here. We have an issue tracking system which is based on Bugzilla. We have about 300,000 entries in this system, where the entries are structured like this:

    *****************************
    -- Entered from Customer_XY on 8/4/2021 11:12:39 AM
    Server:
    Database:
    Version:
    Priority:
    Owner:

    Observed Behaviour:
    Application is blocking since 20 min, log file entries are as follows ...

    -- From COMPANY/AZR on 9/4/2021 11:00:29 AM
    Logfile shows that stored procedure xy has been the main cause of the
    blockings, created new index xy. Performance should be fine again.

    => Maybe again answer from Customer ...
    *****************************

So this issue system also serves us as a knowledge base where many issues have been solved before. It would now be interesting to have a tool or plug-in that could be integrated into our issue tracking system, which could already give possible solution suggestions based on the customer's entry. This could be used as help for new staff.

I would like to learn more about AI technologies. My main concerns now would be these:

- What technologies could be helpful here?
- The 300,000 entries could serve as training data: all entries written by a "CUSTOMER" are problems, while everything starting with COMPANY/xy contains the solutions.

I would like to hear what technologies could be used here. This could help new staff to answer customer problems by having the NLP already suggest what possible solutions to problems could be.

Thanks a lot
1
t3_u7oqgi
1,650,430,695
LanguageTechnology
Any upcoming biomedical NLP conferences?
I recently had a paper rejected from NAACL 2022. I have revised it according to the reviewers' recommendations (which weren't all that helpful anyway), and am looking for another suitable conference. The paper's broad topic is knowledge discovery using biomedical text corpora. So far I have found the International Conference on Biomedical Natural Language Processing 2023 - I can't find much info about this conference, and also their website seems to have a lot of non-NLP submissions under "selected papers." Seems kind of sketch tbh. It seems like I missed the paper submission deadline for BioNLP 2022 at ACL. Do you guys know of any upcoming biomedical domain-specific NLP conferences that are still open for submission? Any recommendations would be appreciated. Thanks.
0.9
t3_u7jfmi
1,650,413,496
LanguageTechnology
Comprehensive Certification for Conversational Design - from UX and copywriting to NLP and AI
Hello, is there any chance someone could give me some advice for a conversational design certification? I would like a comprehensive course that would cover UX, copywriting and AI. I already work with it but the company I work for fails to help me grow my tool box and I am immigrating to Canada, so a NA certificate would help, since I have lost an opportunity for the lack of formal education on the field. I’m a differentiated professional with strong sales and customer service background and I do have formal education on data science but only have worked with NLP and honestly, thanks to YouTube, DataCamp and so on. I’m looking into [conversational design institute](https://www.conversationdesigninstitute.com/courses) and since I’m not making dollars yet, the investment must be very well calculated! Thanks in advance
1
t3_u7g30z
1,650,403,903
LanguageTechnology
Decoder Pre-trained and Encoder Fine-Tuned?
Hello, I am currently writing my master's thesis and diving into the world of PLMs and NLP models. My goal is to design an architecture for some kind of PLM that asks questions to a human and uses the answers to gain knowledge. Basically it should learn from dialogues with humans. I thought about using a pretrained Transformer model and subsequently using the question-answer pairs as labeled data for fine-tuning. That way a generic model should adapt more and more to the knowledge within a certain domain. At the moment I am considering whether I should go for an encoder or decoder model. I think they are not too different, but the encoder is better at NLU and the decoder is better at NLG and few-shot learning, right? Because I think these models aren't so different, I got the following idea: is it possible to use a unidirectional pre-trained decoder model for having the QA dialogue with humans, while fine-tuning the model on MLM (bidirectional)? It is because I think a decoder model would be better at the communication, but for the fine-tuning part (to adapt new information into the model, without forgetting the pretrained knowledge and not overfitting new knowledge) MLM seems like the better solution. Do you have any thoughts on this? Thank you!
1
t3_u7fmb8
1,650,402,641
LanguageTechnology
I made a multi class toxic text detection model, and in order to test it out in a practical scenario I am looking for an open source social media app template or even a comment section depicting scenario component including backends, Any code stack is fine (tho I would love a MERN stack one), Thanks
nan
0.5
t3_u7ccqz
1,650,393,949
LanguageTechnology
Abstractive summarization TLDR Slackbot
We’ve recently launched a new Slackbot that leverages abstractive summarization to TLDR webpages, text, and more. We're currently working on adding support for summarizing YouTube videos. You share whatever you need summarizing with the Slackbot and you get TLDRs back. [Check it out](https://www.seenopsi.com)! We're currently looking for more feedback and ideas. We also have a [beta testing option](https://www.seenopsi.com/beta) if you want to give it a go.
1
t3_u7bskb
1,650,392,491
LanguageTechnology
AI thesaurus
Has anyone tried training a neural machine translation engine, on a massive corpus of famous writers, with the source and target language being the same language (a reflexive NMT engine), so that it serves as a modern thesaurus: not only word for word, but suggesting creative, original, impressive rhetorical phrases and even entire sentence rewordings? I think it's a good idea with real potential. Thanks very much
0.81
t3_u7au2j
1,650,390,048
LanguageTechnology
Can machine learning NLP be applied to ancient text/languages?
I'm new to machine learning/NLP. Is it possible to apply machine learning to predict constructs/classes/categories in an ancient Hebrew or Sumerian corpus, for example? What is the best Python module or platform to use for this? I took a look at spaCy and watched a video of Prodigy used on English text; can those be used? Does it have to be in a dataset format? This is an ancient Hebrew dataset... word tokens can have up to 6 parts. The morphology breaks it all down:

    Ref,Eng,Heb6,Heb5,Heb4,Heb3,Heb2,Heb1,Highlight,Morphology,Strongs
    Gen.1.1-01,in the head,,,,,רֵאשׁית,בְּ,p,HR/Ncfsa,H9003=ב=in/H7225=רֵאשִׁית=first_§1_beginning
    Gen.1.1-02,he has cut-out,,,,,,בָּרָא,,HVqp3ms,H1254a=בָּרָא=to create
    Gen.1.1-03,elohim,,,,,,אֱלֹהים,,HNcmpa,H0430=אֱלֹהִים=God_§[email protected]
    Gen.1.1-04,אֵת-,,,,,,אֵת,b,HTo,H0853=אֵת=obj.
    Gen.1.1-05,the Heavenly-pair,,,,,שָּׁמַיִם,הָ,,HTd/Ncmpa,H9009=ה=the/H8064=שָׁמַיִם=heaven
    Gen.1.1-06,אֵת-and,,,,,אֵת,וְ,b,HC/To,H9002=ו=and/H0853=אֵת=obj.
    Gen.1.1-07,,,,,׃,אָרֶץ,הָ,,HTd/Ncbsa,H9009=ה=the/H0776=אֶ֫רֶץ=land_§3_planet/H9016=׃=verseEnd
1
t3_u76fyw
1,650,378,445
LanguageTechnology
SparseServer.UI : A UI to test performance of Sparse Transformers
You can now load multiple transformers (each model has a unique sparsification recipe) on top of the DeepSparse server behind Streamlit, and it's open-source. This was battle-tested on a virtual machine with 16GB of RAM and only a 4-core CPU. These compute requirements are enough to load up to 19 sparse BERT models in memory and compare their performance on question answering (P.S. they are really fast on just CPUs). 💻code: [**https://github.com/neuralmagic/deepsparse/tree/main/examples/sparseserver-ui**](https://github.com/neuralmagic/deepsparse/tree/main/examples/sparseserver-ui)
1
t3_u74rel
1,650,373,685
LanguageTechnology
Using spaCy to find words describing the environment
Hey guys, I am working on a program that produces illustrations from pages in story books, and I need a way to find words that describe the environment (e.g. mountains, river, ceiling, floor, table, road etc.). Is there a way to do this in spaCy? Also, any other suggestions for words I could find and use to generate illustrations, and how I could find them with spaCy, would be appreciated.
1
t3_u737fp
1,650,368,791
LanguageTechnology
NeuroX toolkit for interpreting Deep NLP models
We have developed a toolkit to facilitate the interpretation of transformer models of NLP https://neurox.qcri.org/ The tool is well documented and it provides a number of basic functionalities that take away the pain of initial work, e.g. activation extraction, combining subwords into words, visualizing neurons, and running common interpretation methods. I would love to hear your feedback.
0.85
t3_u71fkb
1,650,362,152
LanguageTechnology
Why is natural language so hard to process?
There are theories that the ambiguity from polysemy and homonymy (many to many matches between terms and meanings/concepts) is a result of optimizing language for communication of ideas between humans. See for example: [https://medium.com/ontologik/why-ambiguity-is-necessary-and-why-natural-language-is-not-learnable-79f0e719ac78](https://medium.com/ontologik/why-ambiguity-is-necessary-and-why-natural-language-is-not-learnable-79f0e719ac78) But frankly I am not at all convinced. For example, while this explains why one word can have many meanings, how does it explain why a single meaning can be expressed in multiple ways? And then there is all of the seemingly unnecessary complexity in natural language grammars. From my experience, it seems that the real reason is that there are at least two fundamental roles of natural languages. One role is to convey meaning, but another equally important role is to manipulate the thinking of the listener into some state desired by the speaker. The second role appears to have resulted in aspects of natural languages that actually obscure communication of ideas through ambiguity and complexity. This can be useful in poetry or motivational speeches, but when the aim is to transfer knowledge as accurately as possible, e.g. in a classroom or an academic conference, the second role gets in the way. Is anyone here familiar with such a theory and any work that has been done to prove/disprove it?
0.9
t3_u6y3az
1,650,348,070
LanguageTechnology
Game Theory and NLP
Anyone know of any work using game theory paradigms in NLP? I've been finding some fun ideas with alignment but nothing beyond that. Would appreciate if anyone can suggest some rabbit holes.
1
t3_u6wzug
1,650,343,964
LanguageTechnology
How to implement the theory of copy movement in Prolog?
Using this input:

    parse(Parse, [what,did,thomas,eat], [])

I want to produce this output:

    sbarq(
        whnp( wp(what) ),
        sq(
            vbd(did),
            np(nnp(thomas)),
            vp(
                vb(eat),
                whnp( wp(what) )   % this is not in the input, I need to copy the entire whnp here
            )
        )
    )

with this code:

    parse(Tree) --> sbarq(Tree).

    % rules
    sbarq(sbarq(WHNP, SQ)) --> whnp(WHNP), sq(SQ).
    whnp(whnp(WP)) --> wp(WP).
    sq(sq(VBD, NP, VP)) --> vbd(VBD), np(NP), vp(VP).
    np(np(NNP)) --> nnp(NNP).
    vp(vp(VB)) --> vb(VB).

    % lexicon
    wp(wp(what)) --> [what].
    vbd(vbd(did)) --> [did].
    nnp(nnp(thomas)) --> [thomas].
    vb(vb(eat)) --> [eat].

How can I change my code to copy the whnp into the vp?
1
t3_u6whtp
1,650,342,185
LanguageTechnology
At what level of text aggregation should I conduct topic modeling?
I have a dataset of ~50k documents that are each several hundred sentences long, as well as various metadata about the documents. I have conducted several correlated topic models of these data in R using the stm package. The problem is that the topics are hard to interpret because even high-proportion documents only contain a small amount of text actually devoted to a particular topic. It's also not clear which parts of the document are about one topic vs. another (especially when the topics are related). For this reason, I'm considering breaking up documents and modeling them at a smaller level of aggregation (sentences; document-folds), then getting document-topic proportions by summing these units within-document. Is this a reasonable strategy? Is there any work on topic modeling at different levels of text aggregation?
1
t3_u6v814
1,650,338,028
LanguageTechnology
Can anyone help me build a model wrapper around a toxic comments classification model?
nan
0.5
t3_u6tcyy
1,650,332,344
LanguageTechnology
Reddit IRL - a corpus of relatable Reddit humour
Hello all, We have compiled a dataset of /r/meirl and /r/me_irl posts and comments for your viewing pleasure. You can use it to search for reposts, analyze topics, or even try understand Redditors' sense of humour. You can find the data [here](https://socialgrep.com/datasets/the-reddit-irl-dataset?utm_source=reddit&utm_medium=link&utm_campaign=theredditirldataset). (Or [here](https://huggingface.co/datasets/SocialGrep/the-reddit-irl-dataset), if you prefer using Huggingface Datasets). Hope you enjoy.
1
t3_u6nv4k
1,650,316,821
LanguageTechnology
UKRUWAR22: A Collection of Ukraine-Russia War Related Tweets Dataset
It contains 55186 unique tweets in 57 different languages loosely coupled to the Ukraine-Russia war. The details about the date and time of the tweet, language used, attachments URL, number of likes, retweets and replies, and hashtags used for each tweet are present in the dataset. The data can be used for sentiment analysis and other NLP-related works. [https://ieee-dataport.org/documents/ukruwar22-collection-ukraine-russia-war-related-tweets](https://ieee-dataport.org/documents/ukruwar22-collection-ukraine-russia-war-related-tweets)
0.92
t3_u6k8vs
1,650,307,307
LanguageTechnology
Best models for paraphrase generation?
I'm currently working with the fine-tuned `PEGASUS` model provided at `tuner007/pegasus_paraphrase`, but was wondering if there are any good paraphrase models that allow longer than `max_input_len = 60`, as the articles I have are more akin to paragraphs, sometimes more than a few sentences long. I'm not sure exactly where to go digging around for paraphrase generation methods; most of what I search seems to redirect me to text summarization models, which I always thought of as different (intentionally trying to shrink the sequence length, versus simply saying the same thing with different words). Thanks for any help you can provide!
0.83
t3_u6k08v
1,650,306,677
LanguageTechnology
Word-Meaning Dictionary Dataset
Hey all! So I intend to make an application that, very naively speaking, outputs synonyms to a given word regardless of context (like if word1 is "bank", the model should output both "money" and "river", and the order does not matter). For this, I intend to use a Doc2Vec type of classifier, where the meanings of each word can serve as a document, and then similar words can easily be returned using a cosine similarity function. I chose this over a classic Word2Vec as this will be able to predict uncommon words (which blimey the English language has a lot of) which would otherwise be processed as <UNK> tokens. To this end, I am searching for a suitable dataset. Any ideas?
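A minimal gensim sketch of that design, with each headword's glosses as one tagged document (the two-entry corpus here is obviously made up):

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # each headword's dictionary gloss(es) become one document
    glosses = {
        "bank": "institution for deposits and loans; sloping land beside a river",
        "river": "large natural stream of water flowing toward the sea",
    }
    corpus = [TaggedDocument(words=gloss.split(), tags=[word])
              for word, gloss in glosses.items()]

    model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=100)

    # nearest headwords to "bank" by gloss vector; with a real dictionary
    # both the "money" and "river" senses should surface
    print(model.dv.most_similar("bank"))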
1
t3_u6hv49
1,650,301,055
LanguageTechnology
Where do I begin with gpt-2-simple?
I want to make a chatbot that acts like a pre-existing character from a game or an anime, and was considering using gpt-2, as I have recently found out about it. Problem is, I don't really understand what I'm reading, and when I try to do things like install packages, I just get tons of errors that I don't understand. Where should I start if my end goal is the character based chat bot?
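gpt-2-simple wraps most of the process in a handful of calls; a minimal sketch following its README, where "dialogue.txt" is a hypothetical text file of your character's lines:

    import gpt_2_simple as gpt2

    # Fetch the smallest base model once (~500 MB).
    gpt2.download_gpt2(model_name="124M")

    # Fine-tune on the character's dialogue, then sample from the result.
    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess, "dialogue.txt", model_name="124M", steps=500)
    gpt2.generate(sess, prefix="Hello!", length=100)

If local installs keep erroring out, the library's Colab notebook route sidesteps environment setup entirely; for an actual back-and-forth chatbot you would wrap `generate` with a prompt containing the conversation so far.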
1
t3_u6gx9e
1,650,298,537
LanguageTechnology
Translate natural language queries to vector search SQL
nan
1
t3_u69tf9
1,650,277,289
LanguageTechnology
POS tagger question: should I keep embedding weights at 0 for words excluded from the Word2Vec training model, or should I set min_count to 1 for my training model?
I'm training a POS tagger (for multiple languages that may not have a word-vector dataset) and intend to include an embedding layer with pre-trained weights produced by a model I trained myself via Word2Vec on the training set. I assume that the number of rows in the embedding-weights array needs to match the number of unique words in the vectorised token dictionary (i.e., I have 15000 unique words + a padding term in the term-to-index dictionary --> 15001 rows in the embedding-weights array). When training the Word2Vec model with min_count at 5, many words are excluded. Thus, if I were to apply all available vectors to the embedding-weights array, many rows would be left as all zeros because those words were never trained in the Word2Vec model. Is this fine to use in the embedding layer? Or should I set min_count = 1? Or should I not do this at all?
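A common middle ground is to keep min_count > 1 but initialise the missing rows with small random (trainable) values instead of zeros, so identical all-zero rows don't collapse every rare word onto one point. A minimal runnable sketch of that matrix-building step (the toy corpus and dictionary are hypothetical):

    import numpy as np
    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "rare", "word"]]
    w2v = Word2Vec(sentences, vector_size=8, min_count=2)
    term_to_index = {"<pad>": 0, "the": 1, "cat": 2, "dog": 3, "sat": 4, "rare": 5}

    rng = np.random.default_rng(0)
    weights = np.zeros((len(term_to_index), w2v.vector_size), dtype=np.float32)
    for term, idx in term_to_index.items():
        if term in w2v.wv:
            weights[idx] = w2v.wv[term]       # pre-trained vector
        elif idx != 0:
            # Dropped by min_count: small random init, left trainable.
            weights[idx] = rng.normal(0.0, 0.1, w2v.vector_size)
    # Row 0 (padding) stays all-zero and is typically masked anyway.

Pass `weights` to your embedding layer and leave it trainable, so the randomly initialised rows can still learn from the tagging signal.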
0.81
t3_u6978p
1,650,274,812
LanguageTechnology
Google AI’s Latest Research on Language Model Proposes Two Different Sequence-To-Sequence Approaches Toward Zero-Shot Transfer For Dialogue Modeling
Conversational agents are nothing but computer systems intended to converse with humans. These agents have a wide range of applications, including booking flights, finding restaurants, playing music, and telling jokes. They can analyze and reply in ordinary natural language. However, adding this functionality can be cumbersome, as each new assignment necessitates new data collection and retraining of the conversational agent's models. Most task-oriented dialogue (TOD) models are trained on a particular task-specific ontology, which poses a significant problem. An ontology is just a catalog of conceivable user intentions, the activities that the conversational agent must complete, and the attribute fields available to retrieve from any given interaction. A tight ontology can limit the model's ability to generalize to new tasks or contexts. Consider the case where an agent already knows how to buy train tickets; adding the ability to order airline tickets would necessitate training on new data. In an ideal world, the agent would be able to apply existing knowledge from one ontology to new ones. [Continue reading our Bite on this research](https://www.marktechpost.com/2022/04/17/google-ais-latest-research-on-language-model-proposes-two-different-sequence-to-sequence-approaches-toward-zero-shot-transfer-for-dialogue-modeling/) Paper 1: [https://arxiv.org/pdf/2201.08904.pdf](https://arxiv.org/pdf/2201.08904.pdf) Paper 2: [https://arxiv.org/pdf/2204.04327.pdf](https://arxiv.org/pdf/2204.04327.pdf) Google Blog: [https://ai.googleblog.com/2022/04/simple-and-effective-zero-shot-task.html](https://ai.googleblog.com/2022/04/simple-and-effective-zero-shot-task.html)
0.75
t3_u634n2
1,650,250,816
LanguageTechnology
What is the easiest way to deploy an NLP model?
I created a model and have saved the .pb file and the variables. From here on, how do I deploy my model on a server? Let's say I want to send inputs from a front end and get the model's predicted outputs back: how do I set this up? Thank you
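One minimal option is to wrap the SavedModel in a small HTTP service; a sketch with Flask (the path and input format are assumptions, and inputs must be preprocessed exactly as at training time). TensorFlow Serving is the heavier-duty alternative once this works:

    import numpy as np
    import tensorflow as tf
    from flask import Flask, jsonify, request

    # "saved_model_dir" is the folder holding your .pb file and variables/.
    model = tf.keras.models.load_model("saved_model_dir")
    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects JSON like {"inputs": [[...], ...]}: already tokenised
        # and padded the same way as during training.
        inputs = np.array(request.json["inputs"])
        preds = model.predict(inputs)
        return jsonify({"predictions": preds.tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

The front end then POSTs JSON to /predict and reads the predictions out of the response.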
0.93
t3_u5iisu
1,650,183,127
LanguageTechnology
Questions about ACL Rolling Review
A few questions about ACL ARR:

- If you request to reassign a reviewer, would the editor aim to reassign all three reviewers, or only that particular reviewer? Assume you have given a valid reason for the reassignment and the editor is convinced.

- If you request to reassign a reviewer, can the new reviewer see the previous reviews/scores before submitting their own review? Or do they get access to the previous reviews only after submitting their own?

I already know (have heard) that in many cases reviewers are not available, and it becomes inevitable to get an entirely new set of reviews. But my questions are about the case where reviewer availability is not an issue. Just trying to find out how things are managed.
0.6
t3_u59y9d
1,650,151,400
LanguageTechnology
Evaluation metric for entity linking?
I am writing a system to perform entity linking and coreference resolution. I know how I can score the coreference chains, using the CoNLL score, but how should I score the entity linking performance?
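In the entity-linking literature the usual headline numbers are micro-averaged precision/recall/F1 over predicted (mention, entity) pairs against the gold pairs. A minimal sketch of that computation (the span/ID representation is an assumption):

    def linking_prf(gold, pred):
        """Micro P/R/F1 over (mention_span, entity_id) pairs,
        e.g. ((start, end), "Q42")."""
        tp = len(gold & pred)
        p = tp / len(pred) if pred else 0.0
        r = tp / len(gold) if gold else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f1

    gold = {((0, 5), "Q42"), ((10, 14), "Q1")}
    pred = {((0, 5), "Q42"), ((10, 14), "Q5")}
    print(linking_prf(gold, pred))   # (0.5, 0.5, 0.5)

The GERBIL benchmarking framework is the standard tool if you want numbers comparable across datasets and systems.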
0.72
t3_u54844
1,650,134,434
LanguageTechnology
Tips for posting code for review
Hello everyone, I have been working on a project for my master's thesis that looks to classify nuclear power plant outage reports based on severity. It's taken me quite some time to figure out and learn BERT. I do not have much support in NLP work and have mostly relied on feedback from this community up to this point. With that said, what is the best way to post code and have it looked at, to make sure I am doing things correctly, as expected? I do not intend to just throw code up and expect feedback. I will be sure to clean things up and document and comment as clearly and concisely as possible. I just need tips on where I can appropriately post the code and how to show the input data in a way that streamlines feedback. Thanks for the help everyone!
0.76
t3_u53gfw
1,650,132,277
LanguageTechnology
Funds to spend on continued education / training
My employer is sponsoring ~$1k of support for any continued training in text analysis / NLP / Python programming that I choose, to be used on any combination of short-courses, books, online subscription services, etc. This money will disappear if unspent. Beyond the free options available, has anyone found success or heard positive things about other specific paid resources? All suggestions are welcome, and perhaps this may benefit others in a similar situation as myself (bonus: resources that teach/use PyTorch over TF are a plus). Many thanks in advance.
0.86
t3_u52j9p
1,650,129,682
LanguageTechnology
Text Segmentation Using Neural Network and Sentence Transformer
Hi everyone, I am a CS student and a beginner in NLP. I am trying to do text segmentation, but with video transcripts. Basically, the model divides the text into segments based on topics. I am encoding all the sentences in a video with SentenceBERT, so v1 = [s1, s2, ..., sn] where si represents a sentence. I am using SentenceTransformer from the "sentence_transformers" library to get sentence embeddings directly, feeding those embeddings to a transformer model and then a feedforward layer to predict a binary output (0 if the sentence doesn't start a new segment, 1 if it does). The input looks like (batch, sentences, embedding_dim): if I have 4 videos in a batch and the longest video in the batch has 200 sentences, then it is (4, 200, 384). I have been following the paper "Transformer over Pre-trained Transformer for Neural Text Segmentation with Enhanced Topic Coherence"; they did something like this, and with my very limited understanding I am trying to recreate it with sentence_transformers. My question is: should I add positional encoding to the output of SentenceTransformer before passing it to my transformer model? And does this all seem correct? I am not sure, because we use positional encodings when the inputs are words, but in my case I am feeding a whole sentence at each timestep. Any answer would be really helpful.
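Positional encodings are still needed at the sentence level: self-attention is order-invariant, and each SentenceTransformer vector carries no information about where its sentence sits in the transcript, so without them the model cannot tell sentence 3 from sentence 150. A minimal sketch of that stack (dimensions follow the post; everything else is an assumption):

    import math
    import torch
    import torch.nn as nn

    def sinusoidal_pe(max_len, d_model):
        # Standard fixed positional encodings, added to the sentence
        # embeddings so the encoder sees sentence order.
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    class SegmentModel(nn.Module):
        def __init__(self, d_model=384, nhead=4, num_layers=2, max_len=512):
            super().__init__()
            self.register_buffer("pe", sinusoidal_pe(max_len, d_model))
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers)
            self.head = nn.Linear(d_model, 1)   # boundary logit per sentence

        def forward(self, x):                   # x: (batch, sentences, d_model)
            x = x + self.pe[: x.size(1)]
            return self.head(self.encoder(x)).squeeze(-1)

    logits = SegmentModel()(torch.randn(4, 200, 384))   # -> (4, 200)

Train with BCEWithLogitsLoss against the 0/1 boundary labels, and mask padded sentence positions so they don't contribute to the loss.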
0.81
t3_u50nnv
1,650,124,331
LanguageTechnology
BERT toxic text classification problem: predicts on CSVs but not raw texts
`expected shape=(None, 128), found shape=(None, 3)` Why do I get this error when I pass in a single string to predict? I am trying out a hate speech detection model; it predicts on whole datasets but throws this error on single sentences. Any help would be much appreciated! Thank you Notebook link: [https://www.kaggle.com/code/abrh119/test-toxic-comments-bert](https://www.kaggle.com/code/abrh119/test-toxic-comments-bert)
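That error usually means the raw string (or a small list) is reaching the network directly, while the model's input layer wants (batch, 128) token IDs; the dataset path works because it goes through tokenisation first. A sketch of the missing step (the checkpoint name and exact input layout are assumptions; match whatever the notebook used for training):

    from transformers import AutoTokenizer

    # Checkpoint name is assumed; use the tokenizer the model trained with.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    enc = tokenizer("you are awful", padding="max_length", truncation=True,
                    max_length=128, return_tensors="tf")
    print(enc["input_ids"].shape)   # (1, 128), the shape the model expects

    # Then predict on the encoded tensors, not the raw string, e.g.:
    # preds = model.predict([enc["input_ids"], enc["attention_mask"]])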
0.76
t3_u4xqdy
1,650,115,420
LanguageTechnology
Why does this notebook use way too much RAM?
Is there a way to run this notebook without exceeding the memory limit? [https://colab.research.google.com/drive/14Ea4lIzsn5EFvPpYKtWStXEByT9qmbkj?usp=sharing](https://colab.research.google.com/drive/14Ea4lIzsn5EFvPpYKtWStXEByT9qmbkj?usp=sharing) Any help would be much appreciated! Thank you so much
0.5
t3_u4uorb
1,650,103,657
LanguageTechnology
Simplest modifications to spaCy segmentation
The segmentation engine is pretty good, but it doesn't segment on menu headers and random lines of text that aren't complete sentences. I usually segment and then split on newlines afterwards, which works pretty well. However, what are some simple options I can use to customize the segmentation? I know I can load different language models, but as far as I know that's just about speed rather than a fundamentally different segmentation. Or are there models for different text types, for example? Is there any other small thing I can change, in a config file for example, to try out different ways of using their segmentation engine? The sentences are accessible in “doc.sents”, so I'm pretty sure I would need to configure the nlp object before instantiating it on the string; in other words, there's nothing to be done after you've created the nlp object. Thanks very much
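spaCy v3 does support this: you can add a small custom component before the parser that forces sentence starts, e.g. on newlines, so the menu-header splitting happens inside the pipeline instead of as post-processing. A minimal sketch:

    import spacy
    from spacy.language import Language

    @Language.component("newline_boundaries")
    def newline_boundaries(doc):
        # Force a sentence start after any token containing a newline, so
        # menu headers and stray lines become their own "sentences".
        for i, token in enumerate(doc[:-1]):
            if "\n" in token.text:
                doc[i + 1].is_sent_start = True
        return doc

    nlp = spacy.load("en_core_web_sm")
    # Boundary-setting components must run before the parser.
    nlp.add_pipe("newline_boundaries", before="parser")

    doc = nlp("STARTERS\nSoup of the day. Fresh bread is baked daily.")
    print([sent.text for sent in doc.sents])

For purely rule-based splitting you can also swap the parser out for the lighter "sentencizer" component, which takes a configurable list of punctuation characters.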
0.67
t3_u4u9wc
1,650,101,836
LanguageTechnology
How to normalize irrelevant numbers in the dataset?
Hi, I am using a wikidump as the dataset for building a BPE tokenizer with the [tokenizers library from huggingface](https://huggingface.co/docs/tokenizers/python/latest/index.html), and I want to improve the dataset by reducing its noise. Let's say I am building for English and I have the following data: The total area of the United Kingdom is 93,628 square miles (242,500 km2), with an estimated population in 2020 of over 67 million. How should I deal with those numbers? Removing them doesn't make sense to me, but keeping them doesn't look right either. Should I increase the number of occurrences needed for a merge to avoid this noise?
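A common pre-tokenization trick is to collapse every digit sequence into a single placeholder before training, so BPE never spends merges on arbitrary numerals while the sentence structure survives. A minimal sketch (the `<num>` placeholder is an arbitrary choice, and note it also swallows unit-attached digits like the 2 in "km2"):

    import re

    def normalize_numbers(text):
        # Replace each run of digits (with optional , and . separators)
        # by one placeholder token.
        return re.sub(r"\d[\d,.]*", "<num>", text)

    s = ("The total area of the United Kingdom is 93,628 square miles "
         "(242,500 km2), with an estimated population in 2020 of over "
         "67 million.")
    print(normalize_numbers(s))
    # The total area of the United Kingdom is <num> square miles
    # (<num> km<num>), with an estimated population in <num> of over
    # <num> million.

If you go this route, add `<num>` to the tokenizer's special tokens so it is never split back apart.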
0.76
t3_u4i255
1,650,057,466
LanguageTechnology
Salesforce AI Researchers Introduce Converse: A Task-Oriented Dialogue System That Simplifies Chatbot Building And Handles Complex Tasks
One of the long-term ambitions of Artificial Intelligence (AI) has been to create a machine capable of having a meaningful conversation with a person and assisting them in performing tasks. For decades, many academics have dreamed of creating a strong chatbot that can assist humans with very difficult tasks through natural and fluent chats. Recent developments in this domain have made voice assistants and chatbots popular in our everyday lives. Task-oriented conversation systems, for example, can assist you in booking a table at a restaurant, purchasing airline tickets, cross-checking vaccination appointment status, or tracking the status of your transaction.  However, chatbots frequently get trapped in an infinite loop or misunderstand what's being said. This makes people want to connect to a human agent in cases of emergency or urgency, which negates the point of having the chatbot in the first place. Furthermore, building a robust chatbot that can handle several complicated tasks is challenging for many chatbot developers.  This article sheds light on a breakthrough achieved by Salesforce AI Research engineers that helps developers quickly construct smart bots that assist users in performing tasks. The team introduces Converse, a flexible and modular task-oriented conversation system. Using an interactive configuration tool and a little code, bot creators can easily develop tasks in Converse. [Continue Reading](https://www.marktechpost.com/2022/04/15/salesforce-ai-researchers-introduce-converse-a-task-oriented-dialogue-system-that-simplifies-chatbot-building-and-handles-complex-tasks/) Paper: [https://arxiv.org/pdf/2203.12187.pdf](https://arxiv.org/pdf/2203.12187.pdf) Github: [https://github.com/salesforce/Converse](https://github.com/salesforce/Converse)
1
t3_u4ejv1
1,650,047,490
LanguageTechnology
Dictionary for country names or references?
I am doing some text analysis and trying to find out how often countries are named in a series of texts. The problem is that names like USA, US, United States, the States, etc., are not linked in any dictionary that I can find. I spent half a day looking but didn't find anything. Maybe you know of one?
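If no ready-made alias dictionary turns up, the `pycountry` package gets you most of the way, and the informal forms can be patched in by hand. A minimal sketch (the hand-made aliases are illustrative, and note that lower-cased two-letter codes like "us" collide with ordinary words, so you may want to match those case-sensitively):

    import pycountry

    # Official names and ISO codes -> canonical country name.
    aliases = {}
    for c in pycountry.countries:
        for attr in ("name", "official_name", "common_name",
                     "alpha_2", "alpha_3"):
            value = getattr(c, attr, None)
            if value:
                aliases[value.lower()] = c.name

    # Informal forms still need a manual list.
    aliases.update({
        "usa": "United States",
        "the states": "United States",
        "britain": "United Kingdom",
        "uk": "United Kingdom",
    })

    print(aliases["the states"], "|", aliases["usa"])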
0.86
t3_u4dbfa
1,650,044,100
LanguageTechnology
Zero-shot and Few-shot Text Classification Methods
Few-shot and zero-shot learning aim to have ML models make predictions given only a small number of examples, or zero examples, of the relevant class to learn from. 🔥🔥 This tech report from Cloudera Fast Forward Labs covers various techniques for zero-shot and few-shot text classification. Interested? Then watch the full report walkthrough: https://youtu.be/2250AqR1RF8 Tech Report: https://few-shot-text-classification.fastforwardlabs.com/
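For a feel of the zero-shot side, the NLI-based approach is a few lines with the transformers pipeline; a minimal sketch (the model is the usual public NLI checkpoint, not necessarily one from the report):

    from transformers import pipeline

    # Zero-shot classification via entailment: no task-specific training data.
    clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    result = clf("The new update drains my battery in two hours.",
                 candidate_labels=["battery", "display", "performance", "price"])
    print(result["labels"][0], round(result["scores"][0], 3))

Each candidate label is scored by asking how strongly the text entails a hypothesis like "This text is about {label}".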
0.74
t3_u45lml
1,650,020,810
LanguageTechnology
Really struggling with fine-tuning a wav2vec2 model: confused about the difference between 30 vs. 256 vocab sizes
I am trying to fine-tune a wav2vec 2.0 model on only a little Assyrian speech data (~40 minutes). Somebody on this subreddit pointed me to an [Arabic ASR model here](https://huggingface.co/asafaya/hubert-large-arabic-ft) which I would like to fine-tune. I'm struggling with this process, however. I have approximately 5 pages of text on which I can train a tokenizer. Should I go with a vocabulary size of around 30 (1 token per phoneme) or of 256? Here's my thought process: the [examples I'm going off of](https://github.com/huggingface/transformers/blob/main/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md) show how to fine-tune a pretrained [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) checkpoint. This example assumes you train the tokenizer to have 1 token per character. The issue with that is the model I actually want to fine-tune, [this one right here](https://huggingface.co/asafaya/hubert-large-arabic-ft), has an LM head with 256 output nodes for 256 tokens. Does this mean that for my fine-tuning to work, I should use a 256-token tokenizer? I was told this is not a good idea, since I only have 5 pages of text, so the tokenizer will be really bad... Also, for some reason the Arabic ASR model [has a random sequence of layers](https://imgur.com/a/r1e19kl) sitting between the wav2vec2 model and the LM head. I'm so confused; does anybody know what this is doing in here!? Besides this block, the Arabic ASR model and the Facebook wav2vec2 model from the examples have exactly the same architecture. Does this extra block have to do with the fact that the Arabic ASR uses a 256-token tokenizer instead of a ~30-token one? And more importantly, should I freeze this extra block when fine-tuning or not? Thanks, I know this was a lot.
1
t3_u3nynt
1,649,960,799
LanguageTechnology
Spotify's Natural Language Search Explained
nan
0.98
t3_u3iela
1,649,945,362
LanguageTechnology
purpose of using re.escape()
nan
0.4
t3_u3e8gu
1,649,931,839
LanguageTechnology
If anybody has NLP startup ideas, do share
nan
0.25
t3_u39tyk
1,649,913,278
LanguageTechnology
Any idea to improve the performance on multi-class classification problem?
Hi there! I am currently working on a multi-class text classification problem with BERT. My training set contains 12800 sentences, uniformly distributed over 5 classes. The dev set contains 1600 samples. I am using the pre-trained model "bert-base-chinese" from HuggingFace. The problem is, my dev accuracy rose from 20% but is stuck at 52%. Even when I tried different hyperparameters, including the learning rate, batch_size, max token length and dropout rates, the performance didn't get better. Could someone help with this problem and give some pointers for improving the performance? Or should I switch to another model?
1
t3_u2yixx
1,649,878,901
LanguageTechnology
Supercharged UI for MLflow
Hi guys, we've built a plugin that seamlessly reads MLflow logs and provides a beautiful UI to compare multiple runs with just a few clicks. You can:

- filter runs with a super versatile, fully pythonic search
- group and aggregate your metrics / images

We are trying to make it work seamlessly with MLflow and complement its other awesome features 🎉 Here is more info about it: https://bit.ly/3xvyYbn Would love your feedback!!
1
t3_u2tu0c
1,649,866,156