Q_Id: int64 (337 to 49.3M)
CreationDate: stringlengths (23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: stringlengths (6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: stringlengths (6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: stringlengths (15 to 29k)
Title: stringlengths (11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
42,806,319
2017-03-15T09:56:00.000
0
0
0
0
python,django
42,806,401
4
false
1
0
When you first added the foreign key with on_delete='DO_NOTHING', Django generated a migration. Then, after you changed it to on_delete=models.DO_NOTHING, you generated a second migration on top of the broken one. So you have to delete both migrations located in your_project/your_app/migrations/ and then execute makemigrations again. Hope it helps!
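For reference, a minimal sketch of what the corrected field definition should look like once the broken migrations are deleted (the model and field names are placeholders, not the asker's actual code):

    from django.db import models

    class Entrance(models.Model):
        # note: models.DO_NOTHING (the constant), not the string 'DO_NOTHING'
        owner = models.ForeignKey('auth.User', on_delete=models.DO_NOTHING)

After removing the two bad migration files, running python manage.py makemigrations and python manage.py migrate again should regenerate a single clean migration.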
2
0
0
I've been trying to add a foreign key to my models in Django 1.9 with the option on_delete='DO_NOTHING' per instructions on Django Docs, but for version 1.10. I ran python manage.py makemigrations without any problems but when I tried to run python manage.py migrate of course I got the error: django.core.exceptions.FieldDoesNotExist: Entrance has no field named u'DO_NOTHING' Realizing my mistake, I changed the option to on_delete=models.DO_NOTHING and ran makemigrations and migrate again but I'm still getting the same error: django.core.exceptions.FieldDoesNotExist: Entrance has no field named u'DO_NOTHING' Looks like something is wrong in migration files. Not too familiar with internal workings of Django so I don't know where to look to fix this. Any ideas?
Django migration error: has no field named u'DO_NOTHING'
0
0
0
1,411
42,809,895
2017-03-15T12:33:00.000
0
0
0
0
python,python-3.x,pyqt,pygame,pyqt4
53,130,954
1
true
0
1
I found the answer to this issue a while ago, so I thought I should put it here for anyone else. I gave the program an internal exit button in the UI, then made the button call os._exit(0). It worked perfectly after that. Checking the Python and PyQt documentation gave no guidance on closing an application window while a while loop is running. Any other solutions would be appreciated though.
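A minimal sketch of that workaround in PyQt4 (the button and window here are illustrative; the rest of the media player is assumed to exist):

    import os
    from PyQt4 import QtGui

    app = QtGui.QApplication([])
    window = QtGui.QWidget()
    quit_button = QtGui.QPushButton('Quit', window)
    # os._exit() ends the process immediately, bypassing the blocked event loop
    quit_button.clicked.connect(lambda: os._exit(0))
    window.show()
    app.exec_()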
1
0
0
I've been working on a media player in python that uses pygame.mixer to play music and PyQt4 to build the UI. I'm currently using a while loop to check if a song is finished so the next song can be loaded in from the queue. In order to make sure that the user can still interact with the PyQt buttons while the while loop is running, I have put QtCore.QCoreApplication.processEvents() into the while loop. However, now when I close the main window while music is playing, the program does not stop (as it normally does) and the music keeps playing. The program also can't detect the app.aboutToQuit.connect() command while the music is playing (meaning it can when music isn't playing). Any help would be greatly appreciated.
Shutting down a Python program using PyQt and PyGame.music()
1.2
0
0
157
42,810,240
2017-03-15T12:48:00.000
0
0
0
0
python,c++,tensorflow,benchmarking,inference
42,902,517
1
false
0
0
Write in the language that you are familiar with, in a way that you can maintain. If it takes you a day longer to write it in the "faster" language, but only saves a minute of runtime, then it'll have to run 24*60 times to have caught up, and multiple times more than that to have been economical.
1
3
1
Is it really worth it to write C++ code for loading an already trained model and then running inference on it, instead of using Python? I was wondering this because, as far as I understand, TensorFlow for Python is C++ behind the scenes (as it is for numpy). So if one ends up basically having a Python program fetching predictions from a model loaded through a Python module, it should perform pretty similarly to using a module in C++, right? Is there any benchmark? I wasn't able to find anything supporting this theory. Thanks!
Python vs C++ Tensorflow inferencing
0
0
0
789
42,812,125
2017-03-15T14:06:00.000
0
0
0
1
python,celery
55,646,504
3
false
0
0
We have tasks that may run for up to 48 hours. The graceful restart you talk about is very common when we have a new release and deploy the new version to production. What we do is simply send the SIGTERM (shutdown) signal to the running workers, and then spin up a completely new set of workers in parallel.
1
2
0
I need to restart the celery daemon but I need it to tell the current workers to shutdown as their tasks complete and then spin up a new set of workers while the old ones are still shutting down. The current graceful option on the daemon waits for all tasks to complete before restarting which is not useful when you have long running jobs. Please do not suggest autoreload as it is currently undocumented in 4.0.2.
Celery Production Graceful Restart
0
0
0
3,057
42,812,962
2017-03-15T14:43:00.000
0
0
1
0
python,ipython
42,813,438
2
true
0
0
The answer is !ls -lh, which returns file size info.
1
0
0
Is it possible to get file size info with a magic command in notebook? I've searched the internet but couldn't find an answer.
Get file size info in Notebook with $magic
1.2
0
0
2,514
42,814,029
2017-03-15T15:25:00.000
0
0
0
0
python,qt-designer,qspinbox,qradiobutton
42,814,203
1
true
0
1
I have no idea what you have been doing (perhaps you should mention what exactly, instead of writing "tried several things" ;)), but yes, you can. In the signal/slot editing mode, select the radio button and drag a connection to the spinbox. For the signal emitted by the radio button choose toggled(bool). For the slot on the spinbox, you first need to check "Show signals and slots inherited from QWidget" (at the bottom of the connection dialog) in order to display all possible slots, and then select either setDisabled(bool) or setEnabled(bool).
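The same connection can also be made in code rather than in Qt Designer; a minimal sketch, assuming PyQt4 (widget names are illustrative):

    from PyQt4 import QtGui

    app = QtGui.QApplication([])
    window = QtGui.QWidget()
    layout = QtGui.QHBoxLayout(window)
    radio = QtGui.QRadioButton('Disable spinbox')
    spin = QtGui.QSpinBox()
    layout.addWidget(radio)
    layout.addWidget(spin)
    # toggled(bool) emits True when the radio button is checked,
    # which setDisabled(True) turns into a greyed-out spinbox
    radio.toggled.connect(spin.setDisabled)
    window.show()
    app.exec_()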
1
0
0
Is it possible to deactivate a spinbox if a radio button is chosen? I've tried several things, but either the spinbox is deactivated all the time or it won't deactivate at all.
Deactivate a spinbox in Qt Designer
1.2
0
0
403
42,817,322
2017-03-15T17:59:00.000
0
0
0
0
android,appium,robotframework,python-appium
62,967,696
1
false
1
0
capabilities.setCapability("autoAcceptAlerts", true);
1
1
0
I am downloading an .apk file from a link. I open browser > Go to URL > Chrome browser is displayed. How do I click on "OK" from the alert. I am using AppiumLibrary.
Handle browser alert/pop up in Robot framework + appium
0
0
1
480
42,817,337
2017-03-15T17:59:00.000
0
0
0
0
python,arrays,numpy,save
42,817,461
3
false
0
0
How about ndarray's .tofile() method? To read use numpy.fromfile().
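A minimal sketch of that approach (binary file, fixed dtype; the filename and array contents are illustrative):

    import numpy as np

    N = 5
    # append each iteration's array to the same binary file
    for i in range(3):
        theta = np.arange(i, i + N, dtype=np.float64)  # stand-in for the real output
        with open('output.bin', 'ab') as f:
            theta.tofile(f)

    # later: read everything back, one row per iteration
    data = np.fromfile('output.bin', dtype=np.float64).reshape(-1, N)
    print(data)

Note that .tofile()/np.fromfile() store raw bytes with no header, so the dtype and row length must be known when reading back; np.savetxt on a file handle opened in append mode is an alternative if a human-readable file is preferred.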
1
0
1
I have a code which outputs an N-length Numpy array at every iteration. Eg. -- theta = [ 0, 1, 2, 3, 4 ] I want to be able to save the arrays to a text file or .csv file dynamically such that I can load the data file later and extract appropriately which array corresponds to which iteration. Basically, it should be saved in an ordered fashion. I am assuming the data file would look something like this:- 0 1 2 3 4 1 2 3 4 5 2 3 4 5 6 ... (Random output) I thought of using np.c_ but I don't want to overwrite the file at every iteration and if I simply save the terminal output as > output.txt, it saves as arrays including the brackets. I don't know how to read such a text file. Is there a proper method to do this, i.e. write and read the data?
How do I save numpy arrays such that they can be loaded later appropriately?
0
0
0
741
42,819,741
2017-03-15T20:10:00.000
0
0
1
0
ipython,anaconda,ipython-notebook,jupyter-notebook,jupyter
42,832,210
1
false
0
1
Option 1 is, I think, either not possible or very difficult - it's hard in regular Python, never mind Jupyter. In regular Python you'd probably use something like curses, but I tried it in Jupyter and it affects the console Jupyter is running from - not the stdout in the notebook. If you really wanted option 2, you could probably write your own version of input that loops listening for keystrokes, submits once a newline is pressed, and emits a carriage return \r after stripping the trailing newline; the carriage return should mean that the next line overwrites the previous one. Option 3, caching the input, shouldn't be too difficult, I presume? Just store the guesses in a list or something, then you can display them back to the user after calling IPython.display.clear_output.
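A minimal sketch of option 3 (the feedback function is hypothetical and stands in for the asker's CSS rendering):

    from IPython.display import clear_output

    def render_feedback(guess):
        # placeholder for the LINGO-style colouring logic
        return guess.upper()

    guesses = []
    while len(guesses) < 5:
        guess = input('Guess a word: ')
        guesses.append(guess)
        clear_output(wait=True)          # wipes the cell output, including old prompts
        for g in guesses:
            print(render_feedback(g))    # re-draw only the feedback lines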
1
0
0
Before I get reamed for this, I know there are some posts detailing how to clear ALL the output from a cell, but I'm interested in only clearing part of it. Let me provide some background. I am creating a word-guessing game like LINGO where the user is prompted to guess a five-letter word, then using CSS to provide some visual feedback as to which letters are in the correct position, and which letters are in the word but in the wrong position. The way my program is structured, it displays the feedback after each guess and then prompts the user again and displays the guess. So something like this: Guess a word: word FEEDBACK guess again: word FEEDBACK ... You get the picture. My goal is to come as close to duplicating LINGO as possible, which would mean removing the user input from the screen after it has been submitted and having it show a sequence of feedback. This to me means one of three things: 1) Find a way to clear part of the output 2) Find a way to prompt the user for text input without displaying it on the screen 3) Cache the user input, delete the all the output after each iteration, and display the cached guesses. I've researched 1 and 2 and haven't been able to find anything. 3 would be a PITA so before I went to the trouble I thought I would ask.
Clear part of an IPython cell's output
0
0
0
538
42,819,987
2017-03-15T20:24:00.000
1
0
0
0
python,r,machine-learning,scikit-learn,xgboost
42,821,370
1
true
0
0
Since XGBoost uses decision trees under the hood, it can give you slightly different results between fits if you do not fix the random seed so that the fitting procedure becomes deterministic. You can do this via set.seed in R and numpy.random.seed in Python. As Gregor noted in a comment, you might also want to set the nthread parameter to 1 to achieve full determinism.
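On the Python side that might look roughly like this; note the argument names have changed between xgboost releases (older versions use seed/nthread, newer ones random_state/n_jobs), so treat these as an assumption to check against your installed version:

    import numpy as np
    from xgboost import XGBRegressor

    np.random.seed(42)                        # fix numpy's global RNG
    model = XGBRegressor(seed=42, nthread=1)  # single thread + fixed seed for determinism
    # model.fit(X_train, y_train)

The R side would pair this with set.seed(42) and the equivalent seed/nthread arguments to xgb.train.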
1
1
1
I'm using python's XGBRegressor and R's xgb.train with the same parameters on the same dataset and I'm getting different predictions. I know that XGBRegressor uses 'gbtree' and I've made the appropriate comparison in R, however, I'm still getting different results. Can anyone lead me in the right direction on how to differentiate the 2 and/or find R's equivalence to python's XGBRegressor? Sorry if this is a stupid question, thank you.
Python's XGBRegressor vs R's XGBoost
1.2
0
0
1,321
42,821,309
2017-03-15T21:47:00.000
1
0
0
0
python-3.x,sockets,packet-sniffers
45,215,859
1
false
0
0
It's not documented on docs.python.org, so I did some research; I'm now in a position to answer it myself. The tuple returned by recvfrom is similar to the sockaddr_ll structure returned by the Linux kernel. It contains 5 components:
- [0]: interface name (e.g. 'eth0')
- [1]: protocol at the physical level (defined in linux/if_ether.h)
- [2]: packet type (defined in linux/if_packet.h)
- [3]: ARPHRD (defined in linux/if_arp.h)
- [4]: physical address
The example provided in the question decodes to: 'eno1', ARP protocol (0x806), incoming packet, Ethernet frame, MAC address. In the case of a WiFi interface in monitor mode, element [3] would be 803 (meaning 'IEEE802.11 + RadioTap header'). Hope this helps somebody.
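A minimal sketch that unpacks those fields (requires root and a Linux AF_PACKET socket, as in the question):

    import socket

    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
    packet, addr = s.recvfrom(65565)

    ifname, proto, pkttype, hatype, hwaddr = addr
    print('interface :', ifname)
    print('protocol  :', hex(proto))      # e.g. 0x806 for ARP
    print('pkt type  :', pkttype)         # PACKET_* constant from linux/if_packet.h
    print('ARPHRD    :', hatype)          # 1 = Ethernet, 803 = IEEE802.11 + RadioTap
    print('hw address:', hwaddr.hex())    # bytes.hex() is available on Python 3.5+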
1
1
0
I'm developing a custom packet sniffer in Python3. It does not have to be platform independant. I'm using Linux. The method I use is to recvfrom() from a socket (AF_PACKET, SOCK_RAW). It works fine but I have problem with info returned by recvfrom(). recvfrom() returns a tuple with 5 components. Example: ('eno1', 2054, 0, 1, b'\x00!\x9b\x16\xfa\xd1') How do I interpret the last 4 components? Where is it documented? I prefer not to use libpcap or scapy. OK! here's a code fragment: s = socket.socket( socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003)) ... packet,pktAdr = s.recvfrom(65565) print( 'pktAdr:'+str(pktAdr)) Thanks!
How to interpret result of recvfrom (raw socket)
0.197375
0
1
873
42,822,902
2017-03-15T23:53:00.000
0
0
1
1
python-3.5
57,057,654
6
false
0
0
I faced the same issue and my solution was to: (1) create a new environment with Python 3.5: conda create -n pht python=3.5 anaconda, and (2) install Prophet with: conda install -c conda-forge fbprophet. I didn't install gcc, although this was advised before installing Prophet.
1
2
0
Can someone help me with installing the Python package Prophet on Windows 10? I tried installing Python 3.5 and the dependency pystan, but I still get the error below. "The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. Command "python setup.py egg_info" failed with error code 1 in c:\users\suman\appdata\local\temp\pip-build-aqoiqs\fbprophet\"
Can someone help me in installing python package "Prophet" on windows 10
0
0
0
14,608
42,823,336
2017-03-16T00:43:00.000
1
0
0
0
python,django
42,823,497
1
true
1
0
You will want to save() them when the user submits the form at the start. Add a BooleanField to your model that says whether the row has been moderated and accepted. Then in your application, filter out all non-moderated rows, and on the admin side, filter out only rows that need moderation.
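A minimal sketch of that pattern (model and field names here are placeholders for the asker's actual models):

    from django.db import models

    class Submission(models.Model):
        # ...the fields collected from the user's form...
        is_approved = models.BooleanField(default=False)

    # normal users only ever see approved rows
    approved = Submission.objects.filter(is_approved=True)
    # the admin's moderation view lists the pending ones
    pending = Submission.objects.filter(is_approved=False)

When the admin accepts an entry, the view simply sets is_approved = True and calls save(); rejecting it deletes the row.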
1
1
0
I'm working on a project where I need the communication to happen in 3 parts. Let me explain: 1.- The user fills in a form, which is used to create a set of objects from the models. These objects are not saved in the database at this moment. All the objects are related to each other. 2.- These objects must be stored in some way so that an admin user can check the data and decide whether to save it, or reject the form if any element is invalid. 3.- If the admin decides that the data is correct, they select an option that calls .save() to add the data to the database. At least that's the idea of how it should work. I decided to create the objects before sending them to the admin because it sounded easier to show than if I sent the request.POST and the request.FILES. My problem is that I don't know where I could store the objects to show them to the admin when he connects (there are 2 types of users, normal and admin; the normal ones fill in the forms, the admins only check them, and each has their own views). So, does anyone know how I could send the data and store it until the admin connects? I'm open to any idea if this is not possible; the only thing that is necessary is the flow of: user fills the form, admin checks it, data is saved or rejected.
How to storage objects for communication without the database in Django?
1.2
0
0
38
42,826,560
2017-03-16T06:14:00.000
6
0
0
1
python,mysql,google-app-engine,google-cloud-sql
42,827,972
1
false
1
0
Figured it out eventually - perhaps this will be useful to someone else encountering the same problem. Problem: The problem was that the "Cloud SQL Editor" role is not a superset of the "Cloud SQL Client", as I had imagined; "Cloud SQL Editor" allows administration of the Cloud SQL instance, but doesn't allow basic connectivity to the database. Solution: Deleting the IAM entry granting Cloud SQL Editor permissions and replacing it with one granting Cloud SQL Client permissions fixed the issue and allowed the database connection to go through.
1
3
0
I'm attempting to access a Google Cloud SQL instance stored on one Cloud Platform project from an App Engine application on another project, and it's not working. Connections to the SQL instance fail with this error: OperationalError: (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 38") I followed the instructions in Google's documentation and added the App Engine service account for the second project to the IAM permissions list for the project housing the Cloud SQL instance (with "Cloud SQL Editor" as the role). The connection details and configuration I'm using in my app are identical to those being used in a perfectly functioning App Engine app housed in the same project as the Cloud SQL instance. The only thing that seems off about my configuration is that in my second GCP project, while an App Engine service account that looks like the default one ([MY-PROJECT-NAME]@appspot.gserviceaccount.com) appears in the IAM permissions list, this service account is not listed under the Service Accounts tab of IAM & Admin. The only service account listed is the Compute Engine default service account. I haven't deleted any service accounts; there's never been an App Engine default service account listed here, but apart from the MySQL connection the App Engine app runs fine. Not sure if it's relevant, but I'm running a Python 2.7 app on the App Engine Standard Environment, connecting using MySQLdb.
Can't access Google Cloud SQL instance from different GCP project, despite setting IAM permissions
1
1
0
3,103
42,828,585
2017-03-16T08:17:00.000
0
0
0
0
user-interface,pyqt,wxpython
43,019,627
1
false
0
1
Learning PyQt will definitely help you learn to create iOS and Android apps. Even more, PyQt comes with Qt Designer, which is a visual editor for creating apps with minimal coding. Tkinter, in my perspective, is for very light GUI programming. If you feel like you want to make money off these apps, I would highly advise you to check out Kivy.
1
0
0
I made a really simple, pseudo GUI using pythondialogue (it's a wrapper for whiptail for bash), and I need it to be cross-platform between Linux and Mac OS X. The main issue is it's really hard to find information on pythondialogue; the only documentation seems to be on their own site. I would just use whiptail, but I'm learning Python, so I'm using this to hone my Python skills. What I like about pythondialogue (and whiptail) is that it's not really a GUI, just a dialogue inside the CLI, so it can be used purely through the command line, such as when you SSH to the computer you want to run it on. Can tkinter do this too? Either way, a big thing I'm wondering is what benefits tkinter would provide over regular pythondialogue. Obviously the difference is it lets you create proper GUI applications, but would it be wisest to only create a GUI application in cases where it's absolutely necessary? tkinter sounds like the easiest way to code GUIs in Python. What disadvantages does it have compared to PyQt or wxPython? I want to start developing mobile apps as soon as possible, and I see iOS and Android apps can be written using Python, and Qt can be used to write both Android and iOS apps. So with this in mind, would learning PyQt mean I would also be developing the skills I'll need to create iOS and Android apps? If so, this is most definitely what I'm going to do.
Python - pythondialogue vs. tkinter vs. wxPython vs. PyQt
0
0
0
1,504
42,830,209
2017-03-16T09:37:00.000
0
0
1
0
python,arrays
42,830,903
2
false
0
0
Just read your text document, pick out the right pieces of information, then put them into 3 different lists for name, GTIN and price. Maybe you can show what your document looks like.
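A minimal sketch, assuming the file is comma-separated with one header line, exactly as in the example shown in the question (the filename is illustrative):

    gtins, names, prices = [], [], []
    with open('products.txt') as f:
        next(f)                               # skip the 'GTIN 8 NO., NAME., PRICE.' header
        for line in f:
            if not line.strip():
                continue
            gtin, name, price = [part.strip().rstrip('.') for part in line.split(',')]
            gtins.append(gtin)
            names.append(name)
            prices.append(float(price))

    print(names[gtins.index('74632558')])     # -> OATMEAL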
1
0
0
I am trying to make a receipt creator where you "buy" products and then go to the checkout to confirm you want to "buy" your items (this is just a Python program, you don't spend money). However, I feel like this could become extremely easy for me if I could put the names of all of the items in one array, the GTIN-8 numbers into another and the prices into a final array. My problem is that I MUST use some sort of text document to store the items with their GTIN-8 number and their price. Is it possible to do this, and if so, how? Here is an example of a document that I would use: GTIN 8 NO. NAME. PRICE. 66728009, NET, 10.00 74632558, OATMEAL, 5.00 05103492, FISHING ROD, 20.00 45040122, FISH BAIT, 5.00 20415112, MILK, 2.00 37106560, SHOES, 25.00 51364755, T-SHIRT, 10.00 64704739, TROUSERS, 15.00 47550544, CEREAL, 2.00 29783656, TOY, 10.00
Python: how to add different part of a text document to different arrays
0
0
0
47
42,832,429
2017-03-16T11:11:00.000
0
0
1
1
python,argparse
42,842,312
1
false
0
0
rsync has been around long enough that it (or many implementations) probably uses getopt for parsing the options (if it doesn't do its own parsing). Python has a version of getopt. Neither the C version nor the Python one has a mechanism for replacing a -a option with -rlptgoD; any such replacement is performed after parsing. The primary purpose of a parser is to decode what the user wants; acting on that information is the responsibility of your code. I can imagine writing a custom Action class that would set multiple attributes at once, but it wouldn't save any coding work - it would look a lot like an equivalent function used after parsing.
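For the record, a minimal sketch of such a custom Action (the flag names mimic a few of rsync's, purely as an illustration):

    import argparse

    class ArchiveAction(argparse.Action):
        # expands -a into several boolean attributes on the namespace
        def __call__(self, parser, namespace, values, option_string=None):
            for attr in ('recursive', 'links', 'perms', 'times'):
                setattr(namespace, attr, True)

    parser = argparse.ArgumentParser()
    parser.add_argument('-r', dest='recursive', action='store_true')
    parser.add_argument('-l', dest='links', action='store_true')
    parser.add_argument('-p', dest='perms', action='store_true')
    parser.add_argument('-t', dest='times', action='store_true')
    parser.add_argument('-a', nargs=0, action=ArchiveAction)

    print(parser.parse_args(['-a']))
    # Namespace(links=True, perms=True, recursive=True, times=True)

As the answer says, handling -a after parse_args() with a plain function is just as much code; the custom Action merely keeps the expansion inside the parser.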
1
0
0
Some Linux commands provide a single option that is equivalent to a given group of options, for convenience. For example, rsync has an option -a which is equivalent to -rlptgoD. For a Python script, is it possible to implement this behaviour using argparse? Or should I just pass the -a option to my code and handle it there?
Can Python's argparse replace a single option by a group of options?
0
0
0
65
42,835,852
2017-03-16T13:44:00.000
3
0
1
0
python,nlp,nltk,pos-tagger,lemmatization
42,841,898
2
true
0
0
Part of speech is important for lemmatisation to work, as a word can have different meanings depending on its part of speech, and using this information lemmatisation returns the base form, or lemma. So it would be better if the POS-tagging implementation is done first. The main idea behind lemmatisation is to group different inflected forms of a word into one. For example, go, going, gone and went will all become just one - go. But to derive this, lemmatisation has to know the context of a word - whether it is a noun or a verb, etc. So the lemmatisation function can take the word and the part of speech as input and return the lemma after processing that information.
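NLTK's WordNet lemmatiser illustrates the dependency nicely - the lemma it returns changes with the POS tag you give it (requires the WordNet corpus, e.g. nltk.download('wordnet')):

    from nltk.stem import WordNetLemmatizer

    lemmatizer = WordNetLemmatizer()
    print(lemmatizer.lemmatize('went', pos='v'))      # go
    print(lemmatizer.lemmatize('meeting', pos='n'))   # meeting
    print(lemmatizer.lemmatize('meeting', pos='v'))   # meet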
2
1
1
If I wanted to make an NLP toolkit like NLTK, which feature would I implement first after tokenisation and normalisation: POS tagging or lemmatisation?
Which comes first in order of implementation: POS Tagging or Lemmatisation?
1.2
0
0
519
42,835,852
2017-03-16T13:44:00.000
2
0
1
0
python,nlp,nltk,pos-tagger,lemmatization
42,875,622
2
false
0
0
Sure, make the POS tagger first. If you do lemmatisation first you could lose the best possible classification of words when doing the POS tagging, especially in languages where ambiguity is commonplace, as it is in Portuguese.
2
1
1
If I wanted to make an NLP toolkit like NLTK, which feature would I implement first after tokenisation and normalisation: POS tagging or lemmatisation?
Which comes first in order of implementation: POS Tagging or Lemmatisation?
0.197375
0
0
519
42,835,931
2017-03-16T13:47:00.000
1
0
0
0
python,cassandra,cqlsh,cassandra-2.1
42,836,985
1
false
0
0
Assuming you have access to port 9042 (CQL protocol port) you can make a direct connection and perform any operation like normal. Where the actual nodes live doesn't matter. To ensure the lowest latency, you'll want the clients as close to the cluster nodes as possible.
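A minimal connection sketch with the DataStax Python driver (host, credentials and keyspace settings are placeholders for whatever the cloud service exposes):

    from cassandra.cluster import Cluster
    from cassandra.auth import PlainTextAuthProvider

    auth = PlainTextAuthProvider(username='user', password='secret')
    cluster = Cluster(['cassandra.example.com'], port=9042, auth_provider=auth)
    session = cluster.connect()

    session.execute(
        "CREATE KEYSPACE IF NOT EXISTS demo "
        "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}"
    )
    session.set_keyspace('demo')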
1
0
0
How do I create a Cassandra keyspace in Predix cloud and perform operations like create, insert, update and select?
Connecting to cloud cassandra using python cassandra driver
0.197375
0
0
37
42,837,372
2017-03-16T14:47:00.000
2
0
0
0
python,flask,flask-login
42,837,507
1
true
1
0
In the same way that you logged in the current user, call login_user with a different user instance to become that user. You can't acquire their actual session data though unless you're using server side sessions. The default cookie-based session only stores the data as a cookie, which only the real user has access to. Consider the security implications before implementing something like this. If a superuser can become any user, then only one login needs to be compromised to get everyone's data.
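A minimal sketch of an impersonation view built on that idea (the superuser flag and the User model are assumptions about the asker's app, not Flask-Login API):

    from flask import abort, redirect, url_for
    from flask_login import current_user, login_required, login_user

    @app.route('/impersonate/<int:user_id>')
    @login_required
    def impersonate(user_id):
        if not current_user.is_superuser:      # hypothetical flag on the user model
            abort(403)
        target = User.query.get_or_404(user_id)
        login_user(target)                     # the session now belongs to `target`
        return redirect(url_for('index'))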
1
1
0
I need to implement a special superuser account with ability to change the current session to a new session of any other user. How do I change to a different account if I'm already logged in?
Change Flask-Login currently logged in user
1.2
0
0
475
42,838,181
2017-03-16T15:21:00.000
0
0
1
0
python-3.x,python-imaging-library,macos-sierra,pillow
43,301,432
2
false
0
0
To add onto @Roland_Smith's answer: this worked for me - brew install homebrew/science/pillow
1
0
0
I've just updated my OS and cannot import PIL. I installed Xcode after I updated my OS so that is up to date. I've seen many answers on this site refer to older versions of MAC OS X but nothing has worked for me. I installed Pillow as that was the advice given in another answer. I installed Pillow with brew install pillow In my module I have from PIL import Image And I'm getting the error: ModuleNotFoundError: No module named 'PIL'
Can't install PIL on MAC OS X 10.12
0
0
0
1,619
42,839,785
2017-03-16T16:32:00.000
0
0
1
0
biopython
43,069,961
1
true
0
0
I ended up subclassing the Biopython EMBL parser and hacking up my own GENESEQ parser.
1
0
0
I'm trying to parse files from the Derwent GENESEQ database. The files are supposedly in EMBL format, but there are small differences that break SeqIO.parse('foo.dat', 'embl'). Has anyone successfully parsed these files with Biopython or other Python libs?
Can Biopython parse Derwent GENESEQ format?
1.2
0
0
49
42,840,072
2017-03-16T16:45:00.000
12
0
1
0
python,string
42,840,110
1
true
0
0
F-strings were a new feature introduced in Python 3.6, so of course they're not going to work in 3.5.2.
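If upgrading isn't an option, str.format gives the same result on 3.5:

    method = 'POST'
    # f-strings need Python 3.6+; on 3.5 use str.format (or %-formatting) instead
    print('{} method is used.'.format(method))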
1
3
0
I have a variable called method; its value is POST, but when I try to run print(f"{method} method is used.") it keeps giving a syntax error at the last double quote and I can't find the reason why it does this. I am using Python 3.5.2.
F string prefix in python giving a syntax error
1.2
0
0
8,625
42,841,892
2017-03-16T18:16:00.000
0
0
1
0
python,nlp,information-retrieval
42,842,833
2
false
0
0
It just depends on what you're calling a "document". To me it sounds like you're describing a paragraph within a document. It could be a document, but then you'd have to identify "documents" by the full text and some sort of offset within that document, and feed the documents in appropriately.
1
0
0
For example, when calculating the generality discount for a word in a corpus, one formula is log(N/n), where N is the number of documents in the corpus and n is the number of documents that contain the word. Is a document a string that ends with a new line?
Is a document a sentence in a text file?
0
0
0
42
42,844,636
2017-03-16T20:57:00.000
6
0
1
0
python,python-3.x,functools
42,844,721
3
false
0
0
As @HaiVu said in his comment, partial called in a class definition will behave like a staticmethod, while partialmethod will create a new bound method which, when called, will be passed self as the first argument.
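These are essentially the examples from the functools documentation; partial wraps a plain callable, while partialmethod knows about the descriptor protocol and therefore still binds self:

    from functools import partial, partialmethod

    basetwo = partial(int, base=2)     # a plain callable with an argument pre-filled
    print(basetwo('10010'))            # 18

    class Cell:
        def __init__(self):
            self._alive = False

        def set_state(self, state):
            self._alive = bool(state)

        # bound methods that pre-fill the `state` argument
        set_alive = partialmethod(set_state, True)
        set_dead = partialmethod(set_state, False)

    c = Cell()
    c.set_alive()
    print(c._alive)                    # True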
1
21
0
I found out that the functools module of Python 3 has two very similar utilities: partial and partialmethod. Can someone provide good examples of using each one?
What is the difference between partial and partialmethod?
1
0
0
5,872
42,845,548
2017-03-16T21:59:00.000
1
0
1
0
python-3.x,pip,virtualenv
42,846,135
1
true
0
0
Fixed by using pip3 freeze > requirements.txt
1
1
0
I need to copy a Python 3.6 environment to another machine (using Windows 10 on both). There are several questions addressing this to some extent, but they all come to the same conclusions, which either aren't working for me or I am missing something. Basically, everyone says: use virtualenv <path\to\env> --system-site-packages to make an environment, activate it, run pip freeze > requirements.txt, then on the other machine make a new virtual env, activate it and run pip install -r requirements.txt. I could not get the first step to work because I also had Python 2.7 installed, and the --python option also didn't work. I did some digging and found this command, which worked: python -m venv <path/to/env>. Once in my (activated) venv, I ran pip freeze > requirements.txt, which worked fine, but when I went to install into another "blank" virtual environment - with pip install -r requirements.txt - I got the following error: "No matching distribution found for backports.datetime-timestamp==1.0.2.dev0". After looking into that, it seems that the requirements.txt file included all of my libraries, including the built-in ones... at least that was what it seemed like. I am wondering if there is a way to have pip freeze ignore built-in libraries, or otherwise whether there is a better way to move virtual envs? I could also zip the whole virtual environment up, but it seemed like most people discouraged doing that; if anyone could also shed some light on why this is bad practice, that would be useful. Or did I just mess up some step along the way?
Clean way to move python 3.6 venv between machines?
1.2
0
0
2,009
42,846,803
2017-03-16T23:48:00.000
2
0
1
0
python,pycharm
47,140,011
4
false
0
0
If you use Windows 10 (64-bit), run your code using Ctrl+Shift+F10, or simply right-click in the editor and choose Run from the context menu.
3
5
0
If I go to "tools" and select "python console", and enter several lines of code, how do I execute this? If my cursor is at the end of the script, I can just hit enter. But how can I run the code using keyboard shortcuts if the cursor is not at the end? In Spyder this is done using shift+enter, but I can't figure out how to do it here. I've seen places say control+enter, but that doesn't work. Thanks!
How to run code in Pycharm
0.099668
0
0
41,595
42,846,803
2017-03-16T23:48:00.000
1
0
1
0
python,pycharm
49,919,793
4
false
0
0
On a Mac you can use Fn+Shift+F10. Happy coding with Python.
3
5
0
If I go to "tools" and select "python console", and enter several lines of code, how do I execute this? If my cursor is at the end of the script, I can just hit enter. But how can I run the code using keyboard shortcuts if the cursor is not at the end? In Spyder this is done using shift+enter, but I can't figure out how to do it here. I've seen places say control+enter, but that doesn't work. Thanks!
How to run code in Pycharm
0.049958
0
0
41,595
42,846,803
2017-03-16T23:48:00.000
0
0
1
0
python,pycharm
52,305,481
4
false
0
0
Right-click on the project name, select New, then select Python File. PyCharm needs to know you're running a Python file before the Run option becomes available.
3
5
0
If I go to "tools" and select "python console", and enter several lines of code, how do I execute this? If my cursor is at the end of the script, I can just hit enter. But how can I run the code using keyboard shortcuts if the cursor is not at the end? In Spyder this is done using shift+enter, but I can't figure out how to do it here. I've seen places say control+enter, but that doesn't work. Thanks!
How to run code in Pycharm
0
0
0
41,595
42,848,692
2017-03-17T03:26:00.000
0
0
0
1
google-app-engine,google-app-engine-python
42,870,689
1
false
1
0
A common reason for not seeing your changes instantly after deploying is that you didn't change the application version. Instances with the same version will continue serving traffic until they die off, which could take a while. If instead you bump the default version, traffic will only be routed to instances that are running the newer version of the code.
1
0
0
I have an app on Google App Engine, I used to deploy my application and see the deployed files and changes instantly. But recently I have to wait about 5 minutes to see if the files are changed. The only thing that I suspect is that I changed the application Zone. I am not sure what was the default Zone but now I set it to us-central1-a. How can I solve this issue? I want to see all changes instantly as before. Thanks!
After deploying python app on Google App Engine changes are reflected after several minutes
0
0
0
47
42,849,056
2017-03-17T04:06:00.000
1
0
0
0
python,mongodb,pymongo
42,850,060
2
false
0
0
Secondaries are read-only by default. However, you can specify the read preference to read from secondaries; by default, reads go to the primary. This can be achieved using readPreference=secondary in the connection string.
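With PyMongo that can look roughly like this (hosts and replica-set name are placeholders):

    from pymongo import MongoClient, ReadPreference

    # read preference set for the whole client via the connection string
    client = MongoClient(
        'mongodb://host1:27017,host2:27017,host3:27017/'
        '?replicaSet=rs0&readPreference=secondary'
    )

    # or per collection, leaving the client default untouched
    coll = client.mydb.get_collection('mycoll',
                                      read_preference=ReadPreference.SECONDARY)

Note that this only steers reads to secondaries; it is not an enforced read-only connection, so writes issued through the same client would still be sent to the primary.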
1
1
0
How to achieve a read-only connection to the secondary nodes of the MongoDB. I have a primary node and two secondary nodes. I want a read-only connection to secondary nodes. I tried MongoReplicaSetClient but did not get what I wanted. Is it possible to have a read-only connection to primary node?
How to achieve a read only connection using pymongo
0.099668
1
0
1,793
42,849,572
2017-03-17T05:02:00.000
4
0
0
0
python,dataframe,dask
42,857,362
1
true
0
0
Short answer is probably "no, there is no way to do this without looking at the data". The reason here is that the structure of the graph depends on the values of your lazy partitions. For example we'll have a different number of nodes in the graph depending on your total datasize.
1
4
1
I have a dask dataframe created from delayed functions which is comprised of randomly sized partitions. I would like to repartition the dataframe into chunks of size (approx) 10000. I can calculate the correct number of partitions with np.ceil(df.size/10000) but that seems to immediately compute the result? IIUC to compute the result it would have had to read all the dataframes into memory which would be very inefficient. I would instead like to specify the whole operation as a dask graph to be submitted to the distributed scheduler so no calculations should be done locally. Is there some way to specify npartitions without having it immediately compute all the underlying delayed functions?
How to repartition a dataframe into fixed sized partitions?
1.2
0
0
298
42,850,344
2017-03-17T06:06:00.000
0
0
0
0
python,facebook
42,853,495
2
false
0
0
If I'm not mistaken, 'y' is the data you're using to fit with, i.e. the input to Prophet. 'yhat' is the mean (or median?) of the predicted distribution.
1
1
1
I can use fbprophet (in Python) to get some predicted data, but it just includes 't', 'yhat', 'yhat_upper', 'yhat_lower' and so on, rather than 'y', which I also want to acquire. At present I think I can't get the value of 'y' from the predicted data because Prophet doesn't work for predicting a future value of 'y' itself. Am I predicting in the wrong way?
How can i get 'y' rather than 'yhat' from predicted data by using fbprophet?
0
0
0
475
42,853,347
2017-03-17T09:13:00.000
0
1
0
1
python,mongodb,pymongo
65,763,003
1
false
0
0
First you need to ensure that you are in the correct folder; for example, you can write cd name_of_folder. Then, to run it, you need to type python your_file_name.py
1
0
0
I have a Python file named abc.py. I can run it in MongoDB with the help of Robomongo, but I couldn't run it in cmd. Can anyone tell me how to run a .py file in MongoDB using cmd?
how to run python file in mongodb using cmd
0
0
0
82
42,858,256
2017-03-17T13:00:00.000
0
0
0
0
python,user-interface,kivy
42,865,590
1
false
0
1
Just like the way Toast works in KivyMD, you can try to implement that feature: use an animation to move/hide the widget off the screen and then call a function to display it again. Go through the KivyMD toast source to see how you can do it.
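A minimal sketch of the general idea with Kivy's Animation - fading the widget out and disabling it rather than removing it from its parent (function names are illustrative):

    from kivy.animation import Animation

    def hide_widget(widget):
        widget.disabled = True
        Animation(opacity=0, d=0.2).start(widget)

    def show_widget(widget):
        widget.disabled = False
        Animation(opacity=1, d=0.2).start(widget)

The widget keeps its place in the widget tree; only its opacity (and input handling) changes.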
1
1
0
How can I dynamically hide and show a widget without removing it from its parent widget (such as using the "visible" property in other UI frameworks)?
Dynamically hide and show widget in KivyMD
0
0
0
550
42,859,075
2017-03-17T13:40:00.000
0
0
1
1
python,logging,distributed-computing,multiple-instances
42,872,180
1
true
0
0
I will use MySQL. This way I will have a standard tool for log analysis (MySQL Workbench), and it solves the problem of serializing log writes from multiple instances. The best option would probably be to write a handler for the standard logging module, but for the moment I'll send all messages through RabbitMQ to a service that stores them.
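For completeness, the simplest file-based alternative mentioned in the question - one log file per instance - can be done without passing a file name on the command line by embedding the process id (a sketch, not the MySQL approach the answer settled on):

    import logging
    import os

    logging.basicConfig(
        filename='worker_{}.log'.format(os.getpid()),  # unique per instance
        level=logging.INFO,
        format='%(asctime)s %(process)d %(levelname)s %(message)s',
    )
    logging.info('worker started')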
1
0
0
I have a worker application written in Python for a distributed system. There is a situation where I need to start multiple instances of this worker on a single server. Logging should be written to a file. I suspect that I cannot write to the same file from different instances. So what should I do - pass the log-file name as a command line argument to each instance? Is there a standard approach for such a situation?
If I have multiple instance of the same python application running how to perform logging into file?
1.2
0
0
303
42,864,636
2017-03-17T18:21:00.000
2
1
0
0
python,bitbucket,bitbucket-pipelines
42,899,528
1
false
0
0
You can create a deployment key in rep C and add the key as an environment variable in rep P. Then, rep P is able to checkout the code from rep C and do whatever it needs/wants to do with it. The checkout could either use a fixed branch such as “master”, or dynamically checkout a branch whose name is derived from $BITBUCKET_BRANCH in rep P.
1
4
0
We are using Bitbucket for version control and we have two repositories. One (rep C) has C++ code that we re-compile rarely, and the other one (rep P) has Python code which calls the C++ code. This is where most of the work happens. I want to set up pipelines so that when we push code in rep P, it runs all the unit tests. My problem is that the python code requires the compiled C++ binaries of rep C. Is there a way to set up BitBucket pipelines such that when we push code in rep P it compiles the code of rep C, so that the unit tests of rep P can use those binaries? Is it necessary to add the binaries and their libraries in rep P for that to happen?
How to use data that is not in the repository on BitBucket Pipelines
0.379949
0
0
66
42,865,704
2017-03-17T19:30:00.000
0
0
1
0
python,git,pip
42,867,251
1
false
0
0
Working Out the Name and Version: For each candidate item, pip needs to know the project name and version. For wheels (identified by the .whl file extension) this can be obtained from the filename, as per the Wheel spec. For local directories, or explicitly specified sdist files, the setup.py egg_info command is used to determine the project metadata. For sdists located via an index, the filename is parsed for the name and project version (this is in theory slightly less reliable than using the egg_info command, but avoids downloading and processing unnecessary numbers of files). Any URL may use the #egg=name syntax to explicitly state the project name.
1
1
0
Both of the following commands successfully install my package without error. pip install git+https://path_to_repo/[email protected] pip install git+https://path_to_repo/[email protected]#egg=repo_name What is the difference? I'm using pip 7.1.0 and 9.0.1
What is the difference between pip installing a git repo with and without #egg=
0
0
0
267
42,867,232
2017-03-17T21:16:00.000
0
1
0
0
python,qt,serial-port,pyside,qtserialport
42,908,493
2
false
0
0
I found a workaround: use the PyQt5 serial port module and build a standalone module that handles the serial communication, talking to my main application via inter-process communication (IPC) or a local network socket. That will do for now, I have no problem open-sourcing this serial communication module, and my main application stays intact.
1
0
0
I am working with PySide and trying to do asynchronous serial communication with it, but QtSerialPort is not yet available. I have used pyserial and moved the serial communication to another thread using moveToThread(), but I have to check regularly whether there is a message, so I used a QTimer to handle that every 200 ms. This solution is overkill; it would be better if Qt could send a readyRead signal every time there is data available. So the question, precisely, is: is there a ready-made module that helps with this without breaking my whole code's dependency on PySide? If there isn't, what are your tips for quickly implementing one? Thanks in advance.
asynchronous serial communication using pyside
0
0
0
1,533
42,868,546
2017-03-17T23:21:00.000
1
0
0
0
python,tensorflow,deep-learning,text-classification,text-recognition
42,870,291
1
false
0
0
To group elements on a page, like paragraphs of text and images, you can use some clustering algorithm and/or blob detection with some thresholds. You can use the Radon transform to recognize lines and detect the skew of a scanned page. I think that for character separation you will have to mess with fonts - some polynomial matching/fitting or something (this is a very wild guess for now, don't take it seriously). But a similar approach would allow you to get the character out of the line and recognize it in the same step. As for recognition, once you have a character, there is a nice trigonometric trick of comparing the angles of the character to angles stored in a database; it works great on handwriting too. I am not an expert on how page segmentation exactly works, but it seems that I am on my way to becoming one - I'm just working on a project that includes it, so give me a month and I'll be able to tell you more. :D Anyway, you should go and read the Tesseract code to see how HP and Google did it there. It should give you pretty good ideas. Good luck!
1
0
1
I am working on a text recognition project. I have built a classifier using TensorFlow to predict digits, but I would like to implement a more complex text recognition algorithm using text localization and text segmentation (separating each character), and I didn't find an implementation for those parts of the algorithm. So, do you know some algorithms/implementations/tips for using TensorFlow to localize text and do text segmentation in natural scene pictures (actually localization and segmentation of text on scoreboards in sports pictures)? Thank you very much for any help.
Text recognition and detection using TensorFlow
0.197375
0
0
2,116
42,869,367
2017-03-18T01:14:00.000
3
0
1
0
python
42,869,385
1
true
0
0
As is clearly stated in operator precedence documentation, exponents are evaluated right-to-left. The rationale is exactly what you demonstrated in your posting: there's a better way to write a left-to-right evaluation, so the programmer likely intended the other interpretation.
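A quick demonstration of the right-to-left rule (and how to force the other reading with parentheses):

    print(2 ** 2 ** 0)              # 2 ** (2 ** 0) == 2 ** 1 == 2
    print((2 ** 2) ** 0)            # forced left-to-right == 1
    print(2 ** 2 ** 2 ** 2 ** 0)    # 2 ** (2 ** (2 ** (2 ** 0))) == 16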
1
0
0
In python 2.7 when you evaluate 2**2**2**2**0 you get 16 whereas the mathematical result is 2^2^2^2^0 = 2^(2*2*2*0) = 1. Obviously the expression is being evaluated in the wrong order. I am scared of making a mistake. Is there a way to change this behaviour ? Edit : The question is wrong, see below.
Python order of evaluation of nested power operands
1.2
0
0
111
42,869,754
2017-03-18T02:18:00.000
0
0
0
0
python,windows,hyperlink,rename,file-rename
44,369,474
1
false
0
0
You cannot rename a file with the .lnk extension; it looks like Windows blocks that operation. Instead, I suggest you copy the file to the new name - in your case 0.lnk - and then remove the original file.
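A minimal sketch of that copy-then-delete approach, keeping the .lnk extension so Windows still treats the file as a shortcut (filenames are taken from the question/answer and are illustrative):

    import os
    import shutil

    shutil.copy('src.txt.lnk', '0.lnk')   # copy under the new, still-sortable name
    os.remove('src.txt.lnk')              # then drop the original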
1
1
0
I have some link files on windows, then I put them in a folder, I want to rename them as 0 1 2 ... , so windows can sort them automatically. I tried os.rename('src.txt.link', '0'), but the result is that the "0" file can not be opened. Is there an another python api to do this?
How to rename a link file on Windows with python?
0
0
0
355
42,870,882
2017-03-18T05:33:00.000
4
0
1
0
python,python-3.x,abstract-syntax-tree,pyro
42,870,898
2
false
0
0
You cannot. You will need to reimplement it from scratch, adding support for your class within it.
1
2
0
I have my custom class that represents an object. I want to make that object compatible with "ast.literal_eval()" How can I do that? I can add necessary method/code to my class if necessary.
How to make python class "ast.literal_eval()" compatible?
0.379949
0
0
430
42,871,248
2017-03-18T06:24:00.000
0
0
0
0
python,sockets,network-programming,udp
42,891,009
2
true
0
0
Ideally, you want the timeout to be equal to the exact time it takes from the moment you've sent a packet until the moment you receive the acknowledgement from the other party - this is pretty much the RTT. But it's almost impossible to know the exact ideal timeout in advance, so we have to guess. Let's consider what happens if we guess wrong. If we use a timeout which happens to be lower than the actual RTT, we'll time out before the ack has had time to arrive. This is known as a premature timeout, and it's bad - we're re-transmitting a packet even though the transmission was successful. If we use a timeout which happens to be higher than the actual RTT, it'll take us longer to identify and re-transmit a lost packet. This is also bad - we could have identified the lost packet earlier and re-transmitted it. Now, about your question: "Can it be just any arbitrary integer multiple of the round-trip-time (RTT)?" First of all, the answer is yes. You could use any positive integer, from 1 to basically infinity. Moreover, you don't even have to use an integer - what's wrong with a multiplier of x2.5? But what's important is to understand what would happen with different multipliers. If you pick a low multiplier such as 1, you'd run into quite a few premature timeouts. If you pick a large number, such as 100, you'd have many late timeouts, which will cause your transmissions to halt for long periods of time.
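A minimal stop-and-wait sender sketch showing where the multiplier comes in (the RTT estimate, address and packet contents are placeholders):

    import socket

    RTT_ESTIMATE = 0.05                 # seconds, measured or assumed elsewhere
    TIMEOUT_MULTIPLIER = 2              # the knob discussed above

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_MULTIPLIER * RTT_ESTIMATE)

    def send_and_wait(packet, addr):
        while True:
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(1024)
                return ack              # acknowledged: move on to the next packet
            except socket.timeout:
                continue                # premature timeout or real loss: retransmit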
1
0
0
How should be the timeout of the socket be estimated in a stop-and-wait protocol over UDP? Can it be just any arbitrary integer multiple of the round-trip-time (RTT)?
Setting timeout in Stop-and-Wait protocol in UDP
1.2
0
1
1,651
42,871,961
2017-03-18T08:02:00.000
0
1
0
0
php,python,mysql
42,872,603
1
false
1
0
You are probably not using the virtual Python environment. In your terminal type which python. If the output is /usr/bin/python, you need to switch to your virtual environment. Go to the directory where you created your virtualenv and in the terminal enter source bin/activate. Then use which python again to verify you are now using your virtual environment to run the server.
1
0
0
I have published my website on a Python/PHP/MySQL hosting server. They told me to create a virtualenv; after that I was able to install my required packages, like the PyPI newspaper package. My Python scripts are totally dependent on newspaper. The issue is that when I call my index.php from public_html and it calls my Python script, it shows me the following error: Traceback (most recent call last):" [1]=> string(79) " File "/home/adpnewsi/public_html/adpScripts/getImage.py", line 3, in " [2]=> string(33) " from newspaper import Article" [3]=> string(38) "ImportError: No module named newspaper" }
error while executing python script from php
0
0
0
50
42,873,222
2017-03-18T10:34:00.000
-1
0
0
1
python,json,linux,shell,automation
42,874,372
2
false
0
0
You can use the df command. It provides an option to display sizes in human-readable formats (e.g., 1K, 1M, 1G) by using -h. This is the most common command, but you can also check out du and di; di in fact provides even more info than df.
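If you want to stay in Python rather than parsing df output, a minimal sketch is to read the mount table and query each mount point with os.statvfs (Linux-specific; which fields to keep is an assumption about what the dashboard needs):

    import os

    def mounted_filesystems():
        filesystems = []
        with open('/proc/mounts') as mounts:
            for line in mounts:
                device, mountpoint, fstype = line.split()[:3]
                try:
                    st = os.statvfs(mountpoint)
                except OSError:
                    continue                      # e.g. inaccessible pseudo-filesystems
                total = st.f_blocks * st.f_frsize
                free = st.f_bavail * st.f_frsize
                filesystems.append({
                    'device': device,
                    'mount': mountpoint,
                    'type': fstype,
                    'total_bytes': total,
                    'used_bytes': total - free,
                })
        return filesystems

The resulting list of dicts can then be dumped to the JSON file with json.dump().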
1
2
0
I am planning to automate a process of cleaning file systems in Linux using a set of scripts in Shell, Python and I'll create a simple dashboard using Node.js to allow a more visual approach. I have a script in Shell which already cleans a file system in a specific server - but I have to login and then issue this command. Now I am proceeding with a dashboard in HTML/CSS/JS to visualize all servers which are having space problems. My idea is: create a Python scrip to login and get a list of filesystems and its usage and update a single JSON file, then, my dashboard uses this JSON to feed the screen. My question is how to get the list of file system in Linux and its usage?
Get a list of all mounted file systems in Linux with python
-0.099668
0
0
1,490
42,873,225
2017-03-18T10:34:00.000
1
0
0
0
python,django
42,874,336
1
true
1
0
You should create a separate module. There is no reason to have the same function implemented in multiple places; it makes your code hard to maintain. Each piece of functionality should have its own module (in this case an app), so to modify the email behaviour you will only have to modify one file/module, not email functions spread across all your apps.
1
0
0
I am creating a social networking site using Django. I will send lots of emails depending on user actions. Should I create an email app, or just create an email_functions.py file in each individual app? Which is best practice? Apps created so far: Accounts, NewsFeed, Profile, Notifications, Messaging, Privacy
I am creating a social networking site using Django - should I create an email app?
1.2
0
0
285
42,873,815
2017-03-18T11:32:00.000
1
0
0
0
android,python,version
42,873,940
2
true
0
0
Maybe store the min and max as two variables, then check whether the version is greater than or equal to the min value and less than or equal to the max value.
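A minimal sketch of that comparison; splitting the dotted version strings into integer tuples makes the range check work correctly even when the number of components differs:

    VERSION_NAMES = {
        'Ice Cream Sandwich': ('4.0', '4.0.4'),
        'Jelly Bean': ('4.1', '4.3.1'),
        'KitKat': ('4.4', '4.4.4'),
        'Lollipop': ('5.0', '5.1.1'),
        'Marshmallow': ('6.0', '6.0.1'),
        'Nougat': ('7.0', '7.1.1'),
    }

    def as_tuple(version):
        return tuple(int(part) for part in version.split('.'))

    def name_for(version):
        v = as_tuple(version)
        for name, (low, high) in VERSION_NAMES.items():
            if as_tuple(low) <= v <= as_tuple(high):
                return name
        return None

    print(name_for('4.4.2'))   # KitKat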
1
0
0
First of all, a nice day to everyone! My code needs to detect which Android version is being used by means of getprop ro.build.version.release. I do so on rooted systems using Python for Android. So far things were working, but only partially, because I hadn't taken into consideration the point updates between OS versions - I just added each new version I was testing on to a dictionary. So let's say I have this dictionary: VersionName = {'4.0':"Ice Cream Sandwich", '4.0.4':"Ice Cream Sandwich", '4.1':"Jelly Bean", '4.2':"Jelly Bean", '4.3':"Jelly Bean", '4.4':"KitKat", '5.0':"Lollipop", '5.1':'Lollipop', '6.0':"Marshmallow", '7.1.1':"Nougat"}. I would add '4.4.2':'KitKat' to cover the new version I was working on, but that's not going to scale. To overcome this I simply restructured it to record the first and the last release of each Android version: VersionName = {'Ice Cream Sandwich':['4.0', '4.0.4'], 'Jelly Bean':['4.1', '4.3.1'], 'KitKat':['4.4', '4.4.4'], 'Lollipop':['5.0', '5.1.1'], 'Marshmallow':['6.0', '6.0.1'], 'Nougat':['7.0', '7.1.1']}. The problem comes when the installed version is in between the two values. Given {'KitKat':['4.4', '4.4.4']} and device version 4.4.2, how can I detect that it's part of KitKat?
Select android os name based on version number
1.2
0
0
111
42,878,237
2017-03-18T18:27:00.000
0
0
1
0
python,visual-studio-2013
56,850,040
2
false
1
0
PTVS 2.2.2 works for Visual Studio 2013 + Python 3.6.
1
1
0
I tried to install Python Tools 2.2 for Visual Studio 2013 for Flask web development, but I keep getting an error saying "Unsupported python version - 3.6". My Python version is 3.6.0. Is Python 3.6 supported in Visual Studio 2013?
Python 3.6 supported for visual studio 2013
0
0
0
4,969
42,879,963
2017-03-18T21:14:00.000
5
0
0
0
python,session,flask,socket.io,flask-socketio
42,888,376
2
true
1
0
There are a couple of ways to go at this, depending on how you have your two servers set up. The easiest solution is to have both servers appear to the client on the same domain and port. For example, you can have www.example.com/socket.io as the root for your Socket.IO server, and any other URLs on www.example.com going to your HTTP server. To achieve this, you need to use a reverse proxy server, such as nginx. Clients do not connect directly to your servers; instead they connect to nginx on a single port, and nginx is configured to forward requests to the appropriate server depending on the URL. With the above setup both servers are exposed to the client on the same domain, so session cookies will be sent to both. If you want to have your servers appear separate to your client, then a good option for sharing session data is to switch to server-side sessions, stored in Redis, memcache, etc. You can use the Flask-Session extension to set that up. Hope this helps!
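A minimal sketch of the second option - server-side sessions in Redis via Flask-Session; both applications would be configured the same way, pointing at the same Redis instance and using the same secret key (hosts and keys here are placeholders):

    import redis
    from flask import Flask
    from flask_session import Session

    app = Flask(__name__)
    app.config['SECRET_KEY'] = 'change-me'
    app.config['SESSION_TYPE'] = 'redis'
    app.config['SESSION_REDIS'] = redis.StrictRedis(host='localhost', port=6379)
    Session(app)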
2
4
0
I have a backend with two Flask servers: one that handles all RESTful requests and one that is a Flask-SocketIO server. Is there a way I can share session variables (the logged-in user, etc.) between these two applications? They do run on different ports, if that is important. As I have understood sessions, they work via client-side session cookies, so shouldn't both of these servers have access to the information? If yes, how? And if not, is there a way to achieve the same effect?
Sharing sessions between two flask servers
1.2
0
0
2,550
42,879,963
2017-03-18T21:14:00.000
0
0
0
0
python,session,flask,socket.io,flask-socketio
62,935,664
2
false
1
0
I found that flask.session.sid = sid_from_another_domain works fine in the case of individual subdomains. I have several Flask apps with individual domain names like A.domain.com, B.domain.com, and C.domain.com. They are all based on Flask and have a Redis session manager connected to the same Redis server. I wanted to federate them so that logging in once logs into all of them, and logging out likewise. I had to save the session id in the DB together with the user information when I logged in on A, and pass it to domain B. These domains communicate using the OAuth2 protocol (I used flask_dance in this case), and B sets the received value into flask.session.sid. With that I could confirm that this implementation works fine.
2
4
0
I have a backend with two Flask servers: one that handles all RESTful requests and one that is a Flask-SocketIO server. Is there a way I can share session variables (the logged-in user, etc.) between these two applications? They do run on different ports, if that is important. As I have understood sessions, they work via client-side session cookies, so shouldn't both of these servers have access to the information? If yes, how? And if not, is there a way to achieve the same effect?
Sharing sessions between two flask servers
0
0
0
2,550
42,881,071
2017-03-18T23:24:00.000
2
0
1
0
python,multithreading,performance,time,benchmarking
42,881,144
1
false
0
0
No. CPU time is the time spent by all CPUs on the task. So if cpu1 spent 2 minutes on a task and cpu2 spent 3 minutes on the same task, the CPU time will be 2 + 3 = 5 minutes. So in multithreaded programs we would expect the CPU time to usually be more than the wall time. Now you might ask why the same holds for your single-threaded program. The answer will probably be that even if your code does not explicitly use parallelism, a library you use probably does.
1
1
0
The following are the results of profiling with %time in IPython. Single-threaded: CPU time: user 6m44s, sys 1.78s, total 6m46s; Wall time: 5m19s. 4 threads: CPU time: user 10m12s, sys 2.83s, total 10m15s; Wall time: 4m14s. Shouldn't CPU time be less for multi-threaded code? Also, how can CPU time be more than wall time, given that wall time is the total elapsed time? Could you please clarify this terminology.
how to analyze cpu time while benchmarking in python (multiprocessing)?
0.379949
0
0
141
42,881,154
2017-03-18T23:34:00.000
0
0
1
0
python,pip,installation,importerror
43,602,549
2
false
0
0
I met the same problem. Fixed it by installing pybrain from GitHub: pip install https://github.com/pybrain/pybrain/archive/0.3.3.zip
1
1
1
I have no experience with Python. Just trying to run a program I downloaded from GitHub. Had a lot of problems trying to run it. After adding PATH, and installing a few modules(?), I got stuck: When I type in "python filename.py", I get the error: ImportError: cannot import name: 'SequentialDataSet' I got the same error with different names. Just typed in "pip install (name)" to fix it. Got the same error again with another name, installed, but now I'm stuck with the error: Could not find a version that satisfies the requirement SequentialDataSet (from versions: ). No matching distribution found for SequentialDataSet. Let me know if there is any info you need Thanks
Cannot import name -> No matching distribution when trying to install
0
0
0
2,259
42,882,604
2017-03-19T03:29:00.000
-2
0
0
0
python,algorithm,python-3.x,euclidean-distance
52,922,702
3
false
0
0
I had the same issue before, and it worked for me once I normalized the values. So try to normalize the data before calculating the distance.
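Separately from normalization, the pairwise computation itself doesn't need a Python-level double loop; SciPy's pdist evaluates all M·(M−1)/2 distances in compiled code, which for M ≈ 10,000 is usually fast enough to get the exact mean and minimum (the array sizes below are illustrative, and the condensed result takes a few hundred MB of RAM):

    import numpy as np
    from scipy.spatial.distance import pdist

    X = np.random.rand(10000, 1000).astype(np.float32)   # stand-in for the real data
    d = pdist(X, metric='euclidean')                      # condensed pairwise distances
    print(d.mean(), d.min())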
1
15
1
I have a MxN array, where M is the number of observations and N is the dimensionality of each vector. From this array of vectors, I need to calculate the mean and minimum euclidean distance between the vectors. In my mind, this requires me to calculate MC2 distances, which is an O(nmin(k, n-k)) algorithm. My M is ~10,000 and my N is ~1,000, and this computation takes ~45 seconds. Is there a more efficient way to compute the mean and min distances? Perhaps a probabilistic method? I don't need it to be exact, just close.
Efficient calculation of euclidean distance
-0.132549
0
0
1,648
42,883,603
2017-03-19T06:28:00.000
1
0
0
0
python,deep-learning,caffe,convolution,pycaffe
42,885,380
2
false
0
0
SigmoidWithLoss layer outputs a single number per batch representing the loss w.r.t the ground truth labels. On the other hand, Sigmoid layer outputs a probability value for each input in the batch. This output does not require the ground truth labels to be computed. If you are looking for the probability per input, you should be looking at the output of the Sigmoid layer
1
2
1
I fine-tuned VGG-16 for binary classification and used a SigmoidLoss layer as the loss function. To test the model, I wrote a Python file in which I load the model with an image and get the output using: out = net.forward() My doubt is whether I should take the output from the Sigmoid or the SigmoidLoss layer, and what the difference between the two layers is. My output should actually be the probability of the input image being class 1.
PyCaffe output layer for testing binary classification model
0.099668
0
0
329
42,884,375
2017-03-19T08:17:00.000
0
0
0
0
python,image,itk,elastix,simpleitk
42,889,299
1
false
0
1
If I understand your question correctly, you want the impossible: to have Hausdorff distance measure as if the image were segmented, but without segmenting it because the segmentation is hard.
1
1
1
I have registered two images, let's say fixed and moving, and after registration I want to measure the overlap ratio etc. SimpleITK has overlap measure filters, but to use overlap_measures_filter.Execute(fixed, moving) and hausdorff_measures_filter.Execute() we need to segment the image and provide labels as input. However, the image is hard to segment using just thresholding or connected component filters. The question is then: how can we evaluate registration accuracy using SimpleITK with just the fixed image and the registered image (without segmenting and labeling the image)?
Image Registration accuracy evaluation (Hausdorff distance) using SimpleITK without segmenting the image
0
0
0
363
42,886,286
2017-03-19T11:56:00.000
1
0
0
0
python,opencv,anaconda,conda
62,689,216
6
false
0
0
The question is old but I thought to update the answer with the latest information. My Anaconda version is 2019.10 and build channel is py_37_0 . I used pip install opencv-python==3.4.2.17 and pip install opencv-contrib-python==3.4.2.17. Now they are also visible as installed packages in Anaconda navigator and I am able to use patented methods like SIFT etc.
2
10
1
Can anyone tell me the commands to get the contrib module for Anaconda? I need that module for matches = flann.knnMatch(des1,des2,k=2) to run correctly. The error thrown is cv2.error: ......\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate Also I am using the Anaconda OpenCV version 3, and strictly don't want to switch to lower versions. P.S. the option suggested in many places of editing the file cv2.cpp is not available with Anaconda.
how to get opencv_contrib module in anaconda
0.033321
0
0
29,093
42,886,286
2017-03-19T11:56:00.000
14
0
0
0
python,opencv,anaconda,conda
44,329,928
6
false
0
0
I would recommend installing pip in your anaconda environment then just doing: pip install opencv-contrib-python. This comes will opencv and opencv-contrib.
2
10
1
Can anyone tell me the commands to get the contrib module for Anaconda? I need that module for matches = flann.knnMatch(des1,des2,k=2) to run correctly. The error thrown is cv2.error: ......\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate Also I am using the Anaconda OpenCV version 3, and strictly don't want to switch to lower versions. P.S. the option suggested in many places of editing the file cv2.cpp is not available with Anaconda.
how to get opencv_contrib module in anaconda
1
0
0
29,093
42,888,269
2017-03-19T15:16:00.000
2
0
0
0
python,python-3.x,sqlite,sql-delete,delete-row
42,889,578
1
false
0
0
A cursor is a read-only object, and cursor rows are not necessarily related to table rows. So this is not possible. And you must not change the table while iterating over it. SQLite computes result rows on demand, so deleting the current row could break the computation of the next row.
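In practice that means doing the deletion separately from the iteration; here is a small sketch with a hypothetical table and a placeholder condition (not from the question):

import sqlite3

conn = sqlite3.connect("example.db")                                   # hypothetical database
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS my_table (status TEXT)")       # hypothetical table

# Option 1: express the condition in SQL and delete in a single statement.
cur.execute("DELETE FROM my_table WHERE status = ?", ("obsolete",))

# Option 2: collect rowids while iterating, then delete afterwards.
to_delete = [row[0] for row in cur.execute("SELECT rowid, status FROM my_table")
             if row[1] is None]                                        # placeholder condition
cur.executemany("DELETE FROM my_table WHERE rowid = ?", [(rid,) for rid in to_delete])

conn.commit()
conn.close()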
1
0
0
I understand you can do DELETE FROM table WHERE condition, but I was wondering if there was a more elegant way? Since I'm iterating over every row with c.execute('SELECT * FROM {tn}'.format(tn=table_name1)), the cursor is already on the row I want to delete.
While iterating over the rows in an SQLite table, is it possible to delete the cursor's row?
0.379949
1
0
346
42,890,105
2017-03-19T17:58:00.000
0
0
0
0
python,numpy,matrix,tensorflow,broadcast
42,890,197
1
false
0
0
OK, I figured it out. tf.matrix_diag() does the trick...
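For anyone else landing here, a minimal TensorFlow 1.x sketch of that call:

import tensorflow as tf

A = tf.constant([[1, 2], [3, 4], [5, 6]])
B = tf.matrix_diag(A)          # shape (3, 2, 2): one diagonal matrix per row of A

with tf.Session() as sess:
    print(sess.run(B))
# [[[1 0]
#   [0 2]]
#  [[3 0]
#   [0 4]]
#  [[5 0]
#   [0 6]]]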
1
0
1
Given A = [[1,2],[3,4],[5,6]]. How to use tf.diag() to construct a 3d tensor where each stack is a 2d diagonal matrix using the values from A? So the output should be B = [[[1,0],[0,2]],[[3,0],[0,4]],[[5,0],[0,6]]]. I want to use these as my Gaussian covariance matrices.
construct 3d diagonal tensor using 2d tensor
0
0
0
153
42,891,919
2017-03-19T20:32:00.000
1
0
1
0
python-3.x,google-contacts-api
52,516,469
2
false
0
0
For me I had to install it like pip install git+https://github.com/dvska/gdata-python3 (without the egg), since the package itself contains a src dir; otherwise import gdata would fail. (Python 3.6.5 in a virtual env)
2
1
0
I'm trying to work with the Google contacts API using Python 3.5, this presents an issue because the gdata library that is supposed to be used is not up to date for use with Python 3.5. I can use oAuth2 to grab the contact data in JSON and use that in my project, but part of the application is also adding a contact into the users contact list. I cannot find any documentation on this part, besides using the Gdata library, something I cannot do. The majority of project requires Python 3 so, switching to Python 2 would just not be something I could easily do. Is there any further documentation or a work around using the gdata library with Python 3? I'm actually very surprised that the contacts API seems so thinly supported on Python. If anyone has any further information it would be much appreciated.
Python 3.5 support for Google-Contacts V3 API
0.099668
0
1
715
42,891,919
2017-03-19T20:32:00.000
0
0
1
0
python-3.x,google-contacts-api
44,453,995
2
false
0
0
GData Py3k version: pip install -e git+https://github.com/dvska/gdata-python3#egg=gdata
2
1
0
I'm trying to work with the Google contacts API using Python 3.5, this presents an issue because the gdata library that is supposed to be used is not up to date for use with Python 3.5. I can use oAuth2 to grab the contact data in JSON and use that in my project, but part of the application is also adding a contact into the users contact list. I cannot find any documentation on this part, besides using the Gdata library, something I cannot do. The majority of project requires Python 3 so, switching to Python 2 would just not be something I could easily do. Is there any further documentation or a work around using the gdata library with Python 3? I'm actually very surprised that the contacts API seems so thinly supported on Python. If anyone has any further information it would be much appreciated.
Python 3.5 support for Google-Contacts V3 API
0
0
1
715
42,892,350
2017-03-19T21:15:00.000
1
0
1
0
python,python-2.7,comparison,string-comparison,comparison-operators
42,892,389
1
true
0
0
Strings of the same type are ordered naively, with lower byte values or code points ordered before higher byte values or code points.
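A couple of lines in the interpreter make the rule concrete:

print(ord(' '), ord('A'), ord('a'))   # 32 65 97
print(' a' < 'a')                     # True: ' ' (32) sorts before 'a' (97)
print('abc' > 'ABCDEFG')              # True: decided at the first character, 'a' (97) > 'A' (65)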
1
0
0
I've so far discovered that in Python: [space] < 0-9 < A-Z < a-z when ordering strings. But why is it that '[space] a' < 'a'?  And why is it that 'abc' > 'ABCDEFG'? How are strings ordered in Python? Is there a flowchart that will help me understand this process?
How does Python order strings?
1.2
0
0
61
42,895,292
2017-03-20T03:19:00.000
1
0
1
0
python,arrays,multithreading,numpy,memcpy
57,453,505
1
false
0
0
If you are certain that the types/memory layout of both arrays are identical, this might give you a speedup: memoryview(A)[:] = memoryview(B) This should be using memcpy directly and skips any checks for numpy broadcasting or type conversion rules.
1
2
1
Suppose we have two large numpy arrays of the same data type and shape, of size on the order of GB's. What is the fastest way to copy all the values from one into the other? When I do this using normal notation, e.g. A[:] = B, I see exactly one core on the computer at maximum effort doing the copy for several seconds, while the others are idle. When I launch multiple workers using multiprocessing and have them each copy a distinct slice into the destination array, such that all the data is copied, using multiple workers is faster. This is true regardless of whether the destination array is a shared memory array or one that becomes local to the worker. I can get a 5-10x speedup in some tests on a machine with many cores. As I add more workers, the speed does eventually level off and even slow down, so I think this achieves being memory-performance bound. I'm not suggesting using multiprocessing for this problem; it was merely to demonstrate the possibility of better hardware utilization. Does there exist a python interface to some multi-threaded C/C++ memcpy tool? Update (03 May 2017) When it is possible, using multiple python processes to move data can give major speedup. I have a scenario in which I already have several small shared memory buffers getting written to by worker processes. Whenever one fills up, the master process collects this data and copies it into a master buffer. But it is much faster to have the master only select the location in the master buffer, and assign a recording worker to actually do the copying (from a large set of recording processes standing by). On my particular computer, several GB can be moved in a small fraction of a second by concurrent workers, as opposed to several seconds by a single process. Still, this sort of setup is not always (or even usually?) possible, so it would be great to have a single python process able to drop into a multi-threaded memcpy routine...
faster numpy array copy; multi-threaded memcpy?
0.197375
0
0
1,300
42,899,257
2017-03-20T08:54:00.000
1
0
1
0
python-2.7
42,899,339
1
true
0
0
I think that in both cases your check counts single-digit numbers and only positive values (so negative two-digit numbers are dropped), which is exactly the problem: test 1) 1 + 80 = 81, test 2) 47 + 1 + 90 + 45 = 183.
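A corrected version of the function in the question, under the reading that "two-digit" should also cover negative values such as -91, could look like this:

def solution(A):
    # a number counts as "two-digit" when its absolute value lies between 10 and 99
    return sum(i for i in A if 10 <= abs(i) <= 99)

print(solution([1, 1000, 80, -91]))      # -11  (80 + -91)
print(solution([47, 1900, 1, 90, 45]))   # 182  (47 + 90 + 45)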
1
0
0
function that, given an array A consisting of N integers, returns the sum of all two-digit numbers. def solution(A): # write your code in Python 2.7 sum = 0 for i in A: if i in range(0,1000): sum = sum+i return sum A = [47,1900,1,90,45] Why would I get 183 instead of 182? Please assist. Running solution... Compilation successful. Example test: [1, 1000, 80, -91] WRONG ANSWER (got 81 expected -11) Example test: [47, 1900, 1, 90, 45] WRONG ANSWER (got 183 expected 182) Detected some errors.
# function that, given an array A consisting of N integers, returns the sum of all two-digit numbers.
1.2
0
0
2,274
42,901,363
2017-03-20T10:35:00.000
1
0
1
0
python,bdd,scenarios,python-behave
42,931,021
2
true
0
0
Calling scenarios from scenarios is not supported by Gherkin and thus not possible. What you can do is call a step implementation from another step. Calling steps from another step is, however, an anti-pattern and not a good idea; it will lead you down a bad path. What you want to do is call a helper method from both step implementations, i.e. move the desired functionality from a step into a common helper method and use that functionality from both steps, as in the sketch below.
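A small illustrative sketch of that pattern; the step texts and the helper are made up, not taken from the question:

# features/steps/common_steps.py
from behave import given, when

def create_user(context, name):
    # Shared helper: the real work lives here, not inside any single step.
    if not hasattr(context, "users"):
        context.users = []
    context.users.append(name)

@given('a registered user named "{name}"')
def step_registered_user(context, name):
    create_user(context, name)

@when('the admin adds a user named "{name}"')
def step_admin_adds_user(context, name):
    create_user(context, name)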
1
0
0
In Behave, for python How to call a scenario from other scenarios present in separate feature files? i.e. In Feature1.feature file Scenario1 Feature2.scenario2 Feature3.scenario3
How to call a scenario from other scenario present in separate feature files? i.e. Feature1.feature Scenario1 Feature2>>scenario2
1.2
0
0
796
42,903,669
2017-03-20T12:30:00.000
1
0
1
0
python,maya,autodesk,pymel
42,905,764
1
false
1
0
I figured it out by looking at the script editor. It's within the runtime module. So for anyone else with this problem, the command is: pymel.core.runtime.RenderIntoNewWindow()
1
0
0
How can I use pymel, or just python in general, to render the current frame with whatever settings are given? I've looked into pymel.core.rendering, but all I can find are render specific commands. I tried the basic cmds.render() but it didn't do anything. I basically want the same functionality as the "Render Current Frame"-Button at the top of the UI - that it renders the currently active view with whatever the settings are. I hope you can help me!
"Render Current Frame" in Maya with Python
0.197375
0
0
748
42,909,952
2017-03-20T17:18:00.000
0
1
0
0
java,python,hadoop,cucumber,cucumber-jvm
43,516,507
3
false
0
0
To use cucumber to test desktop applications you can use specflow which uses a framework in visual studio called teststack.white. Just google on cucumber specflow, teststack.white, etc and you should be able to get on track
2
0
1
I'm currently part of a team working on a Hadoop application, parts of which will use Spark, and parts of which will use Java or Python (for instance, we can't use Sqoop or any other ingest tools included with Hadoop and will be implementing our own version of this). I'm just a data scientist so I'm really only familiar with the Spark portion, so apologies for the lack of detail or if this question just sucks in general - I just know that the engineering team needs both Java and Python support. I have been asked to look into using Cucumber (or any other BDD framework) for acceptance testing our app front to back once we're further along. I can't find any blogs, codebases, or other references where cucumber is being used in a polyglot app, and barely any where Hadoop is being used. Would it be possible to test our app using Cucumber or any other existing BDD framework? We already plan to do unit and integration testing via JUnit/PyUnit/etc as well.
Can I use Cucumber to test an application that uses more than one language?
0
0
0
994
42,909,952
2017-03-20T17:18:00.000
0
1
0
0
java,python,hadoop,cucumber,cucumber-jvm
42,931,219
3
false
0
0
The feature files would be written using Gherkin. Gherkin looks the same if you are using Java or Python. So in theory, you are able to execute the same specifications from both Java end Python. This would, however, not make any sense. It would just be a way to implement the same behaviour in two different languages and therefore two different places. The only result would be duplication and miserable developers. What you can do is to use BDD and Gherkin to drive the implementation. But different behaviour in different languages. This will lead you to use two different sets of features. That is possible and probably a good idea given the context you describe.
2
0
1
I'm currently part of a team working on a Hadoop application, parts of which will use Spark, and parts of which will use Java or Python (for instance, we can't use Sqoop or any other ingest tools included with Hadoop and will be implementing our own version of this). I'm just a data scientist so I'm really only familiar with the Spark portion, so apologies for the lack of detail or if this question just sucks in general - I just know that the engineering team needs both Java and Python support. I have been asked to look into using Cucumber (or any other BDD framework) for acceptance testing our app front to back once we're further along. I can't find any blogs, codebases, or other references where cucumber is being used in a polyglot app, and barely any where Hadoop is being used. Would it be possible to test our app using Cucumber or any other existing BDD framework? We already plan to do unit and integration testing via JUnit/PyUnit/etc as well.
Can I use Cucumber to test an application that uses more than one language?
0
0
0
994
42,910,531
2017-03-20T17:47:00.000
1
0
0
0
python,physics,scientific-computing
43,048,476
1
true
0
1
In fact there is a working example in the file examples/pybullet/constraint.py of pybullet.
1
0
0
createConstraint method in pybullet allows to constrain one joint position w.r.t. another. Is there a way to use this function in order to prevent an object moving e.g. outside of a sphere? So far I am checking the object position every timestep and change the position manually in case of violated constraints.
How to use pybullet createConstraint for constraining world frame position of an object
1.2
0
0
1,314
42,915,249
2017-03-20T22:38:00.000
1
0
0
0
python,date,format,regression
42,915,731
1
true
0
0
You will need to convert the string describing the week into an integer you can use as the abscissa (x-coordinate, or independent variable). Pick a "zero point", such as FY2012 WK 52, so that FY2013 WK 01 translates to the integer 1. I don't think the datetime module handles this conversion; you might have to code the translation yourself: parse the string into year and week integers, and compute the abscissa from that: 52*(year-2013) + week You might also want to keep a dictionary of those translations, as well as a reverse list (week => FY_week) for output labelling. Does that move you toward a solution you can implement?
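A small sketch of that translation, assuming 52-week fiscal years and FY2013 WK 0 as the zero point:

import re

def week_to_ordinal(label, base_year=2013):
    # turn a label like 'FY2013 WK 2' into an integer abscissa
    year, week = map(int, re.match(r"FY(\d{4})\s+WK\s+(\d+)", label).groups())
    return 52 * (year - base_year) + week

print(week_to_ordinal("FY2013 WK 2"))   # 2
print(week_to_ordinal("FY2017 WK 2"))   # 210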
1
0
1
My current data set is by Fiscal Week. It is in this format "FY2013 WK 2". How can I format it, so that I can use a regression model on it and predict a value for let's say "FY2017 WK 2". Should I treat Fiscal Week as a categorical value and use dmatrices?
Week Number in python for regression
1.2
0
0
94
42,916,551
2017-03-21T00:49:00.000
0
0
0
1
python,file,windows-7
42,988,360
2
false
0
0
User letmaik was able to help me with this. It turned out that the error was caused by my version of pip being too old. The command "python -m pip install -U pip" did not work to upgrade pip; "easy_install -U pip" was required. This allowed rawpy to be installed successfully.
1
1
0
I was trying to download a Python wrapper called rawpy on my Windows machine. I used the command "pip install rawpy". I have already looked at many other SO threads but could find no solution. The exact error is : IO Error: [Errno 2] No such file or directory: 'external/LibRawcmake/CMakeLists.txt' The only dependency for the wrapper is numpy, which I successfully installed. I would like to know how to fix this. Quite new to Python, so any information would help.
python - IO Error [Errno 2] No such file or directory when downloading package
0
0
0
1,408
42,919,339
2017-03-21T05:50:00.000
4
1
0
1
javascript,python,c,webassembly
42,920,349
1
true
0
0
If you are actually implementing an interpreter then you don't need to generate machine code at runtime, so everything can stay within Wasm. What you actually seem to have in mind is a just-in-time compiler. For that, you indeed have to call back into the embedder (i.e., JavaScript in the browser) and create and compile new Wasm modules there on the fly, and link them into the running program -- e.g., by adding new functions to an existing table. The synchronous compilation/instantiation interface exists for this use case. In future versions it may be possible to invoke the compilation API directly from within Wasm, but for now going through JavaScript is the intended approach.
1
3
0
When thinking about how an interpreter works: parse code -> produce machine/byte code -> allocate executable memory -> run. How can this be done in wasm? Thanks!
Compile a JIT based lang to Webassembly
1.2
0
0
763
42,927,141
2017-03-21T12:26:00.000
0
0
0
0
python,hive,package,udf
43,289,954
1
false
0
0
I recently started looking into this approach and I feel the problem is not about getting all the 'hive nodes' to have sklearn on them (as you mentioned above); I feel it is rather a compatibility issue than an 'sklearn node availability' one. I think sklearn is not (yet) designed to run as a parallel algorithm such that a large amount of data can be processed in a short time. What I'm trying to do, as an approach, is to connect Python to Hive through 'pyhive' (for example) and implement the necessary sklearn libraries/calls within that code. The rough assumption here is that this 'sklearn-hive-python' code will run on each node and deal with the data at the 'map-reduce' level. I cannot say this is the right solution or correct approach (yet), but this is what I can conclude after searching for some time.
1
0
1
I know how to create a Hive UDF with transform and using, but I can't use sklearn because not all the nodes in the Hive cluster have sklearn. I have an anaconda2.tar.gz with sklearn. What should I do?
How to create an udf for hive using python with 3rd party package like sklearn?
0
0
0
314
42,931,450
2017-03-21T15:29:00.000
2
0
1
0
python,regex,python-2.7
42,934,795
1
true
0
0
' '.join(re.findall('-|_|abc|\d+KK', 'abc3KK-_')) would work for your example. It does not use re.sub but still uses a regex. Since there is not much information about the kind of strings you want to handle, I don't know if it fits your needs.
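If re.sub is preferred, one alternative is to wrap each matched token in spaces and then collapse the doubled spaces; the token list below is the same guess as above and may need extending for other inputs:

import re

s = "abc3KK-_"
spaced = re.sub(r"(-|_|abc|\d+KK)", r" \1 ", s)
spaced = re.sub(r"\s+", " ", spaced)
print(repr(spaced))   # ' abc 3KK - _ ' with leading and trailing spaces preserved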
1
2
0
How to add spaces before & after a set of expressions shown below: "-", "_", "abc", "%dKK" (%d means an integer here). For example, "abc3KK-_" will be split as " abc 3KK - _ ". Thanks.
How to add spaces before & after a set of expressions using re.sub() in Python?
1.2
0
0
129
42,931,469
2017-03-21T15:30:00.000
0
0
0
0
python,matlab,scikit-learn,svm
42,931,849
1
false
0
0
I guess you understand how SVM works, so what I would do is train the model again in Python just on the support vectors you found, rather than on all the original training data. The result should remain the same as if you had trained it on the full data, since the support vectors are the "interesting" vectors in the data that sit on the boundaries.
1
1
1
I have a SVM model trained in MATLAB (using 6 features) for which I have: Support Vectors [337 x 6] Alpha [337 x 1] Bias Kernel Function: @rbf_kernel Kernel Function Args = 0.9001 GroupNames [781 x 1] Support Vector Indices [337 x 1] Scale Data containing: shift [1 x 6] scale factor [1 x 6] These above are all data that I am able to load in python. Now I would like to use this model in python without retraining to perform classification in python. In particular I would like to create a SVM model in python from the support vector generated in MATLAB Is it possible? How? Any help would be very appreciated! I can't retrain it in python because I don't have the training data (and labels) anymore.
Use SVM model trained in Matlab for classification in python
0
0
0
519
42,931,794
2017-03-21T15:43:00.000
0
0
0
0
python,openmdao
42,932,214
1
true
0
0
Depending on your setup, you can raise an error inside the component that will kill the run. Then you just change the input and start up the next run. Alternately, modify your wrapper for the subsequent code so that if it sees a NaN it skips running and just reports a garbage number that's easily identifiable.
1
0
1
I am using OpenMDAO 1.7.3 for an optimization problem on a map. My parameters are the coordinates on this map. The first thing I do is interpolating the height at this location from a map in one component. Then some more complex calculations follow in other components. If OpenMDAO chooses a location outside the boundaries of the map I will get a height of NaN. I already know that the rest of there is no additional information to be gained from this optimization step. How can I make OpenMDAO move on to the next evaluation point as soon before doing the more complex calculations? In my case the other calculations (in an external program) will even fail if they encounter a NaN, so I have to check the value before calling them in each of the components and assign NaN outputs for each of them. Is there a better way to do that?
How can I stop OpenMDAO evaluating at a given location early
1.2
0
0
55
42,932,916
2017-03-21T16:27:00.000
0
0
0
0
python,http,urllib,sony,sony-camera-api
43,143,860
1
false
0
1
I don't know about the range header, but it will still not allow you to take more pictures than your download speed allows (unless you have some intervals larger than 2.5 seconds now and then). Maybe you can reduce the image resolution to a size that fits into the 2.5 sec interval? Or (just some thinking outside of the box :-) use 2 QX1's alternating, so you get a 5 second interval for each...
1
0
0
I'm currently using the sony QX1 for wireless transfers for large images. The camera is being triggered over the USB port. Pictures from the camera are being transferred with URLLib to a raspberry pi. (I can't use the api to trigger the camera. It has to be from this external source.) The camera is triggered around every 2.5 seconds. Through timing testing it seems like I'm able to get the larger picture back to the pi at ~ 3.2 seconds per image. I've noticed that when the camera is triggered my transfer is terminated. I'm assuming this has to do with the embedded design of the camera itself and there isn't a way to get around this but please correct me if I'm wrong! Does the camera support the range header? Basically I grab the image size from the header. I'm trying to grab the beginning X bytes until the camera triggers again then grab the next X bytes until I get the entire image. Thanks for the help and let me know if I need to give a deeper explanation of what is going on here.
Python wireless transfers with SonyQX1
0
0
0
108
42,935,980
2017-03-21T19:00:00.000
0
0
1
0
python,pyodbc,canopy
42,963,908
1
false
0
0
For the record: the attempted import was in a different Python installation. It is never good, and usually impossible, to use a package which was installed into one Python installation, in another Python installation.
1
0
0
I need help with the pyodbc Python module. I installed it via Canopy package management, but when I try to import it, I get an error (no module named pyodbc). Why? Here's the output from my Python interpreter: import pyodbc Traceback (most recent call last): File "", line 1, in import pyodbc ImportError: No module named 'pyodbc
No Module After Install Package via Canopy Package Management
0
1
0
62
42,937,469
2017-03-21T20:24:00.000
0
1
0
0
twitter,python-twitter
44,587,156
1
true
0
0
Use: api.GetSearch(raw_query='q=to%3ArealDonaldTrump')
1
0
0
Is there a way to get a list of all tweets sent to a twitter user? I know I can get all tweets sent by the user using api.GetUserTimeline(screen_name='realDonaldTrump'), but is there a way to retrieve tweets to that user?
Get all tweets to user python-twitter
1.2
0
1
397
42,940,941
2017-03-22T01:04:00.000
1
0
0
1
macos,kivy,python-3.4
46,702,178
1
false
0
1
Just had this issue, and was able to fix it following the directions on the kivy mac OS X install page, with one modification as follows: $ brew install pkg-config sdl2 sdl2_image sdl2_ttf sdl2_mixer gstreamer $ pip3 install Cython==0.25.2 $ pip3 install kivy pip3 is my reference to pip for Python 3.6 as I have two different versions of python on my system. May just be pip install for you. Hope this helps!
1
1
0
So I am trying to install kivy on my mac. From their instructions page, I am on step 2, and have to enter the command $ USE_OSX_FRAMEWORKS=0 pip install kivy. However, when I put this in the terminal, I get the error error: command '/usr/bin/clang' failed with exit status 1, and as a result Failed building wheel for kivy. Does anyone know how to address this issue?
Trying to install kivy for python on mac os 10.12
0.197375
0
0
1,854
42,944,116
2017-03-22T06:20:00.000
0
0
0
0
python,sql-server,database-connection,pyodbc
42,947,078
1
true
0
0
As the installation error showed, installing Visual C++ 9.0 solves the problem, because setup.py tries to compile some C++ libraries while installing the plugin. I think Cygwin C++ will also work, judging by the contents of setup.py.
1
1
0
I downloaded the pyodbc module as a zip and installed it manually using the command python setup.py install. Although I can find the folder inside the Python directory which I pasted, while importing I am getting the error: ImportError: No module named pyodbc I am trying to use to this to connect with MS SQL Server. Help
ImportError: No module named pyodbc
1.2
1
0
8,261
42,948,547
2017-03-22T10:12:00.000
20
1
0
1
python,celery,celery-task
43,895,350
2
false
0
0
Funny that this question scrolled by. We just switched from eventlet to gevent. Eventlet caused hanging broker connections which ultimately stalled the workers. General tips: Use a higher concurrency if you're I/O bound; I would start with 25, check the CPU load and tweak from there, aiming for 99.9% CPU usage for the process. You might want to use --without-gossip and --without-mingle if your workforce grows. Don't use RabbitMQ as your result backend (redis ftw!), but RabbitMQ is our first choice when it comes to a broker (the amqp emulation on redis and the hacky async-redis solution of celery are smelly and caused a lot of grief in our past). More advanced options to tune your celery workers: pin each worker process to one core to avoid the overhead of moving processes around (taskset is your friend); if one worker isn't always working, consider core-sharing with one or two other processes, and use nice if one process has priority.
1
13
0
I have 3 remote workers, each one running with the default pool (prefork) and a single task. A single task takes 2 to 5 minutes to complete, as it runs many different tools and inserts the results into ELK. Worker command: celery -A project worker -l info Which pool class should I use to make processing faster? Is there any other method to improve performance?
Which pool class should i use prefork, eventlet or gevent in celery?
1
0
0
6,858
42,952,516
2017-03-22T13:06:00.000
0
0
1
0
python,regex
42,952,724
2
false
0
0
You can try using the negated versions (\W, \D) which match any non-word or non-digit characters.
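One way to grab every "special" (non-word, non-space) character without listing them all, sketched on a made-up string:

import re

text = "price: $15.99 (20% off)!"
specials = re.findall(r"[^\w\s]", text)   # anything that is neither a word character nor whitespace
print(specials)                           # [':', '$', '.', '(', '%', ')', '!']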
1
0
0
I need to write a regular expression in Python that will capture some text which could possibly include any special character (like !@#$%^). Is there a character class similar to [\w] or [\d] that will capture any special character? I could write down all the special characters in my regex but it would end up looking unreadable. Any help appreciated.
Regular Expression - character class for special characters
0
0
0
920
42,955,649
2017-03-22T15:11:00.000
1
0
1
0
python,dictionary,generator
42,956,082
3
false
0
0
Assuming the problem you are facing is accessing key-value structured data from a csv file, you have 3 options: Load the entire data into a dictionary, copying it into RAM as a whole, and then have fast, constant access time. This is what you said you want to avoid. Searching through the data line-by-line every time you want to access data by a key. This does not have any memory overhead, but needs to scan over the entire document each time, having linear access time. Use or copy the data into some database engine (or any key-value storage), which supports disk-based indexing, allowing for constant time access while not requiring to load the data into memory first.
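For option 3, the standard library's shelve module is often the lightest-weight choice; here is a small sketch with made-up labels and records:

import shelve

# build the disk-backed mapping once, e.g. while reading the spreadsheet rows
with shelve.open("companies.db") as store:
    store["LBL001"] = {"name": "Acme Ltd", "city": "Berlin"}
    store["LBL002"] = {"name": "Globex", "city": "Oslo"}

# later: look companies up by label without holding everything in RAM
with shelve.open("companies.db") as store:
    print(store["LBL001"]["name"])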
1
0
0
I have a big list of company information in an excel spreadsheet. I need to bring the company info into my program to process. Each company has a unique label which is used for accessing the companies. I can create a dictionary using the labels as the keys and the company info as the values, such as {label1: company1, label2: company2, ...}. By doing it this way, when the dictionary is created, it eats up too much memory. Is it possible to create a generator that can be used like a dictionary?
Can a python generator work like a dictionary?
0.066568
0
0
2,405
42,956,237
2017-03-22T15:34:00.000
2
1
1
0
python,dictionary,mosquitto
42,965,278
2
false
0
0
I would recommend you read a basic MQTT tutorial if you have not done that already. That will help you decide what your topics and data should be like. To get you started, here is an example of how you could publish and subscribe for your use case. The publisher could iterate through the keys in the dictionary and publish data to topic "keys/$key_name" with the message being the value for that key in the dictionary. The subscriber could subscribe to topic "keys/#". This way the subscriber will get all the keys and the corresponding data and reconstruct the dictionary. There are many more ways you could publish data depending on the nature of data in your dictionary.
2
1
0
Basically I have two Raspberry Pis and I want one to publish data obtained from a dictionary in a Python file and the other to subscribe to this dictionary data. Apologies if this is a very bland question, but I can't find any info on the internet regarding this.
How does one send data from a dictionary over an mqtt broker
0.197375
0
0
1,588
42,956,237
2017-03-22T15:34:00.000
1
1
1
0
python,dictionary,mosquitto
43,082,905
2
false
0
0
If you want to send a dictionary directly from a python script on Host A to a python script on Host B there is a way. Convert your dictionary into a string. Send the string as a payload from Host A to the broker. Subscribe to the broker with Host B and receive the payload. Evaluate the string with ast.literal_eval() which will convert it back into a dictionary. If that explanation is unclear I could post some example code. I would probably use JSON or multiple topics instead but the above procedure will work.
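Since the answer offers example code, here is one possible sketch with paho-mqtt; the broker address and topic are placeholders, and in practice the publishing and subscribing parts would live in separate scripts on the two hosts:

import ast
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    data = ast.literal_eval(msg.payload.decode())   # string -> dict again
    print("received:", data)

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect("broker.example.local")           # placeholder broker address
subscriber.subscribe("pi/data")
subscriber.loop_start()

publisher = mqtt.Client()
publisher.connect("broker.example.local")
publisher.publish("pi/data", str({"temp": 21.5, "unit": "C"}))   # dict -> string payload
# json.dumps/json.loads would be the safer pair if the values are JSON-friendly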
2
1
0
Basically I have two Raspberry Pis and I want one to publish data obtained from a dictionary in a Python file and the other to subscribe to this dictionary data. Apologies if this is a very bland question, but I can't find any info on the internet regarding this.
How does one send data from a dictionary over an mqtt broker
0.099668
0
0
1,588
42,958,619
2017-03-22T17:21:00.000
0
0
1
0
python,python-2.7
42,960,513
1
true
0
0
If you want to have 5 suggestions and the user only provides a username he likes to use you could do the following. Just start a counter from 1 to 100, append that number to the username and check if it is in the database. If not then save that suggestion in a list. If that list has 5 entries, show them to your user to choose from.
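A plain-Python sketch of that idea; the database check is reduced to a set here, whereas in Django it would be something like User.objects.filter(username=candidate).exists():

def suggest_usernames(base, taken, how_many=5):
    suggestions = []
    for n in range(1, 101):
        candidate = "{}{}".format(base, n)
        if candidate not in taken:
            suggestions.append(candidate)
        if len(suggestions) == how_many:
            break
    return suggestions

print(suggest_usernames("alice", {"alice1", "alice3"}))
# ['alice2', 'alice4', 'alice5', 'alice6', 'alice7']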
1
1
0
In a Python/Django website I maintain, users keep unique usernames at sign up. While registering, if a username isn't available, they have to guess another one. Sometimes users have to contend with multiple "username already exists" messages before they're able to sign up. I want to ameliorate this issue via suggesting a username based upon the already used username they currently put in. Can someone illustrate a neat Python solution for this? I haven't tried anything yet. But I was thinking what would work is taking the current nickname the user wants, and then somehow doing an ordinal based diff with 4-5 neighboring nicknames from the DB (these I can easily query). The diffs that are found can then somehow be used to guess an available nickname for the user, which is also sufficiently based on the one they already wanted. Something like that. Being a neophyte, I'm still trying to wrap my head around a viable solution.
Using diff of two strings to suggest a unique string (Python 2.7)
1.2
0
0
87
42,961,484
2017-03-22T19:57:00.000
1
0
0
1
python,debugging,wing-ide
42,989,924
1
true
0
0
Following the comments above, I have copied the wingdbstub.py file (from debugger packages of Wing ide) to the folder I am currently running my project on and used 'import wingdbstub' & initiated the debug process. All went well, I can now debug modules.
1
2
0
I am running a project that makes calls to C++ framework functions and python modules, I can run it on Wing IDE with no problems (personal version). However, I can not debug on the run. It only lets me debug a certain file, which is pretty useless. I make a call to a shell script to run the framework function via a python file (init) and that function calls a python module that I want to debug. I have had the same problem with pyCharm. I have spent quite a while trying to figure this out, something that should be very basic. How can I fix this problem and debug on the go???
Wing IDE not stopping at break points
1.2
0
0
406
42,962,995
2017-03-22T21:29:00.000
1
1
1
0
python,function,spyder
42,965,330
2
false
0
0
Thank you Alex, yeah the problem was that I was not working in the same directory as the package I made, so I opened up the command prompt in Spyder, went into the correct directory and was able to import/install it with no issues!
1
0
0
Hello everyone I am learning Python. While using Spyder I finished writing my own function (test.py) I saved the script in a new folder. In Spyder I made sure to change my working directory to where the test.py is located as well as the PYTHON PATH. Now when I try to import test it says in the console that there are no modules named 'test'. Any help will be appreciated thank you
Python : Importing my custom script in Spyder to use with another script
0.099668
0
0
1,277
42,963,066
2017-03-22T21:34:00.000
0
1
0
0
python,svn,github,auto-update
44,626,580
1
true
0
0
As per the comments I eventually managed to get this to work using urllib.urlretrieve to download a zip and then use zipfile to unzip and overwrite.
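A rough outline of that approach; the URL and paths are placeholders, and on Python 2 urlretrieve lives in urllib instead of urllib.request:

import os
import zipfile
from urllib.request import urlretrieve

url = "https://example.com/myapp-latest.zip"               # placeholder download URL
target_dir = os.path.dirname(os.path.abspath(__file__))    # directory the app runs from

archive, _ = urlretrieve(url)          # download the zip to a temporary file
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target_dir)          # unzip over the existing code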
1
0
0
I have an application written in Python which runs on RPi. I want it to auto update itself by downloading the latest code into the directory where it is installed...which may vary from user to user. It will also need to run a SQL script on occasion. What is the best approach for this? How do I ensure the code downloads to the right directory? GitHub tutorials I have seen are focused on updating the central repository. I want the reverse to occur. Is git the best tool or would SVN or a simple HTTP download from my site be better?
How to auto update code via python
1.2
0
0
493
42,964,564
2017-03-22T23:29:00.000
1
0
1
0
python,package,anaconda,uninstallation
42,964,783
1
true
0
0
OK. I think I figured it out. I need to add --offline to the command!
1
2
0
I am trying to remove a package in anaconda (python) on a Linux server. The system cannot connect to the outside because of firewalls. So I need to do it locally. I tried this: #conda remove package-name but it wants to connect to the outside. How can I uninstall a package locally? If I just delete it, would it be uninstalled properly?
How do I uninstall a package locally in anaconda?
1.2
0
0
2,147
42,967,242
2017-03-23T04:26:00.000
2
0
0
1
python,flask,multiprocessing,uwsgi
42,967,483
1
true
1
0
I think there are two routes you could go down. Have an endpoint "/set_es_cluster" that gets hit by your SNS POST request. This endpoint then sets the key "active_es_cluster", which is read on every ES request by your other processes. The downside of this is that on each ES request you need to do a redis lookup first. Have a separate process that gets the POST request specifically (I assume the clusters are not changing often). The purpose of this process is to receive the POST request and just have uWSGI gracefully restart your other flask processes. The advantages of the second option: Don't have to hit redis on every request Let uWSGI handle the restarts for you (which it does well) You already set up the config pulling at runtime anyway, so it should "just work" with your existing application
1
4
0
We have a Flask app running behind uWSGI with 4 processes. It's an API which serves data from one of our two Elasticsearch clusters. On app bootstrap each process pulls config from an external DB to check which ES cluster is active and connects to it. Every now and then a POST request comes in (from the AWS SNS service) which informs all the clients to switch ES cluster. That triggers the same function as on bootstrap - pull config from the DB and reconnect to the active ES cluster. It works well running as a single process, but when we have more than one process running, only one of them gets updated (the one which picks up the POST request)... while the other processes are still connected to the inactive cluster. Pulling config on each request to make sure that the ES cluster we use is active would be too slow. I'm thinking of installing redis locally and storing the active_es_cluster there... any other ideas?
Python + uwsgi - multiprocessing and shared app state
1.2
0
0
504
42,967,472
2017-03-23T04:51:00.000
3
0
0
0
python,apache-spark,jar
42,968,126
3
false
0
0
sparksession._jsc.addJar does the job.
1
2
1
I am using pyspark from a notebook and I do not handle the creation of the SparkSession. I need to load a jar containing some functions I would like to use while processing my rdds. This is something which you can easily do using --jars which I cannot do in my particular case. Is there a way to access the spark scala context and call the addJar method? I tried to use the JavaGateway (sparksession._jvm...) but have not been successful so far. Any idea? Thanks Guillaume
Adding a jar file to pyspark after context is created
0.197375
0
0
4,578
42,968,895
2017-03-23T06:38:00.000
1
1
0
0
python,playframework
42,971,068
1
true
0
0
There is no difference in testing REST servers, you can use Postman with Play as with node.js or your PHP server
1
0
0
I need to test some API created in Python Play Framework. While using PHP I used Postman, what is the similar tool to be used to check Play API's?
How to test Play Framework APIs and perform Unit Testing over it?
1.2
0
0
91
42,969,345
2017-03-23T07:07:00.000
1
0
0
1
python,push-notification,command-line-interface,amazon-sns,google-cloud-pubsub
43,002,548
1
false
1
0
It is not possible for CLI applications directly. The workarounds are: 1) Have a web API and register the endpoint with SNS. SNS will push notifications to the web API. From the web API, somehow pass that to the CLI app, using either RPC calls or some other mechanism. 2) Have SNS push notifications to AWS SQS and then poll SQS from your CLI.
1
0
0
I am developing a python application which will majorly be used as a command line interface. I want to push notifications from Amazon SNS or Google PubSub to the python application. Is this possible? If yes, what is the best solution to this? If no, is there a workaround? Thank you for the help.
SNS/PubSub notifications on a Python CLI Application?
0.197375
0
0
80
42,972,563
2017-03-23T09:52:00.000
-1
0
0
0
python,xml,linux,fedora
42,973,365
2
false
0
0
Is it possible to use SAX with untangle? I mean that I would load the file with SAX and read it with untangle, because I have a lot of code written using untangle that I have been developing for a long time, and I don't want to restart from scratch. Thanks
1
0
0
Can someone please tell me if there is any way to parse an XML file (size = 600M) with untangle/Python? In fact I use untangle.parse(file.xml) and I got the error message: Process finished with exit code 137. Is there any way to parse this file in blocks, for example, or another option used by the function untangle.parse(), or a specific Linux configuration...? Thanks
Parsing Huge XML Files With 600M
-0.099668
0
1
428
42,972,951
2017-03-23T10:09:00.000
4
0
0
0
python,django,django-models
42,973,432
1
true
1
0
so this is what I did and it worked for me: list_of_models = sorted(list_of_models , key = lambda x: x.object.time)
1
1
0
I have a list of Django objects, all from the same class and those objects are not saved in the DB. I want to sort the list (again, a list of models) by a specific field in that class. How can I do that? Thanks!
sort a list of django models by specific fields
1.2
0
0
1,595
42,973,795
2017-03-23T10:45:00.000
0
0
0
0
python,django,database,csv
42,976,604
2
false
1
0
Django admin (and Django in general) does not provide any editing functionality for CSV files. Your only option is to let users download the uploaded file, change it and then upload it again in place of the old one. There's also no third-party tool to do this in the admin; you would probably have to write your own.
1
2
0
I want to know, how one can edit an uploaded CSV file to the database, using Django admin panel and then save the changes. Details: So I have uploaded a csv file to the database and I want my users to go the Django admin panel, log in with their username and password and then edit the uploaded CSV file and then save the changes. P.S: I am a beginner in Django so any help will be much much appreciated. Thanks :)
How to edit uploaded csv files using django?
0
0
0
1,138