Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
31,619,813 | 2015-07-24T21:08:00.000 | 1 | 0 | 1 | 0 | ipython-notebook,autosave,jupyter | 54,931,733 | 18 | false | 0 | 0 | For me the issue turned out to be that the file path was too long. Renaming the folder resolved the issue. | 12 | 48 | 0 | I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours.
The error says: "Last Checkpoint: 3 hours ago Autosave Failed!"
I try to manually File >> Save and Checkpoint, and nothing changes. Help!
Next to my Python 2 kernel name, there is a yellow box that says "forbidden" instead of "edit". It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked. | iPython Notebook/Jupyter autosave failed | 0.011111 | 0 | 0 | 51,059 |
31,619,813 | 2015-07-24T21:08:00.000 | 2 | 0 | 1 | 0 | ipython-notebook,autosave,jupyter | 54,058,557 | 18 | false | 0 | 0 | I faced this same issue and finally found the cause on my own: it was all about the RANSOMWARE PROTECTION on my Windows 10 Pro. Under this protection, third-party apps are not allowed to overwrite any files or folders.
Open the Windows Security app by clicking the shield icon in the task bar or searching the start menu for Defender.
Click the Virus & threat protection tile (or the shield icon on the left menu bar) and then click Ransomware protection.
Set the switch for Controlled folder access to "OFF",
and happy Jupyter! | 12 | 48 | 0 | I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours.
The error says: "Last Checkpoint: 3 hours ago Autosave Failed!"
I try to manually File >> Save and Checkpoint, and nothing changes. Help!
Next to my Python 2 kernel name, there is a yellow box that says "forbidden" instead of "edit". It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked. | iPython Notebook/Jupyter autosave failed | 0.022219 | 0 | 0 | 51,059 |
31,619,813 | 2015-07-24T21:08:00.000 | 1 | 0 | 1 | 0 | ipython-notebook,autosave,jupyter | 52,612,779 | 18 | false | 0 | 0 | I had a separate problem.
Looking in my jupyter notebook console window, I saw the message:
[I 09:36:14.717 NotebookApp] Malformed HTTP message from ::1: Content-Length too long
It made me think maybe there was some huge amount of text in one of my cells' outputs or something, so I started clearing the outputs.
When I cleared one cell with a Plotly chart, it worked again. Maybe there was some problem with Plotly. | 12 | 48 | 0 | I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours.
The error says: "Last Checkpoint: 3 hours ago Autosave Failed!"
I try to manually File >> Save and Checkpoint, and nothing changes. Help!
Next to my Python 2 kernel name, there is a yellow box that says "forbidden" instead of "edit". It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked. | iPython Notebook/Jupyter autosave failed | 0.011111 | 0 | 0 | 51,059 |
31,619,813 | 2015-07-24T21:08:00.000 | 1 | 0 | 1 | 0 | ipython-notebook,autosave,jupyter | 51,133,069 | 18 | false | 0 | 0 | I know this question is very old, but I encountered the same issue recently and found a simpler workaround. Note that in my case I don't know what caused the issue, but it was certainly not multiple users, since the notebook is run internally on an offline computer (no outside access whatsoever).
In order to resume the autosaves, I just had to re-open the notebook in another tab and manually copy-paste all the unsaved cells. Burdensome, but it fixed the problem. (Also note that I was able to keep working on the notebook, as long as I didn't want to save or restart the kernel.) | 12 | 48 | 0 | I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours.
The error says: "Last Checkpoint: 3 hours ago Autosave Failed!"
I try to manually File >> Save and Checkpoint, and nothing changes. Help!
Next to my Python 2 kernel name, there is a yellow box that says "forbidden" instead of "edit". It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked. | iPython Notebook/Jupyter autosave failed | 0.011111 | 0 | 0 | 51,059 |
31,619,813 | 2015-07-24T21:08:00.000 | 0 | 0 | 1 | 0 | ipython-notebook,autosave,jupyter | 58,499,719 | 18 | false | 0 | 0 | For me, the problem was that I had hidden my ".ipynb_checkpoints" folder. Just make the folder visible again. | 12 | 48 | 0 | I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours.
The error says: "Last Checkpoint: 3 hours ago Autosave Failed!"
I try to manually File >> Save and Checkpoint, and nothing changes. Help!
Next to my Python 2 kernel name, there is a yellow box that says "forbidden" instead of "edit". It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked. | iPython Notebook/Jupyter autosave failed | 0 | 0 | 0 | 51,059 |
31,619,813 | 2015-07-24T21:08:00.000 | 6 | 0 | 1 | 0 | ipython-notebook,autosave,jupyter | 49,019,800 | 18 | false | 0 | 0 | I had the same problem while running iPython 3/Jupyter locally with multiple notebooks open. I solved the problem by:
1. Refreshing the dashboard tab (localhost:8888/tree#).
2. Running 'jupyter notebook list' in the terminal.
3. Copying the token from the terminal into the password box on the refreshed dashboard. | 12 | 48 | 0 | I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours.
The error says: "Last Checkpoint: 3 hours ago Autosave Failed!"
I try to manually File >> Save and Checkpoint, and nothing changes. Help!
Next to my Python 2 kernel name, there is a yellow box that says "forbidden" instead of "edit". It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked. | iPython Notebook/Jupyter autosave failed | 1 | 0 | 0 | 51,059 |
31,619,813 | 2015-07-24T21:08:00.000 | 57 | 0 | 1 | 0 | ipython-notebook,autosave,jupyter | 36,214,720 | 18 | false | 0 | 0 | I had the same problem and found out I had been logged out of Jupyter. I discovered this when I went to the Jupyter home page and it asked me to enter a password. After I entered the password I could save my notebook (it was still running in the other tab). | 12 | 48 | 0 | I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours.
The error says: "Last Checkpoint: 3 hours ago Autosave Failed!"
I try to manually File >> Save and Checkpoint, and nothing changes. Help!
Next to my Python 2 kernel name, there is a yellow box that says "forbidden" instead of "edit". It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked. | iPython Notebook/Jupyter autosave failed | 1 | 0 | 0 | 51,059 |
31,619,813 | 2015-07-24T21:08:00.000 | 1 | 0 | 1 | 0 | ipython-notebook,autosave,jupyter | 53,213,275 | 18 | false | 0 | 0 | I had the same issue; I tried these methods but unfortunately they didn't work.
At last I found a method that did:
Copy your filename.ipynb file manually to the same directory.
Rename it with a filename of about 5 characters, then open it in Jupyter Notebook, and it can be saved successfully.
After that you can rename it to any name you want! | 12 | 48 | 0 | I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours.
The error says: "Last Checkpoint: 3 hours ago Autosave Failed!"
I try to manually File >> Save and Checkpoint, and nothing changes. Help!
Next to my Python 2 kernel name, there is a yellow box that says "forbidden" instead of "edit". It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked. | iPython Notebook/Jupyter autosave failed | 0.011111 | 0 | 0 | 51,059 |
31,621,373 | 2015-07-24T23:46:00.000 | 0 | 0 | 0 | 1 | api,python-2.7,google-analytics,insert,http-error | 31,866,981 | 2 | false | 1 | 0 | The problem was I was using a service account when I should have been using an installed application. I did not need a service account since I had access using my own credentials. That did the trick for me! | 1 | 2 | 0 | I am trying to add users to my Google Analytics account through the API but the code yields this error:
googleapiclient.errors.HttpError: https://www.googleapis.com/analytics/v3/management/accounts/**accountID**/entityUserLinks?alt=json returned "Insufficient Permission">
I have Admin rights to this account - MANAGE USERS. I can add or delete users through the Google Analytics Interface but not through the API. I have also added the service account email to GA as a user. Scope is set to analytics.manage.users
This is the code snippet I am using in my add_user function which has the same code as that provided in the API documentation.
def add_user(service):
    try:
        service.management().accountUserLinks().insert(
            accountId='XXXXX',
            body={
                'permissions': {
                    'local': [
                        'EDIT',
                    ]
                },
                'userRef': {
                    'email': '[email protected]'
                }
            }
        ).execute()
    except TypeError as error:
        # Handle errors in constructing a query.
        print 'There was an error in constructing your query : %s' % error
    return None
Any help will be appreciated. Thank you!! | Google Analytics Management API - Insert method - Insufficient permissions HTTP 403 | 0 | 1 | 0 | 480 |
31,621,414 | 2015-07-24T23:53:00.000 | 2 | 0 | 1 | 0 | python,ipython,ipython-notebook,ipython-magic | 31,643,131 | 5 | false | 0 | 0 | If your data is in a single variable then have a try at saving it to a file using the %save magic in one notebook and then reading it back in another.
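For example (a minimal sketch — the input line number and file name are illustrative, and %save writes a short comment header you may need to skip over):
# notebook A: suppose input line 12 evaluated to the data literal, e.g. [1, 2, 3]
%save -f shared_data.py 12
# notebook B: read the file back and bind the data to a name
text = open('shared_data.py').read()
exec('data = ' + text.splitlines()[-1])  # prepend a variable definition, then exec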
The one difficulty is that the text file will contain the data but no variable definition, so I usually concatenate it with a variable definition and then exec the result. | 1 | 43 | 0 | If I have several IPython notebooks running on the same server, is there any way to share data between them? For example, importing a variable from another notebook? Thanks!
31,622,256 | 2015-07-25T02:27:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,rollback | 31,622,554 | 1 | true | 0 | 0 | In the Windows command prompt, invoke appcfg.py with your Python executable, e.g.:
cd C:\Program Files (x86)\Google\google_appengine (i.e. your GAE directory)
C:\Python27\python.exe appcfg.py rollback [deploy dir] | 1 | 0 | 0 | I am running on Windows 8 and I was recently uploading an application using the standard Google App Engine launcher but it froze midway, and when I closed it, reopened it, and tried to upload again, it would say a transaction is already in progress for this application and that I would need to roll back the application using appcfg.py.
I looked all over the internet and I understand what to execute, however I don't know how/where.
I tried doing it in the standard Windows command prompt but it just opened the appcfg.py file for me. I tried doing it in the Python console but it said it is not a valid function. I also tried to run the application locally and access the interactive console, but it just said the same thing as the Python console attempt.
What do I do? | How to rollback a python application | 1.2 | 0 | 0 | 181 |
31,622,348 | 2015-07-25T02:44:00.000 | -1 | 0 | 1 | 0 | python,virtualenv,python-wheel | 31,926,376 | 2 | false | 0 | 0 | Probably one of the items listed in your requirements.txt needs the wheel package installed to work.
Although it's a common (and good) practice to list the fundamental Python packages of a project inside requirements.txt, not all of the packages that will eventually be installed are listed there.
Your pip log may help you pinpoint which package required wheel to be installed. | 1 | 3 | 0 | I just re-installed python & virtualenv after accidentally getting everything messy. I noticed that my requirements.txt in my newly created environment automatically includes wheel==0.24.0. Is this normal, and what is it? I understand that virtualenv should create new environments without any system-installed packages. | Python virtualenv requirements.txt by default includes wheel | -0.099668 | 0 | 0 | 1,659 |
31,622,397 | 2015-07-25T02:52:00.000 | -1 | 0 | 0 | 0 | python-3.x,fonts,tkinter | 31,623,769 | 1 | false | 0 | 1 | I'm not certain, but I believe you can use ttk to change the global font size. | 1 | 0 | 0 | I would like to change the font size globally in a Python3/Tkinter program. I've managed to do it in the buttons and labels in the main window, but I also have a messagebox in the program that only displays the default font size. | Changing all font sizes in Tkinter/Python3 | -0.197375 | 0 | 0 | 1,191 |
31,622,553 | 2015-07-25T03:20:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,django-views | 37,597,365 | 2 | false | 1 | 0 | I had this problem as well.
A common reason for getting a full local path is when you use a function to change the path in the upload_to field. You just have to make sure that function returns a relative path, not a full path.
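For instance (a hedged sketch — the folder, field, and helper names are illustrative; Django passes the callable the model instance and the original filename, and expects back a path relative to MEDIA_ROOT):
import os
from django.conf import settings
from django.db import models

def upload_path(instance, filename):
    rel = os.path.join('images', filename)                 # relative path: this is what gets stored
    abs_dir = os.path.join(settings.MEDIA_ROOT, 'images')  # full path only for creating the directory
    if not os.path.isdir(abs_dir):
        os.makedirs(abs_dir)
    return rel                                             # never return the absolute path

class Photo(models.Model):
    image = models.ImageField(upload_to=upload_path)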
In my case my function would create the necessary dirs if none existed. I needed to use the full MEDIA_ROOT path for that, but had to make sure it returned a relative path value for upload_to. | 1 | 0 | 0 | Image.url of ImageField returns a local path in view.py. I've checked MEDIA_ROOT and MEDIA_URL in settings.py but image.url (moreover, image.path and image.name) always returns the local full path. I can access the file with the correct URL and the file is saved at the correct path.
Please help me! | image.url returns local full path in view.py Django 1.6 | 0 | 0 | 0 | 1,037 |
31,625,552 | 2015-07-25T10:36:00.000 | 2 | 1 | 0 | 0 | php,android,python,apache,raspberry-pi | 31,626,041 | 2 | false | 0 | 0 | I'll just add that to use Python with Apache you'll have to enable mod_wsgi or mod_python or just write a standard CGI or FastCGI script.
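For instance, a minimal WSGI entry point that mod_wsgi could serve might look like this (a hedged sketch; the route and sensor helper are illustrative — mod_wsgi looks for a callable named "application" by default):
from flask import Flask

application = Flask(__name__)  # mod_wsgi picks up the module-level name "application"

@application.route('/sensor')
def read_sensor():
    return str(get_sensor_value())  # hypothetical helper that reads the Pi's GPIO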
I think bottle/flask support all of these methods.
There is a compiled JVM somewhere with support for hardware floating point, and that version should work a little better.
But, well, Raspberry Pi and Java aren't exactly friends.
I admit, it is a little odd, because we know that Java works perfectly fine on ARMs (e.g. Android), but the current state for the RasPi is as it is. | 1 | 0 | 0 | What code should I use in the Raspberry Pi, using Python, to implement this? I have done the client part. I have installed Apache/PHP/MySQL on my Pi. What framework should I use here? Besides that, what part am I missing? I need to read/write to sensors through my Pi. I have made an API hosted on a site which I have successfully tested in my app. Now I need to implement this on my Pi remotely.
P.S. I don't know anything about my Pi. Should I write all my code in Python (I prefer Java), and what libraries should I be using? | I want to make a remote webserver using raspberry pi controlled by android app | 0.197375 | 0 | 0 | 80 |
31,626,044 | 2015-07-25T11:34:00.000 | 0 | 0 | 1 | 0 | raspberry-pi,ipython,ipython-notebook | 36,211,430 | 1 | false | 0 | 0 | On my Raspberry Pi the .json files are located in /home/<username>/.config/ipython/profile_default/security/ | 1 | 1 | 0 | I'm trying to set up a remote kernel on my Raspberry Pi right now, using IPython as my remote kernel and trying to connect to this kernel using Spyder.
Using Spyder to create local kernels and use them to interpret code works perfectly fine. Starting a kernel on my Raspberry Pi also works well using ipython kernel.
As described by many other users before, the .JSON file for the connection details I have to hand to Spyder is located at /home/<username>/.ipython/profile_default/security/kernel-<id>.json. Unfortunately I can't find this .JSON file on my Raspberry Pi, but if I try to connect to an existing kernel on my local PC I can find all local kernels.
What is the problem with the kernels on my Raspberry Pi? Why aren't they saved as .JSON files?
Another question: I accidentally created another profile in IPython; how can I remove this profile? | Can't find IPython Kernel .JSON file | 0 | 0 | 0 | 809 |
31,630,636 | 2015-07-25T20:09:00.000 | 3 | 0 | 0 | 0 | python,http | 31,630,829 | 3 | false | 0 | 0 | Is it possible for an HTTP request to be that big?
Yes, it's possible, but it's not recommended, and you could have compatibility issues depending on your web server configuration. If you need to pass large amounts of data you shouldn't use GET.
If so, how do I fix the OptionParser to handle this input?
It appears that OptionParser has set its own limit well above what is considered a practical implementation. I think the only way to 'fix' this is to get the Python source code and modify it to meet your requirements. Alternatively, write your own parser.
UPDATE: I possibly mis-interpreted the question and the comment from Padraic below may well be correct. If you have hit an OS limit for command line argument size then it is not an OptionParser issue but something much more fundamental to your system design, which means you may have to rethink your solution. This also possibly explains why you are attempting to use GET in your application (so you can pass it on the command line?) | 3 | 3 | 0 | I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing and one of the options is an HTTP request.
The input in this specific case for the HTTP request was 269KB in size.
So my Python program fails with "Argument list too long" (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.)
Is it possible for an HTTP request to be that big?
If so, how do I fix the OptionParser to handle this input? | Is an HTTP GET request of size 269KB allowed? | 0.197375 | 0 | 1 | 82 |
31,630,636 | 2015-07-25T20:09:00.000 | 0 | 0 | 0 | 0 | python,http | 31,630,668 | 3 | true | 0 | 0 | The typical limit is 8KB, but it can vary (it can even be less). | 3 | 3 | 0 | I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing and one of the options is an HTTP request.
The input in this specific case for the HTTP request was 269KB in size.
So my Python program fails with "Argument list too long" (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.)
Is it possible for an HTTP request to be that big?
If so, how do I fix the OptionParser to handle this input? | Is an HTTP GET request of size 269KB allowed? | 1.2 | 0 | 1 | 82 |
31,630,636 | 2015-07-25T20:09:00.000 | 2 | 0 | 0 | 0 | python,http | 31,630,678 | 3 | false | 0 | 0 | A GET request, unlike a POST request, contains all its information in the URL itself. This means you have a URL of 269KB, which is extremely long.
Although there is no theoretical limit on the size allowed, many servers don't allow URLs over a couple of KB long and should return a 414 response code in that case. A safe limit is 2KB, although most modern software will allow a bit more than that.
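For instance, with the requests library the payload travels in the request body instead of the URL (a hedged sketch; the endpoint and payload are illustrative):
import requests

big_payload = 'x' * (269 * 1024)  # ~269KB of data
resp = requests.post('http://example.com/api', data=big_payload)
print(resp.status_code)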
But still, for 269KB, use POST (or PUT if that is semantically more correct), which can contain larger chunks of data as the content of a request rather than the URL. | 3 | 3 | 0 | I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing and one of the options is an HTTP request.
The input in this specific case for the HTTP request was 269KB in size.
So my Python program fails with "Argument list too long" (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.)
Is it possible for an HTTP request to be that big?
If so, how do I fix the OptionParser to handle this input? | Is an HTTP GET request of size 269KB allowed? | 0.132549 | 0 | 1 | 82 |
31,636,454 | 2015-07-26T11:30:00.000 | 2 | 0 | 0 | 1 | python,celery,python-asyncio | 43,289,761 | 2 | false | 0 | 0 | I implemented the on_finish function of the celery worker to publish a message to Redis.
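A minimal sketch of both sides (the channel name, payload, and Redis URL are illustrative; this assumes the aioredis 1.x-style API, and on python3.4 the async/await syntax below becomes @asyncio.coroutine with yield from):
# worker side: publish from a task hook with plain redis-py
import json, redis

def publish_result(task_id, result):
    redis.StrictRedis().publish('task_results', json.dumps({'id': task_id, 'result': result}))

# main application side: subscribe instead of polling
import aioredis

async def listen(handle):
    conn = await aioredis.create_redis('redis://localhost')
    channel, = await conn.subscribe('task_results')
    while await channel.wait_message():
        handle(await channel.get(encoding='utf-8'))  # e.g. process results, schedule more workers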
Then the main app uses aioredis to subscribe to the channel; once notified, the result is ready. | 1 | 2 | 0 | I have a Python application which offloads a number of processing tasks to a set of celery workers. The main application then has to wait for results from these workers. As and when a result is available from a worker, the main application will process the results and schedule more workers to be executed.
I would like the main application to run in a non-blocking fashion. As of now, I have a polling function to see whether results are available from any of the workers.
I am looking at the possibility of using asyncio to get notified about result availability so that I can avoid the polling. But I could not find any information on how to do this.
Any pointers on this will be highly appreciated.
PS: I know with gevent I can avoid the polling. However, I am on python3.4 and hence would prefer to avoid gevent and use asyncio. | Collecting results from celery worker with asyncio | 0.197375 | 0 | 0 | 2,413 |
31,636,560 | 2015-07-26T11:43:00.000 | 0 | 0 | 0 | 0 | android,python,linux,kivy | 31,639,040 | 1 | false | 0 | 1 | Since it's a pure Python module, and it's on PyPI, you can just add it to the requirements in buildozer; it doesn't need a specific recipe. | 1 | 0 | 0 | I am trying to import pygoogle with import pygoogle in my main.py, but when I make an app via buildozer, in logcat I find "no module named pygoogle". I have installed pygoogle in my Kali Linux OS, then went to usr/local/lib/python2.7 and copied the pygoogle folder to my home folder where my main.py is... then I tried again but still faced the same error. Then I ran distribute.sh -m 'kivy pygoogle'; everything went well but I was still facing the same error. Then I went to buildozer and added pygoogle to the requirements, but pygoogle is not shown as an available module by buildozer. And not with pygoogle only; I am facing the same problem with all libraries... does anyone know a solution? I am not a Linux expert. | Facing issues while importing third party (pygoogle) library in main.py for making android apk via buildozer | 0 | 0 | 0 | 49 |
31,639,596 | 2015-07-26T16:59:00.000 | 3 | 0 | 0 | 0 | python,django,heroku,scrapy | 31,639,931 | 1 | false | 1 | 0 | It's impossible for apps in the same project to be on different Python versions; the server has to run on one or the other. But it would be possible to have two projects, with your models in a shared app that is installed in both projects, and the configuration pointing to the same database. | 1 | 1 | 0 | I have been building a project on Ubuntu 15.04 with Python 3.4 and Django 1.7. Now I want to use scrapy's djangoitem, but that only runs on Python 2.7. It's easy enough to have separate virtualenvs to do the developing in, but how can I put these different apps together in a single project, not only on my local machine, but later on Heroku?
If it was just content, I could move the scrapy items over once the work was done, but the idea of djangoitem is that it uses the Django model. Does that mean the Django model has to be on Python 2.7 also in order for djangoitem to access it? Even that is not insurmountable if I then port it to Python 3, but it isn't very DRY, especially when I have to run scrapy for frequent updates. Is there a more direct solution, such as a way to have one app be 2.7 and another be 3.4 in the same project? Thanks. | multiple versions of django/python in a single project | 0.53705 | 0 | 0 | 67 |
31,642,940 | 2015-07-26T23:19:00.000 | 1 | 0 | 1 | 0 | python,regex,string | 53,998,316 | 6 | false | 0 | 0 | You could split the string and check to see if it contains at least one first/last name that is correct. | 1 | 22 | 1 | I want to find out if two strings are almost similar. For example, a string like 'Mohan Mehta' should match 'Mohan Mehte' and vice versa. Another example: a string like 'Umesh Gupta' should match 'Umash Gupte'.
Basically one string is correct and the other one is a misspelling of it. All my strings are names of people.
Any suggestions on how to achieve this?
The solution does not have to be 100 percent effective. | Finding if two strings are almost similar | 0.033321 | 0 | 0 | 17,280 |
31,643,100 | 2015-07-26T23:41:00.000 | 1 | 0 | 0 | 0 | python,r,opencv,computational-geometry,spatstat | 31,643,427 | 1 | false | 0 | 0 | Do you want it to be a spatstat study region (of class owin) since you have the spatstat tag on there? In that case you can just use owin(poly=x) where x is your nx2 matrix (after loading the spatstat library of course). The rows in this matrix should contain the vertices of the polygon in the order that you want them connected (that's how R knows which point to connect with which). See help(owin) for more details. | 1 | 1 | 1 | Please allow me to start the question with the simplest task: if I have four points which are the vertices of a rectangle, stored in a 4x2 matrix, how can I turn this into a rectangular window? (Please do not use any special command specific to drawing rectangles, as the rectangle is raised just to represent a general class of regular geometrical object.)
To make things more complicated, suppose I have an nx2 matrix; how can I connect all of the n points so that it becomes a polygon? Note the object is not necessarily convex. I think the main difficulty is: how can R know which point should be connected with which?
The reason I am asking is that I was doing some image processing on a fish, and I managed to get the body line of the fish by finding the contour with OpenCV in Python and outputting it as an nx2 csv file. When I read the csv file into R and tried to use SpatialPolygons in the sp package to turn this into a polygon, some very unexpected behavior happened: there seemed to be a break somewhere in the middle such that the polygon got cut in half, i.e. the boundary of the polygon was not connected. Is there any way I can fix this problem?
Thank you.
Edit: Someone kindly pointed out that this is possibly a duplicate of another question: drawing polygons in R. However, the solution to that question relies on the shape being drawn being convex, in which case it makes sense to order by angles; here the shape is not necessarily convex, so it will not work. | Spatstat: Given a list of 2-d points, how to connect them into a polygon, and further make it the study region? | 0.197375 | 0 | 0 | 422 |
31,644,298 | 2015-07-27T02:54:00.000 | 1 | 1 | 0 | 1 | python,path,terminal,pycharm | 43,356,885 | 2 | false | 0 | 0 | I came across this error too, in PhpStorm. To fix it, simply navigate to:
Preferences > Tools > Terminal
Under 'Application Settings', click [...] at the end of the Shell path field and open the .bash profile.
This should grey out the Shell path to '/bin/bash'.
You can now launch the Terminal.
java.io.IOException:Exec_tty error:Unkown reason
I replaced the default value with the string returned by echo $PATH which is:
/usr/local/cuda-7.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin
I've been trying to google what the default value is that goes here, but I have not been able to find it. Can someone help me resolve this?
Notes:
The specific setting is found in Settings > Tools > Terminal > Shell path | Default values for PyCharm Terminal? | 0.099668 | 0 | 0 | 1,300 |
31,644,298 | 2015-07-27T02:54:00.000 | 1 | 1 | 0 | 1 | python,path,terminal,pycharm | 31,661,642 | 2 | true | 0 | 0 | The default value is the value of the $SHELL environment variable, which is normally /bin/bash. | 2 | 2 | 0 | I accidentally changed the "Shell path" specified in the Terminal setting for PyCharm and now I am getting this error:
java.io.IOException:Exec_tty error:Unkown reason
I replaced the default value with the string returned by echo $PATH which is:
/usr/local/cuda-7.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin
I've been trying to google what the default value is that goes here, but I have not been able to find it. Can someone help me resolve this?
Notes:
The specific setting is found in Settings > Tools > Terminal > Shell path | Default values for PyCharm Terminal? | 1.2 | 0 | 0 | 1,300 |
31,644,834 | 2015-07-27T04:08:00.000 | 0 | 1 | 1 | 0 | python-2.7,module | 36,118,903 | 4 | false | 0 | 0 | To make this work consistently, you can put the module into the lib folder inside the python folder, then you can import it regardless of what directory you are in | 2 | 0 | 0 | I know how to import a module I have created if the script I am working on is in the same directory. I would like to know how to set it up so I can import this module from anywhere. For example, I would like to open up Python in the command line and type "import my_module" and have it work regardless of which directory I am in. | Importing a python module I have created | 0 | 0 | 0 | 31 |
31,644,834 | 2015-07-27T04:08:00.000 | 0 | 1 | 1 | 0 | python-2.7,module | 36,118,957 | 4 | false | 0 | 0 | You could create a .pth file with the path to your module and put it into your Python site-packages directory. | 2 | 0 | 0 | I know how to import a module I have created if the script I am working on is in the same directory. I would like to know how to set it up so I can import this module from anywhere. For example, I would like to open up Python in the command line and type "import my_module" and have it work regardless of which directory I am in. | Importing a python module I have created | 0 | 0 | 0 | 31 |
31,645,016 | 2015-07-27T04:30:00.000 | 0 | 0 | 0 | 0 | python,mysql,database-connection,mysql-python,remote-server | 31,717,545 | 2 | true | 0 | 0 | This issue is due to too many pending requests on the remote database.
In this situation MySQL closes the connection to the running script.
To overcome it, put
time.sleep(sec)  # sec is the number of seconds to pause the script
in your script. It will solve this issue without transferring the database to a local server or any other administrative task on MySQL. | 2 | 2 | 0 | I am trying to access a remote database from one Linux server to another, connected via LAN,
but it is not working. After some time it generates the error
_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")
This error is random; it can be raised at any time.
I create a new db object in each method
and close the connection as well, so why is this error raised?
Can anyone please help me sort out this problem? | python mysql database connection error | 1.2 | 1 | 0 | 1,039 |
31,645,016 | 2015-07-27T04:30:00.000 | -3 | 0 | 0 | 0 | python,mysql,database-connection,mysql-python,remote-server | 41,724,945 | 2 | false | 0 | 0 | My solution was to collect more queries into one commit statement when they were insert queries. | 2 | 2 | 0 | I am trying to access a remote database from one Linux server to another, connected via LAN,
but it is not working. After some time it generates the error
_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")
This error is random; it can be raised at any time.
I create a new db object in each method
and close the connection as well, so why is this error raised?
Can anyone please help me sort out this problem? | python mysql database connection error | -0.291313 | 1 | 0 | 1,039 |
31,645,343 | 2015-07-27T05:11:00.000 | -1 | 0 | 0 | 1 | python,database,cron,crontab,distributed | 31,645,469 | 1 | false | 1 | 0 | A simple way:
- start cron a little before the needed time (for example, two minutes)
- force time synchronization (using ntp or ntpdate) (optional paranoid mode)
- wait until the expected time, then run the job | 1 | 0 | 0 | I have a django management command run as a cron job and it is set on multiple hosts to run at the same time. What is the best way to ensure that the cron job runs on only one host at any time? One approach is to use db locks, as the cron job updates a MySQL db, but I am sure there are better (Django or Pythonic) approaches to achieve what I am looking for. | How can I ensure cron job runs only on one host at any time | -0.197375 | 0 | 0 | 416 |
31,648,780 | 2015-07-27T08:54:00.000 | 0 | 0 | 0 | 0 | python-2.7,amazon-web-services,amazon-s3,amazon-emr | 32,658,324 | 1 | false | 1 | 0 | Yes you can specify a folder whose sub folders contain all the input files. However in your code you need to ensure that your functions look for the sub-folders in the input, and not just take the main folder as input. | 1 | 0 | 0 | i'm currently trying to run a mapreduce job where the inputs are scattered in different folders underneath catch-all bucket in S3.
My original approach was to create a cluster for each of the input files and write separate outputs for each of them. However, that would require spinning up more than 200 clusters and I don't think that's the most efficient way.
I was wondering if I could, instead of specifying a file as input to EMR, specify a folder whose subfolders contain all of the input files.
Thanks! | Can AWS ElasticMapReduce take S3 folders as Input? | 0 | 0 | 0 | 71 |
31,649,314 | 2015-07-27T09:21:00.000 | 2 | 0 | 0 | 0 | python,django,wordpress | 31,649,614 | 2 | false | 1 | 0 | There are many ways to do this. You will have to provide more info about what you are trying to accomplish to get the right advice.
make a page with a redirect (this is an ugly solution from an SEO and user perspective)
handle this at the server level
load your Django data with an AJAX call | 1 | 0 | 0 | I need to do such a thing, but I don't even know if it is possible to accomplish and, if so, how to do it.
I wrote a Django application which I would like to 'attach' to my WordPress blog. However, I need a permalink (but no page in the WordPress pages section) which would point to the Django application on the same server. Is that possible? | How to create empty wordpress permalink and redirect it into django website? | 0.197375 | 0 | 0 | 138 |
31,660,214 | 2015-07-27T18:07:00.000 | 1 | 0 | 0 | 0 | python,gnuradio,gnuradio-companion | 31,667,553 | 1 | true | 0 | 0 | The "QT GUI Frequency Sink" block will display the frequency domain representation of a signal. You can save a static image of the spectrum by accessing the control panel using center-click and choosing "Save". | 1 | 1 | 1 | I have generated the spectrogram with GNU Radio and want to save the output graph but have no idea how to do it. | How to save a graph that is generated by GNU Radio? | 1.2 | 0 | 0 | 791 |
31,660,951 | 2015-07-27T18:48:00.000 | 2 | 0 | 1 | 0 | python,pip | 31,661,080 | 3 | true | 0 | 0 | You should have multiple executables of pip.
Use pip2 and pip3 interchangeably.
Anyway, you should consider using the virtualenv package. Initialize it like virtualenv -p /usr/bin/python2.7 env_name or virtualenv-3.4 -p /usr/bin/python3.4 env_name, then each time you use your code, type source env_name/bin/activate and "python" will be aliased to the virtualized version. | 2 | 1 | 0 | I have 2 Python versions
Python 3.4.3
Python 2.7.10
Env variable works with Python 3.4 (in my system), so when I pip install *package_name it will only install the package into Python 3.4.
I have a system variable for Python 2.7 -- %python27% -- also.
My question is: how can I pip install a package/module into Python 2.7 without changing the env. variable?
Note: %python27% pip install *package_name doesn't work.
Thank you. | How to pip install packages into different versions of Python | 1.2 | 0 | 0 | 1,917 |
31,660,951 | 2015-07-27T18:48:00.000 | 0 | 0 | 1 | 0 | python,pip | 32,007,372 | 3 | false | 0 | 0 | I had the same problem, but it was installing to Python 2.7 rather than Python 3.4. Using $ pip3 install *package_name solved the issue. | 2 | 1 | 0 | I have 2 Python versions
Python 3.4.3
Python 2.7.10
Env variable works with Python 3.4 (in my system), so when I pip install *package_name it will only install the package into Python 3.4.
I have a system variable for Python 2.7 -- %python27% -- also.
My question is: how can I pip install a package/module into Python 2.7 without changing the env. variable?
Note: %python27% pip install *package_name doesn't work.
Thank you. | How to pip install packages into different versions of Python | 0 | 0 | 0 | 1,917 |
31,661,138 | 2015-07-27T18:58:00.000 | 1 | 0 | 0 | 0 | python,wxpython | 31,680,789 | 1 | false | 0 | 1 | I don't think the regular wx.PopupMenu will work that way. However, if you look at the wxPython demo, you will see a neat widget called wx.PopupWindow that claims it can be used as a menu, and it appears to work the way you want. The wx.PopupTransientWindow might also work. | 1 | 1 | 0 | I am using wxPython to write an app. I have a menu that pops up. I would like to know how to keep it on the screen after the user clicks an item on the menu. I only want it to go away after they click off of it or if I tell it to in the programming. Does anyone know how to do this?
I am using RHEL 6 and wxPython 3.01.1 | Keep menu up after clicking in wxPython | 0 | 0 | 0 | 55 |
31,661,485 | 2015-07-27T19:19:00.000 | 0 | 0 | 0 | 0 | java,python,excel,apache-poi,xlsxwriter | 31,662,734 | 2 | false | 1 | 0 | 255 characters in a URL is an Excel 2007+ limitation. Try it in Excel.
I think the XLS format allowed longer URLs (so perhaps that is the difference).
Also XlsxWriter doesn't use the HYPERLINK() function internally (although it is available to the user via the standard interface). | 2 | 1 | 0 | I'm trying to embed a bunch of URLs into an Excel file using Python with XlsxWriter's write_url() function, but it gives me a warning that they exceed the 255-character limit. I think this is happening because it may be using the built-in HYPERLINK Excel function.
However, I found that Apache POI from Java doesn't seem to have that issue. Is it because they directly write it into the cell itself, or is there a different reason? Also, is there a workaround in Python that can solve this issue? | Why is Apache POI able to write a hyperlink more than 255 characters but not XLSXWriter? | 0 | 1 | 0 | 599 |
31,661,485 | 2015-07-27T19:19:00.000 | 1 | 0 | 0 | 0 | java,python,excel,apache-poi,xlsxwriter | 36,582,681 | 2 | false | 1 | 0 | Obviously the length limitation of a hyperlink address in .xlsx (using Excel 2013) is 2084 characters. Generating a file with a longer address using POI, repairing it with Excel and saving it will yield an address with a length of 2084 characters.
The Excel UI and .xls files seem to have a limit of 255 characters, as already mentioned by other commenters. | 2 | 1 | 0 | I'm trying to embed a bunch of URLs into an Excel file using Python with XLSXWriter's function write_url(), but it gives me the warning of it exceeding the 255 character limit. I think this is happening because it may be using the built-in HYPERLINK Excel function.
However, I found that Apache POI from Java doesn't seem to have that issue. Is it because they directly write it into the cell itself or is there a different reason? Also, is there a workaround in Python that can solve this issue? | Why is Apache POI able to write a hyperlink more than 255 characters but not XLSXWriter? | 0.099668 | 1 | 0 | 599 |
31,663,007 | 2015-07-27T20:47:00.000 | 0 | 0 | 0 | 0 | python,flask | 33,986,241 | 2 | false | 1 | 0 | Use app.run(host='0.0.0.0') if you want flask to accept any host name. | 2 | 5 | 0 | I need to configure a Flask application to handle requests with any host in the HTTP Host header.
If some FQDN is specified in SERVER_NAME, I get a 404 error if a request comes with any other domain.
How should SERVER_NAME be defined in the configuration?
How can the HTTP hostname be requested/routed/blueprint-ed? | multidomain configuration for flask application | 0 | 0 | 0 | 504 |
31,663,007 | 2015-07-27T20:47:00.000 | 0 | 0 | 0 | 0 | python,flask | 34,395,827 | 2 | false | 1 | 0 | To allow any domain name, just remove 'SERVER_NAME' from the application config. | 2 | 5 | 0 | I need to configure a Flask application to handle requests with any host in the HTTP Host header.
If some FQDN is specified in SERVER_NAME, I get a 404 error if a request comes with any other domain.
How should SERVER_NAME be defined in the configuration?
How can the HTTP hostname be requested/routed/blueprint-ed? | multidomain configuration for flask application | 0 | 0 | 0 | 504 |
31,666,601 | 2015-07-28T02:54:00.000 | 3 | 0 | 0 | 0 | python-2.7,flask,basic-authentication,www-authenticate | 31,666,814 | 2 | true | 1 | 0 | This is a common problem when working with REST APIs and browser clients. Unfortunately there is no clean way to prevent the browser from displaying the popup. But there are tricks that you can do:
You can return a non-401 status code. For example, return 403. Technically it is wrong, but if you have control of the client-side API, you can make it work. The browser will only display the login dialog when it gets a 401.
Another maybe a bit cleaner trick is to leave the 401 in the response, but not include the WWW-Authenticate header in your response. This will also stop the login dialog from appearing.
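For instance, that second trick might look like this in Flask (a minimal sketch; the route and the credential check are illustrative):
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/resource')
def resource():
    if not credentials_ok():  # hypothetical check against your user store
        # 401 with no WWW-Authenticate header: the browser skips its login popup
        return jsonify(error='unauthorized'), 401
    return jsonify(data='ok')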
And yet another (that I haven't tried myself, but have seen mentioned elsewhere) is to leave the 401 and the WWW-Authenticate, but change the auth method from Basic to something else that is unknown to the browser (i.e. not Basic and not Digest). For example, make it CustomBasic. | 1 | 4 | 0 | I have developed an API in Flask which uses HTTP Basic Auth to authenticate users. The API works absolutely fine in Fiddler and returns 401 when we pass wrong credentials, but when I use the same on a login page I get an extra pop-up from the browser. I really don't want to see this extra pop-up asking for credentials (the default behaviour of the browser when returning 401 with WWW-Authenticate: Basic realm="Authentication Required").
It works fine when deployed locally but not when hosted on a remote server.
How can we implement 401 in a way that will not let the browser display the popup asking for credentials? | How to return 401 authentication from flask API? | 1.2 | 0 | 1 | 13,044 |
31,669,940 | 2015-07-28T07:27:00.000 | 1 | 0 | 0 | 0 | python,matplotlib,permissions | 31,671,206 | 1 | true | 0 | 0 | matplotlib.pyplot.savefig() does not have the capability to change file permissions. It has to be done afterwards with os.chmod(path, mode), for example os.chmod(fname, 0o400). | 1 | 1 | 1 | How can I specify the *nix read/write permissions for an output file (e.g. PDF) produced by the matplotlib savefig() command from within a Python script? i.e., without having to use chmod after the file has been produced. | How to set output file permissions using matplotlib savefig()? | 1.2 | 0 | 0 | 1,542 |
31,671,522 | 2015-07-28T08:49:00.000 | 0 | 0 | 1 | 0 | python,scripting,refactoring,pycharm | 31,672,707 | 1 | true | 0 | 0 | PyCharm is written in Java. There is no way to invoke PyCharm's refactoring system from a Python script.
Instead, you can write a plugin for PyCharm in Java, and implement that logic through PyCharm's plugin API. | 1 | 0 | 0 | I am trying to create a script that will do the following.
Iterate over each file in the project.
    Iterate over each method in the file.
        if some_condition(method_name):
            use PyCharm's refactoring system to refactor the given method
Couldn't find any reference to the above.
I'm not requesting a complete script (though I wouldn't mind one); any reference and starting guidance would be welcome.
Using
python 2.7
PyCharm Community Edition 4.5.3 | Python script that uses pycharm's refactoring methods | 1.2 | 0 | 0 | 65 |
31,675,214 | 2015-07-28T11:38:00.000 | 1 | 0 | 0 | 1 | python,python-2.7,numpy,mpi4py | 31,676,562 | 1 | false | 0 | 0 | Did you try pip install --user mpi4py?
However, I think the best solution would be to just talk to the people in charge of the cluster and see if they will install it. It seems pretty useless to have a cluster without mpi4py installed. | 1 | 2 | 1 | I have some parallel code I have written using numpy and mpi4py modules. Till now I was running it on my laptop but now I want to attack bigger problem sizes by using the computing clusters at my university. The trouble is that they don't have mpi4py installed. Is there anyway to use the module by copying the necessary files to my home directory in the cluster?
I tried some ways to install it without root access, but that didn't work out. So I am looking for a way to use the module by just copying it to the remote machine.
I access the cluster using ssh from the terminal. | Using mpi4py (or any python module) without installing | 0.197375 | 0 | 0 | 283 |
31,675,839 | 2015-07-28T12:05:00.000 | 0 | 0 | 0 | 0 | python,odoo | 31,680,404 | 2 | false | 1 | 0 | It's pretty basic and simple: any Python class can be called from its namespace, so call your class from the namespace and instantiate it.
Even the Model class, or any class inherited from Model, can be called and instantiated like this.
self.pool is just the ORM cache used to access the framework's persistence layer.
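For example (a hedged sketch against the OpenERP 7 / Odoo 8-era internals; the database name, cursor, uid, and model are illustrative):
from openerp.modules.registry import RegistryManager

registry = RegistryManager.get(db_name)   # the per-database registry, i.e. the "pool"
partners = registry['res.partner']        # same object a Model reaches via self.pool
ids = partners.search(cr, uid, [])        # old-style API call: cursor, user id, domain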
Bests | 1 | 0 | 0 | I am aware that you can get a reference to an existing model from within another model by using self.pool.get('my_model').
My question is: how can I get a reference to a model from a Python class that does NOT extend 'Model'? | Access ORM models from different classes in Odoo/OpenERP | 0 | 0 | 0 | 714 |
31,681,276 | 2015-07-28T15:47:00.000 | 1 | 0 | 0 | 0 | python,django,git | 31,681,446 | 1 | false | 1 | 0 | When doing Django development in Git you'll typically want to exclude *.db files, *.pyc files, your virtualenv directory, and whatever files your IDE and OS may create (e.g. .DS_Store, *.swp, *.swo). | 1 | 0 | 0 | I'm starting with Python and Django development and I'm creating a project that I want to share with git. When I started the app I saw folders like "local", "lib", "bin", and "include". Should I ignore these folders or can I commit them?
Is there a .gitignore "master" for Django files? I found some files on Google but none of them mentioned these folders. | What is the best way to use git with Django? | 0.379949 | 0 | 0 | 71 |
31,684,375 | 2015-07-28T18:29:00.000 | 0 | 0 | 1 | 0 | python,dependencies,python-import,requirements.txt | 72,116,250 | 21 | false | 0 | 0 | To help solve this problem, always generate requirements.txt from only local packages. By local packages I mean packages that are only in your project's environment. To do this, run:
pip freeze --local > requirements.txt
Not pip freeze > requirements.txt.
Note that it's a double dash before local.
However, installing pipreqs helps too:
pip install pipreqs
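Then point pipreqs at the project directory (the path is illustrative); it scans the project's imports and writes a requirements.txt for them:
pipreqs /path/to/project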
The perfect solution, though, is to have a Pipfile. The Pipfile updates on its own whenever you install a new local package. It also has a Pipfile.lock, similar to package-lock.json in JavaScript.
To do this, always install your packages with pipenv, not pip.
So we use pipenv. | 1 | 778 | 0 | Sometimes I download Python source code from GitHub and don't know how to install all the dependencies. If there is no requirements.txt file I have to create it by hand.
The question is:
Given the Python source code directory, is it possible to create requirements.txt automatically from the import section? | Automatically create requirements.txt | 0 | 0 | 0 | 795,098 |
31,684,878 | 2015-07-28T18:56:00.000 | 1 | 0 | 1 | 0 | python,compression,ascii | 59,207,681 | 2 | false | 0 | 0 | Use data as unsigned characters? With numpy or struct.pack, etc. | 1 | 0 | 0 | I have an ordered finite sequence of integers. Each integer is less than 256 and at least zero. They can be greater than 127. I want to write them to a text file using an ASCII representation so I can just have them all on one line and save memory space. (If I write them as integers, I have to have some kind of separator between each element, such as a \t or \n).
My idea was to just take each number, convert each one to its corresponding ASCII character, and then when I need them later I can just go character by character and translate the ASCII character back into a number.
However, I notice that characters like Æ are encoded in Python as two bytes, \xc3\x86, while other characters are just one byte, like @ and D. So my method of just going character by character to process it is not going to work out.
How can I do this using ASCII or some other method to save space? | Python - Storing integers all less than 256 with ASCII | 0.099668 | 0 | 0 | 800 |
31,685,048 | 2015-07-28T19:05:00.000 | 2 | 0 | 1 | 0 | python-2.7,web2py,anaconda,gensim | 31,687,769 | 1 | true | 1 | 0 | The Windows binary includes it's own Python interpreter and will therefore not see any packages you have in your local Python installation.
If you already have Python installed, you should instead run web2py from source. | 1 | 0 | 0 | I am new to Web2Py and Python stack. I need to use a module in my Web2Py application which uses "gensim" and "nltk" libraries. I tried installing these into my Python 2.7 on a Windows 7 environment but came across several errors due to some issues with "numpy" and "scipy" installations on Windows 7. Then I ended up resolving those errors by uninstalling Python 2.7 and instead installing Anaconda Python which successfully installed the required "gensim" and "nltk" libraries.
So, at this stage I am able to see all these "gensim" and "nltk" libraries resolving properly without any error in "Spyder" and "PyCharm". However, when I run my application in Web2Py, it still complains about "gensim" and gives this error: <type 'exceptions.ImportError'> No module named gensim
My guess is if I can configure Web2Py to use the Anaconda Python then this issue would be resolved.
I need to know if it's possible to configure Web2Py to use Anaconda Python and if it is then how do I do that?
Otherwise, if someone knows of some other way resolve that "gensim" error in Web2Py kindly share your thoughts.
All your help would be highly appreciated. | Configure Web2Py to use Anaconda Python | 1.2 | 0 | 0 | 712 |
31,685,165 | 2015-07-28T19:12:00.000 | 1 | 1 | 1 | 0 | python,excel,datanitro | 31,688,486 | 1 | true | 0 | 0 | The best way to give non-technical users access to DataNitro is to copy the VBA interface: hook the script up to an Excel button and have users press that button to run it. (There's no difference between running a Python script with DataNitro and running VBA code from the user's point of view.)
Each person using the script would need a DataNitro license.
There's no way to make DataNitro work with py2exe, unfortunately.
Source: I'm one of the DataNitro developers. | 1 | 1 | 0 | I'm looking to be able to create an executable with py2exe or something similar that takes information from an excel sheet and returns a word file.
Since my coworkers are technically challenged, I need to create an executable that will take the work out of it for them.
Two questions here:
I have to be able to import something into the python script that represents DataNitro. What module represents DataNitro?
Is this legal? I won't be using a DataNitro license on every machine this exe will run on, besides my own, so if it's even possible, is this a bit shady?
Thank you.
P.S. If I'm not able to do this I will probably have to use xlrd, xlwt, etc. | Can I integrate Datanitro into an executable? | 1.2 | 0 | 0 | 108 |
31,685,279 | 2015-07-28T19:19:00.000 | 1 | 0 | 0 | 0 | python,postgresql,gis,pgrouting | 31,685,995 | 2 | false | 0 | 0 | Here's an idea. Get the lat/longs of the house and all the stores.
Calculate the geohash of all points (house and stores) with maximum precision (12) and check if the geohash of any store matches that of the house. If it doesn't, calculate the geohash with lower precision (11), then rinse and repeat till you get a store (maybe multiple; I'll get into that later) that matches the geohash of the house.
This is a fuzzy-distance calculation. It will work great and with minimal processing time. But it will fail if you get two or more stores with the same geohash at some precision. So this is what I recommend you do:
Loop over the geohash with decreasing precision. Break when the geohash of one or more stores matches the geohash of the house.
IF (more than one store matches): go for a plain distance calculation, find the closest store, and return it.
ELSE: return the one store that matches the geohash.
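A sketch of that loop (the encode function is assumed to come from the python-geohash package — any geohash encoder works, since lowering the precision is just prefix truncation):
import geohash

house = geohash.encode(house_lat, house_lon, precision=12)
store_hashes = {name: geohash.encode(lat, lon, precision=12)
                for name, (lat, lon) in store_coords.items()}

for precision in range(12, 0, -1):
    matches = [n for n, g in store_hashes.items() if g[:precision] == house[:precision]]
    if matches:
        break
# one match: done; several matches: fall back to a plain distance calculation on them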
Advantage of this method: it changes your strict requirements into a fuzzy probability problem. If you get a single store, great. If you don't, at least you reduce the number of candidates for the distance calculation.
Disadvantage of this method: what if all stores land in the same geohash? We introduce the same complexity here.
You'll be banking on the chances that not all (or most) stores come under the same geohash. Realistically speaking, the disadvantage is only a disadvantage in corner cases. So overall you should see a performance improvement. | 1 | 0 | 0 | So I have a set of points in a city (say houses or residences) and I want to find the shortest distance between these points and a set of candidate points for a store. I am looking for the best store locations that will minimize the distance to all of the houses in the set. So I will iteratively move the candidate store points and then recompute the distances between each store and house (again using Dijkstra's algorithm). Because of the sheer volume of calculations, I cannot keep hitting the database for each iteration of the optimization algorithm.
I have used pgrouting many times and this would work, however it would be too slow because of the large number of points and the fact that it has to search the disk each time.
Is there a tool where I can load some small Open Street Maps city map in memory and then calculate the shortest routes in memory? I would need something fast, so preferably in C or Python? But any language is okay as long as it works. | Way to calculate road distances in a city very quickly | 0.099668 | 0 | 0 | 1,546 |
31,686,237 | 2015-07-28T20:16:00.000 | 1 | 0 | 1 | 0 | python,multiprocessing | 31,686,446 | 2 | true | 0 | 0 | Don't use locking and don't write from multiple processes; Let the child processes return the output to the parent (e.g. via standard output), and have it wait for the processes to join to read it. I'm not 100% on the multiprocessing API but you could just have the parent process sleep and wait for a SIGCHLD and only then read data from an exited child's standard output, and write it to your output file.
This way only one process is writing to the file and you don't need any busy looping or whatever. It will be much simpler and much more efficient. | 2 | 0 | 0 | While using the multiprocessing module in Python, is there a way to prevent the process of switching off to another process for a certain time?
I have ~50 different child processes spawned to retrieve data from a database (each process = one table in the DB), and after querying and filtering the data, I try to write the output to an Excel file.
Since all the processes follow similar steps, they all reach the writing step at similar times, and of course, since I am writing to a single file, I have a lock that prevents multiple processes from writing to the file.
The problem, though, is that the writing step seems to take very long compared to when I wrote the same amount of data in a single process (slower by at least 10x).
I am guessing one of the reasons could be that while writing, the CPU is constantly switching off to other processes, which are all stuck at the mutex lock, only to come back to the one process that is active. I am guessing the context switching is a significant waste of time, since there are a lot of processes to switch back and forth between.
I was wondering if there was a way to lock a process such that, for a certain part of the code, no context switching between processes happens.
Or any other suggestions to speed up this process? | Python multiprocessing prevent switching off to other processes | 1.2 | 0 | 0 | 924 |
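A minimal sketch of the pattern from the accepted answer above: each worker returns its filtered data to the parent, and only the parent writes, so no lock is needed. query_table, table_names, and write_excel are hypothetical placeholders:

import multiprocessing

def worker(table_name):
    # query and filter one table, returning rows instead of writing them
    return query_table(table_name)

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=8)
    results = pool.map(worker, table_names)  # blocks until all children finish
    pool.close()
    pool.join()
    write_excel(results)  # a single writer, so no lock contention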
31,686,237 | 2015-07-28T20:16:00.000 | 0 | 0 | 1 | 0 | python,multiprocessing | 31,686,362 | 2 | false | 0 | 0 | You can raise the priority of your process (go to Task Manager, right-click on the process and raise its process priority). However, the OS will context switch no matter what; your process has no better claim than other processes to the OS. | 2 | 0 | 0 | While using the multiprocessing module in Python, is there a way to prevent the process of switching off to another process for a certain time?
I have ~50 different child processes spawned to retrieve data from a database (each process = one table in the DB), and after querying and filtering the data, I try to write the output to an Excel file.
Since all the processes follow similar steps, they all reach the writing step at similar times, and of course, since I am writing to a single file, I have a lock that prevents multiple processes from writing to the file.
The problem, though, is that the writing step seems to take very long compared to when I wrote the same amount of data in a single process (slower by at least 10x).
I am guessing one of the reasons could be that while writing, the CPU is constantly switching off to other processes, which are all stuck at the mutex lock, only to come back to the one process that is active. I am guessing the context switching is a significant waste of time, since there are a lot of processes to switch back and forth between.
I was wondering if there was a way to lock a process such that, for a certain part of the code, no context switching between processes happens.
Or any other suggestions to speed up this process? | Python multiprocessing prevent switching off to other processes | 0 | 0 | 0 | 924 |
31,687,263 | 2015-07-28T21:17:00.000 | -1 | 0 | 0 | 0 | python,machine-learning,gensim,dimensionality-reduction | 31,699,669 | 2 | false | 0 | 0 | You can only perform dimensionality reduction in an unsupervised manner, OR supervised but with different labels than your target labels.
For example, you could train a logistic regression classifier with a dataset containing 100 topics. The output of this classifier (100 values) on your training data could be your dimensionality-reduced feature set. | 1 | 0 | 1 | I've got BOW vectors and I'm wondering if there's a supervised dimensionality reduction algorithm in sklearn or gensim capable of taking high-dimensional, supervised data and projecting it into a lower-dimensional space which preserves the variance between these classes.
Actually I'm trying to find a proper metric for the classification/regression, and I believe using dimensionality reduction can help me. I know there are unsupervised methods, but I want to keep the label information along the way. | supervised dimensionality reduction/topic model using sklearn or gensim | -0.099668 | 0 | 0 | 468 |
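A rough sklearn sketch of the idea in the answer above: use the per-class probability outputs of a classifier trained on auxiliary topic labels as a lower-dimensional feature set. X and topic_labels are hypothetical placeholders:

from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
clf.fit(X, topic_labels)          # X: high-dimensional BOW vectors
X_reduced = clf.predict_proba(X)  # one column per topic, e.g. 100 dimensions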
31,689,040 | 2015-07-28T23:53:00.000 | 1 | 0 | 0 | 0 | python,logging,cherrypy | 31,707,202 | 3 | true | 0 | 0 | I can disable access messages by setting up my own logging configuration:
import logging
logging.basicConfig(filename='error.log', filemode='w', level=logging.ERROR)
and then setting:
cherrypy.log.access_log.propagate = False | 1 | 1 | 0 | I'm trying to set up a logger to only catch my ERROR level messages, but my logger always seems to write INFO:cherrypy.access messages to my log file, which I dont want. I tried setting the log.error_file global, and using the standard python logging module logging.basicConfig(filename='error.log', filemode='w', level=logging.ERROR), but even though I specify the threshold to be ERROR level messages, I still get those INFO level messages written to my log file. Is there any way to prevent this? | Prevent cherrypy from logging access | 1.2 | 0 | 0 | 998 |
31,692,090 | 2015-07-29T05:47:00.000 | 1 | 1 | 0 | 0 | python,django,testing | 31,694,536 | 1 | false | 1 | 0 | Whether it's compulsory depends on the organization you work for. If others say it is, then it is. Just check how tests are normally written in the company and follow the existing examples.
(There are a lot of ways a Django-based website can be tested; different companies do it differently.)
Why write tests?
Regression testing. You checked that your code is working; does it still work now? You or someone else may change something and break your code at some point. Running the test suite makes sure that what was written yesterday still works today; that the bug fixed last week wasn't accidentally re-introduced; that things don't regress.
Elegant code structuring. Writing tests for your code forces you to write code in certain way. For example, if you must test a long 140-line function definition, you’ll realize it’s much easier to split it into smaller units and test them separately. Often when a program is easy to test it’s an indicator that it was written well.
Understanding. Writing tests helps you understand what are the requirements for your code. Properly written tests will also help new developers understand what the code does and why. (Sometimes documentation doesn’t cover everything.)
Automated tests can exercise your code under many different conditions quickly; sometimes it's not humanly possible to test everything by hand each time a new feature is added.
If there’s the culture of writing tests in the organization, it’s important that everyone follows it without exceptions. Otherwise people would start slacking and skipping tests, which would cause regressions and errors later on. | 1 | 1 | 0 | I am working for a company who wants me to test and cover every piece of code I have.
My code works properly from the browser. There are no errors and no faults.
Given that my code works properly in the browser and my system is responding properly, do I need to do testing? Is testing compulsory? | is testing compulsory if it works fine in real time in the browser | 0.197375 | 0 | 1 | 33 |
31,693,319 | 2015-07-29T06:57:00.000 | 0 | 0 | 0 | 0 | python,parsing,runtime | 32,441,545 | 2 | true | 0 | 0 | As @dmargol1 suggested in the comments, I was able to avoid deepcopy() and copy() by instead building the graph from scratch, rather than copying and modifying it, which was actually a lot faster.
If that is possible: do it!
If copying is necessary, there are two ways. If you don't need to alter the values, copy() is the way to go, since it is a lot faster than deepcopy() (see the comment of @george-solymosi). If alteration of the values is needed, deepcopy() is the only way (see the comment of @gall). | 1 | 0 | 1 | I am looking for a solution to avoid deepcopy() in my task using Python.
I am implementing a statistical dependency parser using the Chu-Liu/Edmonds algorithm. I have a graph represented as a dictionary, with every head node stored as a key whose value is a list containing one or more objects of the class arc.
In the CLE algorithm, I need to modify the graph (contract a cycle). That means that I need to delete arc objects and heads and add others, while I later need the original graph to expand those contracted cycles. Right now, I achieve this by deepcopying the original graph and passing it to the contract function.
Now I ran my program with cProfile and found out that everything that has to do with deepcopy is by far the part of the algorithm that takes the most time.
So my question is: Is there any way to avoid/reduce this in my situation? | Avoid deepcopy() in Python when changing dictionary | 1.2 | 0 | 0 | 1,493 |
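A small illustration of the trade-off discussed above; the graph values here are hypothetical:

import copy

graph = {'head1': [('dep1', 0.5)], 'head2': [('dep2', 0.7)]}

shallow = copy.copy(graph)   # fast: new dict, but the arc lists are shared
deep = copy.deepcopy(graph)  # slow: every nested object is duplicated

# often cheaper: rebuild only the part you need from scratch
contracted = {h: list(arcs) for h, arcs in graph.items() if h != 'head2'}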
31,696,857 | 2015-07-29T09:46:00.000 | -2 | 0 | 0 | 0 | java,python,node.js,source-code-protection | 31,705,928 | 2 | false | 1 | 0 | Do you know how easy it is to decompile java class files?
Seriously, you pop the jar into IntelliJ IDEA (or almost any other IDE) and it spits out decompiled code that's readable enough to reverse engineer. Compiled code offers no security advantages versus interpreted code.
Rather than trying to "encrypt" or "hide" your NodeJS code, why not secure the server better? You will never outpace people reverse engineering your code, you are much better off defending the box that the chocolates are in than poisoning the chocolates. | 1 | 1 | 0 | For an usual NodeJS instance, we can start it by node server.js. The problem with this is that, in a production server, when a hacker compromises my machine they will be able to view and copy all of my server-side source code. This is a big risk, since the source code contains intellectual property. Is there a way to prevent it from happening?
For example, in Java, code is usually built into a jar package or .class files, and we only deploy the built file. When a hacker compromises the machine, they can only see the jar or .class files, which are only bytecode and not directly readable.
I have a similar concern about my Python Flask server. | Run NodeJS server without exposing its source code | -0.197375 | 0 | 0 | 3,620 |
31,697,451 | 2015-07-29T10:10:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,odoo,openerp-7,rml | 31,700,814 | 1 | false | 0 | 0 | Try to put [[ int(your_integer) or '0' ]].
Regards. | 1 | 0 | 0 | I want to print an integer number in the RML report, like 0, 1, 2, but the report displays 0.00, 1.00, and so on.
I tried converting the value to an integer using type casting, but the same output is returned in the RML report.
Please suggest how to print an integer number, or how to remove the fraction part, in the RML report. | Print integer number in RML report | 0.379949 | 0 | 0 | 261 |
31,699,447 | 2015-07-29T11:40:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,button,tkinter | 31,700,809 | 3 | false | 0 | 1 | button1.grid(row = 0, column = 0, padx = 0, pady = 0)
But this cannot be used together with pack(), you need to stick to either one.
And this only orders objects relatively, so if you have only one object and you set the row and column to 40 and 50, respectively, the object will still be on the top left corner. | 1 | 2 | 0 | I am new to Python and Tkinter and I am needing to move a button.
I have been using button1.pack() to place the button.
I am not able to move the button from its original position at the bottom of the screen. | How to move a Tkinter button? | 0.066568 | 0 | 0 | 12,820 |
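A minimal runnable sketch of the grid-based placement suggested above (Python 2 module name, per the question's tags):

import Tkinter as tk

root = tk.Tk()
button1 = tk.Button(root, text='Click me')
button1.grid(row=2, column=1, padx=10, pady=10)  # position via grid, not pack
root.mainloop()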
31,699,938 | 2015-07-29T12:05:00.000 | 0 | 0 | 1 | 0 | wxpython,gauge | 31,782,512 | 1 | true | 0 | 1 | Have you considered using a vertical slider as a work around? | 1 | 0 | 0 | I add a vertical gauge which counts from 0 to 100. It works perfectly however it moves from bottom to top. Is there a way to change rotation of it? There is no any setting for that in gauge class. | Change rotation of vertical gauge in wxpython | 1.2 | 0 | 0 | 40 |
31,706,295 | 2015-07-29T16:38:00.000 | 42 | 0 | 1 | 0 | ipython,spyder | 36,760,914 | 1 | true | 0 | 0 | IPython console to editor: Command + Shift + E
Editor to IPython console: Command + Shift + I
(On Windows you can replace Command with Ctrl) | 1 | 29 | 0 | I'm using Spyder via Anaconda on a Mac, and often switch back and forth between the editor and console. I was wondering if there's a keyboard shortcut to switch quickly between these two panes. It's just not quite convenient to do it with the trackpad or mouse. | Is there a keyboard shortcut to switch from editor to console in Spyder? | 1.2 | 0 | 0 | 8,901 |
31,711,555 | 2015-07-29T21:34:00.000 | 0 | 0 | 0 | 0 | python,r,machine-learning,random-forest | 31,742,947 | 2 | false | 0 | 0 | You can do a grid search over the 'regularization' parameters to best match your target behavior.
Parameters of interest:
max depth
number of features | 1 | 1 | 1 | Using the randomForest package in R, I was able to train a random forest that minimized overall error rate. However, what I want to do is train two random forests, one that first minimizes false positive rate (~ 0) and then overall error rate, and one that first maximizes sensitivity (~1), and then overall error. Another construction of the problem would be: given a false positive rate and sensitivity rate, train two different random forests that satisfy one of the rates respectively, and then minimize overall error rate. Does anyone know if theres an r package or python package, or any other software out there that does this and or how to do this? Thanks for the help. | random forest with specified false positive and sensitivity | 0 | 0 | 0 | 1,366 |
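A sketch of the grid search suggested in the answer above, using the modern sklearn API; X and y are hypothetical training data, and scoring='recall' is one possible stand-in for sensitivity:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {'max_depth': [3, 5, 10, None],
              'max_features': ['sqrt', 'log2', 0.5]}
search = GridSearchCV(RandomForestClassifier(n_estimators=200),
                      param_grid, scoring='recall', cv=5)
search.fit(X, y)
print(search.best_params_)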
31,716,514 | 2015-07-30T06:25:00.000 | 0 | 0 | 1 | 0 | tkinter,exe,python-3.4 | 31,782,009 | 2 | true | 0 | 1 | Turns out I was being a derp after all and had just downloaded the wrong version of the program I was trying to use, which was cx_Freeze. | 1 | 0 | 0 | I've probably been completely oblivious to the answer, but I want to turn a Tkinter program into an exe file, and all the programs I've found that do so either don't work for Python 3.4 or I can't install them properly and I don't know why. Can I have some help please? | Tkinter program into exe | 1.2 | 0 | 0 | 171 |
31,716,833 | 2015-07-30T06:42:00.000 | 1 | 1 | 0 | 0 | python,google-app-engine,web-scraping | 31,717,519 | 1 | false | 1 | 0 | Doesn't the website have an RSS feed or API or something?
Anyway, you could store the list of scraped news titles (they might not be unique, though), IDs, or URLs as entity IDs in the datastore right after you send them to your email. Just before sending the email, you would first check whether the news IDs already exist in the datastore, simply not including the ones that do.
Or, depending on what structure the articles are published in and what data is available (do they have an incremental post ID? do they have a date for when an article was posted?), you may simply need to remember the highest value from your previous scrape and only email yourself the articles where that value is higher than the one previously saved. | 1 | 1 | 0 | I am hosting a Python script on Google App Engine which uses bs4 and mechanize to scrape the news section of a website; it runs every 2 hours and sends me an email with all the news.
The problem is, I want only the latest news to be sent as mail; as of now it sends me all the news present every time.
I am storing all the news in a list; is there a way to send only the latest news, which has not been mailed to me, instead of the complete list every time? | Python Script on Google App Engine, which scrapes only updates from a website | 0.197375 | 0 | 0 | 65 |
31,719,451 | 2015-07-30T09:02:00.000 | 11 | 0 | 1 | 1 | python,ubuntu-14.04,hdf5,pytables | 31,719,735 | 4 | true | 0 | 0 | Try to install libhdf5-7 and python-tables via apt | 1 | 25 | 0 | I am trying to install the tables package on Ubuntu 14.04, but it seems like it is complaining.
I am trying to install it using PyCharm and its package installer; however, it seems to be complaining about the HDF5 package.
However, it seems that I cannot find any hdf5 package to install before tables.
Could anyone explain the procedure to follow? | install HDF5 and pytables in ubuntu | 1.2 | 0 | 0 | 76,957 |
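On Ubuntu 14.04 the accepted answer above amounts to a single command (package names as given in the answer):
sudo apt-get install libhdf5-7 python-tables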
31,723,140 | 2015-07-30T11:56:00.000 | 0 | 0 | 1 | 0 | python,pycharm,remote-server | 31,731,015 | 5 | false | 0 | 0 | Pycharm needs access to the project's directory.
You can open a remote project if the storage partition holding that project directory is shared and mounted/mapped on your local machine with the right permissions: run PyCharm on your machine and open the locally visible project directory.
Note: you should not open the project in multiple such PyCharm sessions simultaneously, as they will collide/conflict with each other.
Alternatively if you use a version control system (VCS) that supports remote repository access you can create a local copy of the project, work in that copy and push your changes to the remote project as needed (depending on your VCS specifics). | 1 | 29 | 0 | I'm working on a project that is located in a remote server. Can I open it in PyCharm from my local machine ? I couldn't find the way. | Opening remote project in PyCharm | 0 | 0 | 0 | 42,237 |
31,724,287 | 2015-07-30T12:50:00.000 | 1 | 0 | 1 | 0 | regex,python-3.x | 31,724,354 | 1 | false | 0 | 0 | You cannot use duplicate group names in a Python regex, because that would cause ambiguity: Python uses them as dictionary keys.
(?P<name>...)
Similar to regular parentheses, but the substring matched by the group is accessible via the symbolic group name name. Group names must be valid Python identifiers, and each group name must be defined only once within a regular expression. A symbolic group is also a numbered group, just as if the group were not named. | 1 | 0 | 0 | I have this regular expression -
(?P<Title>.+)(?P<ReleaseYear>([0-9]+))|(?P<Title>.+)(?P<Prginfo>-[0-9])|(?P<Title>.+)(?P<Prginfo>\s+\d+\s+сезон\s*)|(?P<Title>.+)(?P<Prginfo>\s+сезон\s*\d+)|(?P<Title>.+)
This works perfectly fine in .NET code. But when I try to use it in python I am getting error - "sre_constants.error: redefinition of group name 'Title' as group 3; was group 1" | regular expression for python : removing duplicate group name | 0.197375 | 0 | 0 | 551 |
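A sketch of the usual workaround for the answer above: give each alternative a unique group name and coalesce afterwards. The numbered Title groups are hypothetical renames of the original pattern's duplicates, and some_string is a placeholder:

import re

pattern = re.compile(r'(?P<Title1>.+?)(?P<ReleaseYear>[0-9]+)'
                     r'|(?P<Title2>.+?)(?P<Prginfo>-[0-9])'
                     r'|(?P<Title3>.+)')

m = pattern.match(some_string)
title = m.group('Title1') or m.group('Title2') or m.group('Title3')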
31,726,919 | 2015-07-30T14:43:00.000 | 1 | 0 | 0 | 0 | widget,wxpython,focus | 31,727,286 | 1 | true | 0 | 1 | No, it is not possible. wxPython does not support setting focus to multiple widgets at once. | 1 | 1 | 0 | I am trying to make four buttons in a panel receive focus at once. Is this possible? | Can multiple widgets receive focus at once in wxPython? | 1.2 | 0 | 0 | 32 |
31,731,980 | 2015-07-30T19:01:00.000 | 0 | 1 | 0 | 1 | python,linux,signals,shutdown | 31,732,143 | 1 | false | 0 | 0 | When Linux is shutting down (and this is slightly dependent on what kind of init scripts you are using), it first sends SIGTERM to all processes to shut them down, and then I believe it will try SIGKILL to force them to close if they're not responding to SIGTERM.
Please note, however, that your script may not receive the SIGTERM - init may send this signal to the shell it's running in instead and it could kill python without actually passing the signal on to your script.
Hope this helps! | 1 | 1 | 0 | I'm developing a python script that runs as a daemon in a linux environment. If and when I need to issue a shutdown/restart operation to the device, I want to do some cleanup and log data to a file to persist it through the shutdown.
I've looked around regarding Linux shutdown and I can't find anything detailing which, if any, signal, is sent to applications at the time of shutdown/restart. I assumed sigterm but my tests (which are not very good tests) seem to disagree with this. | Handling a linux system shutdown operation "gracefully" | 0 | 0 | 0 | 214 |
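A minimal sketch of catching SIGTERM in the daemon so cleanup and logging can run before exit; cleanup_and_log is a hypothetical placeholder:

import signal
import sys

def handle_sigterm(signum, frame):
    cleanup_and_log()  # hypothetical: persist state to a file
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)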
31,732,233 | 2015-07-30T19:15:00.000 | 6 | 0 | 1 | 0 | python,insert,append | 31,732,274 | 2 | false | 0 | 0 | The difference between append and insert here is the same as in normal usage, and in most text editors. Append adds to the end of the list, while insert adds in front of a specified index. The reason they are different methods is both because they do different things, and because append can be expected to be a quick operation, while insert might take a while depending on the size of the list and where you're inserting, because everything after the insertion point has to be reindexed.
I'm not privy to the actual reasons insert and append were made different methods, but I would make an educated guess that it is to help remind the developer of the inherent performance difference. Rather than one insert method, with an optional parameter, which would normally run in linear time except when the parameter was not specified, in which case it would run in constant time (very odd), a second method which would always be constant time was added. This type of design decision can be seen in other places in Python, such as when methods like list.sort return None, instead of a new list, as a reminder that they are in-place operations, and not creating (and returning) a new list. | 1 | 8 | 0 | I'm surely not the Python guru I'd like to be and I mostly learn studying/experimenting in my spare time, it is very likely I'm going to make a trivial question for experienced users... yet, I really want to understand and this is a place that helps me a lot.
Now, after the due premise, Python documentation says:
4.6.3. Mutable Sequence Types
s.append(x) appends x to the end of the sequence (same as
s[len(s):len(s)] = [x])
[...]
s.insert(i, x) inserts x into s at the index given by i (same as
s[i:i] = [x])
and, moreover:
5.1. More on Lists
list.append(x) Add an item to the end of the list. Equivalent to
a[len(a):] = [x].
[...]
list.insert(i, x) Insert an item at a given position. The first
argument is the index of the element before which to insert, so
a.insert(0, x) inserts at the front of the list, and a.insert(len(a),
x) is equivalent to a.append(x).
So now I'm wondering why there are two methods to do, basically, the same thing. Wouldn't it have been possible (and simpler) to have just one append/insert(x, i=len(this)), where i would have been an optional parameter and, when not present, would have meant add to the end of the list? | Is there a reason why append and insert are both there? | 1 | 0 | 0 | 1,997 |
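A quick way to see the performance difference discussed above, using the standard timeit module:

import timeit

# appending at the end is amortized O(1)
print(timeit.timeit('lst.append(0)', setup='lst = []', number=100000))
# inserting at the front is O(n): every existing element shifts right
print(timeit.timeit('lst.insert(0, 0)', setup='lst = []', number=100000))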
31,733,583 | 2015-07-30T20:38:00.000 | 0 | 0 | 0 | 0 | python,postgresql,sqlalchemy | 31,781,831 | 1 | false | 1 | 0 | Actually it was a problem with the Alembic migration: in the migration, the table must also be created with the PasswordType column, not String or any other type
request.db.query(models.User.password).filter(models.User.email == email).first()
Of course it works with a different DB (SQLite3).
The source of the problem is that the password is
sqlalchemy.Column(sqlalchemy_utils.types.password.PasswordType(schemes=['pbkdf2_sha512']), nullable=False)
I really don't know how to solve it
I'm using psycopg2 | PasswordType not supported in Postgres | 0 | 1 | 0 | 84 |
31,734,447 | 2015-07-30T21:33:00.000 | 0 | 0 | 0 | 0 | python-2.7,selenium-webdriver | 31,735,197 | 2 | false | 1 | 0 | You can define it only while initializing the driver. So to use a new path you should call driver.quit() and start the driver again. | 2 | 0 | 0 | So I am trying to download multiple Excel links to different file paths, depending on the link, using Selenium.
I am able to set up the FirefoxProfile to download all links to a certain single path, but I can't change the path on the fly as I try to download different files into different file paths. Does anyone have a fix for this?
self.fp = webdriver.FirefoxProfile()
self.fp.set_preference("browser.download.folderList", 2)
self.fp.set_preference("browser.download.showWhenStarting", False)
self.fp.set_preference("browser.download.dir", "C:\\SOURCE FILES\\BACKHAUL")
self.fp.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/vnd.ms-excel")
self.driver = webdriver.Firefox(firefox_profile=self.fp)
This code will set the path I want once. But I want to be able to set it multiple times while running one script. | Changing FirefoxProfile() preferences more than once using Selenium/Python | 0 | 0 | 1 | 667 |
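A sketch of the approach from the first answer: tear the driver down and rebuild it with a fresh profile whenever the download directory needs to change. The paths are hypothetical:

from selenium import webdriver

def make_driver(download_dir):
    fp = webdriver.FirefoxProfile()
    fp.set_preference("browser.download.folderList", 2)
    fp.set_preference("browser.download.dir", download_dir)
    fp.set_preference("browser.helperApps.neverAsk.saveToDisk",
                      "application/vnd.ms-excel")
    return webdriver.Firefox(firefox_profile=fp)

driver = make_driver("C:\\downloads\\a")
# ... download the first batch of files ...
driver.quit()
driver = make_driver("C:\\downloads\\b")  # new path requires a new driver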
31,735,552 | 2015-07-30T23:06:00.000 | 1 | 0 | 1 | 1 | python,memory,amazon-web-services,subprocess | 31,735,713 | 1 | true | 0 | 0 | the process gets terminated on my mac due to "Low Swap" which I believe refers to lack of memory
Swap space is disk space the operating system uses as an overflow area when main memory (RAM) fills up.
When a process reads a file, the OS puts it in main memory (caches and RAM); when it's done, that memory can simply be dropped.
However, writes are different: the changes have to be recorded. If you are dirtying memory faster than it can be flushed (say, writing to a different file every millisecond), RAM and the CPU caches reach capacity and the least recently used (LRU) pages get pushed out to swap. Swap lives on disk and is finite, so it is possible to exhaust it, at which point the OS starts terminating processes.
Is it possible that I have some sort of memory leak in the script even though I'm hardly doing anything?
Possibly.
Is there any way that I can reduce the memory usage to run this successfully?
One way is to think about how you are managing the file(s). Reads will not hurt swap because the cached data can just be dropped, without the need to save it. You might want to explicitly save the file (closing and reopening the file should work) after a certain amount of information has been processed or a certain amount of time has gone by, thus removing its dirty pages from memory.
I have a python file that needs to read/write to stdin/stdout many many times (hundreds of thousands) for a large data science project. I know this is not ideal, but I don't have a choice in this case.
After about an hour of running (close to halfway completed), the process gets terminated on my mac due to "Low Swap" which I believe refers to lack of memory. Apart from the read/write, I'm hardly doing any computing and am really just trying to get this to run successfully before going any farther.
My Question: Does writing to stdin/stdout a few hundred thousand times use up that much memory? The file basically needs to loop through some large lists (15k ints) and do it a few thousand times. I've got 500 gigs of hard drive space and 12 gigs of RAM and am still getting the errors. I even spun up an EC2 instance on AWS and STILL had memory errors. Is it possible that I have some sort of memory leak in the script even though I'm hardly doing anything? Is there any way that I can reduce the memory usage to run this successfully?
Appreciate any help. | Python Process Terminated due to "Low Swap" When Writing To stdout for Data Science | 1.2 | 0 | 0 | 70 |
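A small sketch of the periodic-save idea from the answer above, assuming the script owns an ordinary file handle; records and process are hypothetical placeholders:

import os

out = open('results.txt', 'w')
for i, record in enumerate(records):
    out.write(process(record))
    if i % 10000 == 0:
        out.flush()
        os.fsync(out.fileno())  # force dirty pages to disk, easing memory pressure
out.close()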
31,737,012 | 2015-07-31T02:07:00.000 | 0 | 0 | 1 | 0 | python,c++,mfc,gil | 31,738,521 | 1 | false | 0 | 1 | I solved the problem and would like to paste it here in case someone else needs it.
After A calls Py_Initialize(), PyEval_InitThreads() is also needed.
After A calls Python function B, GIL needs to be released by PyEval_ReleaseThread(PyThreadState_Get()). Then thread E can startup.
After E is over, PyGILState_Ensure() should be called in C++ to get back the lock. | 1 | 0 | 0 | I'm designed a small tool with MFC and python.
In this program, I use C++ together with the Python API.
I need:
C++ function A calls Py_Initialize(), and then calls Python function B via the Python API.
In the Python script, function B starts a new Python thread E, which will create a new file and write some output to this file.
Then C++ function F calls Py_Finalize(). End.
But things work unexpectedly.
B can be called, but E will not start. After calling F, Py_Finalize() is called, then E will start and create the new file.
I'm wondering what's wrong with this? It seems Python is blocked by C++. Is this related to the Python GIL? If so, what should I do? | Create a new Thread in a python script, which is embedded in MFC | 0 | 0 | 0 | 93 |
31,737,396 | 2015-07-31T02:59:00.000 | 1 | 0 | 0 | 0 | python,json,rest,cassandra,data-migration | 31,738,908 | 1 | false | 0 | 0 | Cassandra 2.2.0 adds the ability to insert and retrieve data as JSON, so you can use that.
For example, to insert JSON data:
CREATE TABLE test.example (
id int PRIMARY KEY,
id2 int,
id3 int
) ;
cqlsh > INSERT INTO example JSON '{"id":10,"id2":10,"id3":10}' ;
To select data as JSON:
cqlsh > SELECT json * FROM example;
[json]
{"id": 10, "id2": 10, "id3": 10} | 1 | 1 | 0 | I need to fetch the data using REST Endpoints(returns JSON file) and load the data(JSON) into Cassandra cluster which is sitting on AWS.
This is a migration effort, which involves millions of records. No access to source DB. Only access to REST End points.
What are the options I have?
What is the programming language to use?(I am thinking of Python or any scripting language)?
Since I will have to migrate millions of records, I would like to process the jobs concurrently.
What are the challenges?
Thanks for the time and help.
--GK. | Data Migration to Cassandra using REST End points | 0.197375 | 1 | 1 | 129 |
31,738,875 | 2015-07-31T05:45:00.000 | 0 | 1 | 0 | 1 | python-2.7,cron | 31,787,179 | 1 | true | 0 | 0 | The best way out is to create this daemon as a child thread so it automatically gets killed when the parent process is killed | 1 | 0 | 0 | I have a Python file which starts 2 threads: thread 1 is a daemon process and thread 2 does other stuff. What I want is that if thread 2 is stopped, thread 1 should also stop. I was advised to do this with a cron job/runit. I am completely new to these, so can you please help me achieve this goal?
Thanks | Killing a daemon process through cron job/runnit | 1.2 | 0 | 0 | 48 |
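A minimal sketch of the child-thread approach from the answer above; background_work and main_work are hypothetical placeholders:

import threading

t = threading.Thread(target=background_work)
t.daemon = True  # daemon threads die automatically when the main thread exits
t.start()

main_work()  # when this returns, the interpreter exits and t is killed with it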
31,740,079 | 2015-07-31T07:07:00.000 | 0 | 0 | 0 | 0 | python,django,authentication,django-allauth | 31,740,408 | 1 | false | 1 | 0 | In django admin, update your Site object's domain to your server's ip or your domain name. | 1 | 0 | 0 | I'm using Django-Allauth, but when I upload my project in the server and click on the button to login via Google or Facebook, I redirect to http://127.0.0.1:8001/accounts/google/login/callback/?state=*****
instead of http://example.com/accounts/google/login/callback/?state=*****
I am newbie, so please help me in-depth step by step. | Using Allauth and Redirecting to irrelevent URL | 0 | 0 | 0 | 54 |
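A quick sketch of the same fix from the Django shell, assuming the default SITE_ID of 1 and a hypothetical domain:

from django.contrib.sites.models import Site

site = Site.objects.get(pk=1)
site.domain = 'example.com'
site.name = 'example.com'
site.save()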
31,740,127 | 2015-07-31T07:09:00.000 | 0 | 0 | 0 | 1 | python,sql,django,celery | 31,741,461 | 2 | false | 1 | 0 | The only time you are going to run into issues while using a DB with Celery is when you use the database as the backend for Celery, because it will continuously poll the DB for tasks. If you use a normal broker you should not have issues.
Since requesting from the URL is limited, the total process of updating the whole database with my task will take about 40 minutes and I will run the task every 2 hours.
If I check a view of my django project, which also requests information from the database while the task is asynchronously running in the background, will I run into any problems? | Will my database connections have problems? | 0 | 0 | 0 | 49 |
31,740,127 | 2015-07-31T07:09:00.000 | 0 | 0 | 0 | 1 | python,sql,django,celery | 31,740,391 | 2 | false | 1 | 0 | While requesting information from your database you are reading it, and in your Celery task you are writing data into it. You can write only once at a time, but you can read as many times as you want, since reading does not take an exclusive lock on the database.
Since requesting from the URL is limited, the total process of updating the whole database with my task will take about 40 minutes and I will run the task every 2 hours.
If I check a view of my django project, which also requests information from the database while the task is asynchronously running in the background, will I run into any problems? | Will my database connections have problems? | 0 | 0 | 0 | 49 |
31,740,332 | 2015-07-31T07:20:00.000 | 0 | 0 | 0 | 0 | python,opencv | 37,935,771 | 1 | false | 0 | 0 | This is happening because SIFT (which is considered patent-encumbered or non-free) has been moved from the opencv package to the opencv "contrib" repo. You need a version of cv2 that has been compiled specifically with the contrib included.
Alternatively, in cv2, use ORB instead of SIFT. | 1 | 0 | 1 | Currently I am using a Windows 8.1 64-bit machine and Anaconda as the IDE. I am getting the error shown below; please help me figure out how to update the module. import cv2 works fine, but not the SIFT features.
File "C:\Users\conquistador\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 71, in execfile exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Users/conquistador/Documents/opencv/test8.py", line 15, in sift = cv2.xfeatures2d.SIFT()
AttributeError: 'module' object has no attribute 'xfeatures2d' | Regarding opencv 3.0.0 and updating sift feature module in anaconda IDE | 0 | 0 | 0 | 366 |
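A minimal sketch of the ORB fallback suggested in the answer above, using the standard OpenCV 3 API; the image path is hypothetical:

import cv2

img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(img, None)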
31,740,561 | 2015-07-31T07:32:00.000 | 2 | 0 | 1 | 0 | python,random,parallel-processing,joblib | 33,232,261 | 1 | false | 0 | 0 | This is expected, although unfortunate.
The reason is that joblib (based on the standard multiprocessing Python tool) relies on forking under Unix. Forking creates the exact same processes and thus the same pseudo-random number generation.
The right way to solve this problem is to pass the function that you are calling in parallel a seed for each call, e.g. a randomly generated integer. That seed is then used inside the function to seed the local random number generation.
Currently I solved the problem by assigning random seeds for different cores. Is there any simple way to solve this problem? | Random number generator using Joblib | 0.379949 | 0 | 0 | 591 |
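A minimal sketch of the per-call seeding pattern from the accepted answer above:

import numpy as np
from joblib import Parallel, delayed

def draw(seed):
    rng = np.random.RandomState(seed)  # a local generator seeded per call
    return rng.rand()

results = Parallel(n_jobs=4)(delayed(draw)(s) for s in range(8))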
31,740,878 | 2015-07-31T07:50:00.000 | 3 | 0 | 1 | 1 | java,python,ubuntu | 31,741,486 | 2 | false | 0 | 0 | When you have downloaded a package from Oracle site, unpack it and copy its contents into for example /usr/lib/jvm/jdk1.8.0_51/.
Then, type the following commands:
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_51/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.8.0_51/bin/javac" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.8.0_51/bin/javaws" 1
and in the end:
sudo update-alternatives --config java
and choose the number of your Oracle Java installation. | 1 | 0 | 0 | I am going to develop some functionality using Python and I need to set up PyCharm, but it depends on some dependencies like Oracle's JDK / OpenJDK.
How can I set up these two? | How to setup Pycharm and JDK on ubuntu | 0.291313 | 0 | 0 | 3,954 |
31,742,679 | 2015-07-31T09:26:00.000 | 5 | 0 | 1 | 0 | python | 31,742,882 | 1 | true | 0 | 0 | len() is read-only so I'm not sure what you mean by thread-safe. It will not make your program crash if the set is being updated by a different thread, at least.
If you are waiting for a set to reach N items, before you start doing something in the thread, you might end up with >N items, since by the time you start your work, new items might have been added. No guarantees there, obviously.
Also, if you are removing things from the set in the second thread, you don't have any guarantee that you have N items, even if that's what len() returned.
Finally, if you want to post a new question describing which problem you are trying to solve using this pattern you might get more constructive answers. | 1 | 2 | 0 | Python 2.7. Set imported using "import sets"
Is it safe for one thread to fill the set with objects using the add function, and for another thread to wait until the set has reached a required size by calling the len function on the set? No protection is in place.
EDIT: "until the set has reached at least a specified size" | Is python len function on set thread safe? | 1.2 | 0 | 0 | 1,026 |
31,745,699 | 2015-07-31T12:01:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,command-line,subprocess | 31,745,847 | 2 | false | 1 | 0 | Give the absolute path of the java location.
On my system the path is C:\Program Files\Java\jdk1.8.0_45\bin\java.exe | 2 | 1 | 0 | I am trying to get Python to call a Java program using a command that works when I enter it into the command line.
When I have Python try it with subprocess or os.system, it says:
'java' is not recognized as an internal or external command, operable
program or batch file.
From searching, I believe it is because when executing through Python, it will not be able to find java.exe like a normal command would. | Python will not execute Java program: 'java' is not recognized | 0 | 0 | 0 | 1,290 |
31,745,699 | 2015-07-31T12:01:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,command-line,subprocess | 61,620,608 | 2 | false | 1 | 0 | You have to set the PATH variable to point to the java location.
import os
os.environ["PATH"] += os.pathsep + os.pathsep.join([java_env])
java_env will be a string containing the directory that holds java.
(tested on python 3.7) | 2 | 1 | 0 | I am trying to get Python to call a Java program using a command that works when I enter it into the command line.
When I have Python try it with subprocess or os.system, it says:
'java' is not recognized as an internal or external command, operable
program or batch file.
From searching, I believe it is because when executing through Python, it will not be able to find java.exe like a normal command would. | Python will not execute Java program: 'java' is not recognized | 0 | 0 | 0 | 1,290 |
31,746,829 | 2015-07-31T12:59:00.000 | 1 | 0 | 0 | 0 | python,sqlalchemy | 31,967,070 | 1 | true | 0 | 0 | One of the mapped class's fields had an onupdate attribute, which caused it to expire whenever the object was changed.
The solution is to call session.refresh(myobj) between the flush and the call to session.expunge(). | 1 | 1 | 0 | I'm using a session with autocommit=True and expire_on_commit=False. I use the session to get an object A with a foreign key that points to an object B. I then call session.expunge(a.b); session.expunge(a).
Later, when trying to read the value of b.some_datetime, SQLAlchemy raises a DetachedInstanceError. No attribute has been configured for lazy-loading. The error happens randomly.
How is this possible? I assumed that all scalar attributes would be eagerly loaded and available after the object is expunged.
For what it's worth, the objects get expunged so they can be used in another thread, after all interactions with the database are over. | DetachedInstanceError: SQLAlchemy wants to refresh the DateTime attribute of an expunged instance | 1.2 | 1 | 0 | 573 |
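A small sketch of the fix described above, with hypothetical model and variable names:

a = session.query(A).get(some_id)
session.flush()
session.refresh(a)    # reload expired attributes (e.g. the onupdate DateTime)
session.refresh(a.b)
session.expunge(a.b)  # now safe to use outside the session
session.expunge(a)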
31,748,596 | 2015-07-31T14:26:00.000 | 1 | 1 | 0 | 0 | python,panda3d | 32,128,178 | 1 | false | 1 | 0 | This depends on your operating system. Panda3D uses the system's python on OS X and Linux and should "just work".
For Windows Panda3D installs its own copy of Python into Panda3D's install directory (defaults to C:\Panda3D I think), and renames the executable to ppython to prevent name collisions with any other python installs you might have. In your editor you have to change which interpreter it uses to the ppython.exe in the panda3d directory. | 1 | 0 | 0 | I just installed Panda3D, and I can run the example programs by double clicking them, but I can't run them from IDLE or Sublime.
I get errors like ImportError: No module named direct.showbase.ShowBase
I saw some people bring this up before and the responses suggested using ppython; I can't figure out how to run that from Sublime, and I really like the autocomplete function there.
How can I either configure the Python 2.7 version that I already have to run Panda3D programs, or run ppython from Sublime? | What do I need to do to be able to use Panda3D from my Python text editors? | 0.197375 | 0 | 0 | 155 |
31,748,654 | 2015-07-31T14:29:00.000 | 0 | 0 | 0 | 0 | python,sql,sqlite,sql-update | 31,748,808 | 2 | false | 0 | 0 | You have 2 approaches:
Update the current rows inside main_table with data from temp_table. The relation will be based on ID.
Add a column to temp_table to mark all rows that have to be transferred to main_table, or add an additional table to store the IDs that have to be transferred. Then delete all rows that have to be transferred from main_table and insert the corresponding rows from temp_table, using the marker column or the new table. | 1 | 1 | 0 | I have a question about SQL, specifically SQLite3. I have two tables; let's name them main_table and temp_table. These tables are based on the same relational schema, so they have the same columns but different rows (values).
Now what I want to do:
For each row of the main_table I want to replace it if there is a row in a temp_table with the same ID. Otherwise I want to keep the old row in the table.
I was thinking about using some joins, but that does not provide what I want.
Would you give me some advice?
EDIT: ADDITIONAL INFO:
I would like to avoid writing out all the columns, because those tables contain tens of attributes, and since I have to update all columns it shouldn't be necessary to write out all of them. | SQL - update main table using temp table | 0 | 1 | 0 | 334 |
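Not from the answers above, but a common SQLite idiom for exactly this replace-by-ID pattern; it avoids listing columns, assuming both tables share the same schema and primary key (note it also inserts temp rows whose IDs are new):

import sqlite3

conn = sqlite3.connect('mydb.sqlite')  # hypothetical database file
conn.execute('INSERT OR REPLACE INTO main_table SELECT * FROM temp_table')
conn.commit()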
31,748,722 | 2015-07-30T14:32:00.000 | 3 | 0 | 1 | 0 | python,python-3.x | 31,748,774 | 4 | true | 0 | 0 | The easiest way (I find) to identify whether a script was written for Python 3 vs Python 2 is to find a print statement: if the print statement has parentheses around the argument, your script is made for Python 3; if not, it is made for Python 2.
E.g. Python 2 code would look like:
print "Hello World"
whereas Python 3 code would look like:
print("Hello World") | 1 | 0 | 0 | I am new to Python, and decided to install Anaconda and Python v3. When I try to run examples I find on-line, they often don't work even when unchanged. I assume that the reason is sometimes that I try to run a version 2 script. What are some easy markers to look for in the code to tell if that is the case? | What to look after to decide if Python code is v2 or v3 | 1.2 | 0 | 0 | 213 |
31,754,740 | 2015-07-31T20:34:00.000 | 2 | 0 | 1 | 0 | python,nginx,include,virtualenv,uwsgi | 31,776,895 | 1 | true | 0 | 0 | I solved the problem quite easily by symlinking the package of interest in .env/lib/python2.7/site-packages. I originally tried to symlink the entire project folder but that didn't work as it couldn't find the package.
It seems that my uWSGI/Nginx just follows the virtualenv's version of pythonpath, so whatever I configure there is used.
It will be a bit of a pain to have to remember to symlink every package, but at least I only have to do it once for each package.
I'm using PyDev, and it was masking the issue because I was using the default Python interpreter, not the one in virtualenv. Once I changed that, it was easier to solve. | 1 | 4 | 0 | I have a somewhat intricate project setup consisting of several components that work together. Each component is a separate Python project that is hosted as a uWSGI application behind an Nginx proxy. The components interact with each other and with the outside world through the proxy.
I noticed myself about to cut-and-paste some code from one component to another, as they perform similar functions, but interact with different services. Obviously, I want to avoid this, so I am going to pull out common functionality and put it into a separate 'library' project, to be referenced by the different components.
I am running these apps in a virtual environment (using virtualenv), so it should theoretically be easy to simple drop the library project into .env/includes.
However, I have a bit of a strange setup. First of all, I am running the project from /var/www (i.e. uWSGI hosts the apps from here), but the projects actually are present in another source controlled directory. For various reasons, I don't want to move them, so I created symlinks for the project directories in /var/www. This works fine. However, now I have a potential problem, namely, where do I put the library project (which is currently in the same directory as the other components), which I also want to symlink?
Do I symlink it in .env/includes? And if so, how should I reference the library from my other components? Do I reference it from sys.path or as a sibling directory? Is Nginx/uWSGI with virtualenv following the symlinks and taking into account the actual directory or is it blindly assuming that everything is in /var/www?
I have not tried either approach because there seems to be a massive scope for problems, so I wanted to get some input first. Needless to say, I am more than a little confused. | Virtualenv: How to make a custom Python include shared by multicomponents hosted by uWSGI | 1.2 | 0 | 0 | 70 |
31,755,078 | 2015-07-31T20:57:00.000 | 1 | 0 | 0 | 0 | python,abaqus,odb | 31,808,849 | 1 | true | 0 | 0 | Indeed it's possible. You should check the frequency of your output in the field output section inside the step module. You can configure it in terms of step intervals of time, number of increments, exact amount of outputs, etc.
If you're running your analysis from a inp file, you can add FREQ = X after the *STEP command. This way Abaqus will write on the ODB file every X increments. | 1 | 0 | 0 | I'm writing an application for topology optimization within the ABAQUS PDE. As I have quite some iterations, in each of which FEM is performed, a lot of data is written to the system -- and thus a lot of time is lost on I/O.
Is it possible to limit the amount of information that gets written into the ODB file? | Limited ODB output in ABAQUS | 1.2 | 1 | 0 | 571 |
31,755,276 | 2015-07-31T21:14:00.000 | 0 | 0 | 0 | 0 | python,mongodb,flask,pymongo | 31,865,825 | 1 | false | 1 | 0 | Well, it ended up being an issue with the String specifying the working directory. Once it was resolved I was able to connect to the database. | 1 | 0 | 0 | I have a web application that uses flask and mongodb. I recently downloaded a clone of it from github onto a new Linux machine, then proceeded to run it. It starts and runs without any errors, but when I use a function that needs access to the database, I get this error:
File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 533, in __ getitem__
raise IndexError("no such item for Cursor instance")
IndexError: no such item for Cursor instance
This isn't happening on any of the other computers running this same application. Does anybody know what's going on? | Cursor Instance Error when connecting to mongo db? | 0 | 1 | 0 | 1,948 |
31,759,266 | 2015-08-01T07:05:00.000 | 1 | 0 | 0 | 0 | python,sqlalchemy | 31,759,707 | 1 | false | 1 | 0 | I figured it out. Basically one needs to use the like operator combined with or_().
carl | 1 | 0 | 0 | I have a string of categories stored in a table. The categories are separated by a ',', so that I can turn the string into a list of strings as
category_string.split(',')
I now want to select all elements of a sql table which have one of the the following categories [catergory1, catagory2].
I have many such comparisons and the list of categories to compare with is not necessarily 2 elements long, so I would need a comparison of elements of two lists. I know that list comparisons are done as
Table.categories.in_(category_list)
in SQLAlchemy, but I also need to convert a table string element into a list and compare the list elements.
Any ideas?
thanks
carl | sql alchemy filter: string split and comparison of list elements | 0.197375 | 1 | 0 | 1,109 |
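A sketch of what the (truncated) answer above is pointing at: build one LIKE clause per wanted category and OR them together. category_list is hypothetical, and session/Table come from the question's setup:

from sqlalchemy import or_

category_list = ['category1', 'category2']
query = session.query(Table).filter(
    or_(*[Table.categories.like('%{}%'.format(c)) for c in category_list]))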
31,760,059 | 2015-08-01T08:53:00.000 | 0 | 0 | 0 | 0 | android,python,ios,django | 31,760,094 | 2 | false | 1 | 0 | Sure. I've done this for my first app and for others since. The backend technology is totally up to you, so feel free to use whatever you like.
The connection between the backend and your apps should (but doesn't have to) be something JSON-based. Standard REST works fine; WebSockets also work but have some issues on iOS.
I have used Django for a while, but the tutorials are really lacking, so can anyone point me to something that will help me understand how to show data handled by Django models on a front-end developed in Java for an Android device (that is, by using XML, I suppose)? | Is it possible to develop the back-end of a native mobile app using the python powered framework Django? | 0 | 0 | 0 | 3,574 |
31,762,577 | 2015-08-01T13:50:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 31,762,715 | 1 | false | 0 | 1 | That sounds right. There may be another format or two that native Tkinter supports, but it's very limited. There's a more up-to-date version of PIL named Pillow that you might want to look into. It doesn't seem like PIL is being actively maintained, last I looked. If you want to work with JPEG for example, you need PIL (or Pillow). | 1 | 0 | 0 | I appreciate this is a VERY novice question but I just want to check in regard to Tkinter Photoimage class, is it only GIF/PGM/PPM images that it can read from files and nothing else unless I download the Python Image Library.
If that's the case, I now know exactly where I went wrong in the code I'm writing, i.e. the wrong file format. | Python Tkinter PhotoImage file formats supported? | 0 | 0 | 0 | 2,941 |
31,762,911 | 2015-08-01T14:27:00.000 | 4 | 1 | 1 | 0 | vim,python-mode | 31,763,008 | 1 | true | 0 | 0 | <C-c> and <C-C> both mean Ctrl+C.
I'm sure you can infer how to type the others.
See :help key-notation. | 1 | 2 | 0 | I have come across the following key combinations(...i assume) in vim pymode documentation. <C-c>, <C-C>, <C-X><C-O>, <C-P>/<C-N>. How are they to be interpreted? | Vim pymode: meaning of key combination | 1.2 | 0 | 0 | 1,336 |
31,763,800 | 2015-08-01T16:33:00.000 | 5 | 0 | 0 | 0 | python,pandas,dataframe | 31,763,839 | 1 | true | 0 | 0 | df.reset_index(drop=True, inplace=True) | 1 | 3 | 1 | I'm fairly sure this is a duplicate, but suppose I have a pandas DataFrame and I've sorted the rows based on the values of some column. Originally the indices were the integers 0, 1, …, n-1 but now they're out of order. How do I reassign these indices to be in the proper order for the new sorted DataFrame? | Reassigning index in pandas DataFrame | 1.2 | 0 | 0 | 3,763 |
31,764,006 | 2015-08-01T16:54:00.000 | 0 | 0 | 1 | 0 | output,ipython-notebook | 33,114,002 | 2 | false | 0 | 0 | This isn't exactly what you're looking for, but if you just want to see the output of multiple variables, you can list the expressions with commas in between: a, b, c would display (1, 2, 4).
a, b, c = 1, 2, 4
a
b
c
would only display 4 in the output cell, but I would like it to display
1
2
4
Is there a way to do this? I would also like to be able to selectively suppress some lines (by using ;?). | IPython Notebook display every line output without print | 0 | 0 | 0 | 6,196 |