Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
24,627,525 | 2014-07-08T08:51:00.000 | 0 | 0 | 1 | 1 | python,pip | 49,551,968 | 28 | false | 0 | 0 | For me this problem appeared when I changed the environment path to point to v2.7 which was initially pointing to v3.6. After that, to run pip or virtualenv commands, I had to python -m pip install XXX as mentioned in the answers below.
So, in order to get rid of this, I ran the v2.7 installer again, chose the Change option, made sure that the "add to PATH" option was enabled, and let the installer run. After that everything works as it should. | 15 | 231 | 0 | Searching the net this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces ? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0 | 0 | 0 | 338,555 |
24,627,525 | 2014-07-08T08:51:00.000 | 1 | 0 | 1 | 1 | python,pip | 49,562,184 | 28 | false | 0 | 0 | I have chosen to install Python for Windows (64bit) not for all users, but just for me.
Reinstalling Python-x64 and checking the advanced option "for all users" solved the pip problem for me. | 15 | 231 | 0 | Searching the net this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces ? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0.007143 | 0 | 0 | 338,555 |
24,627,525 | 2014-07-08T08:51:00.000 | 3 | 0 | 1 | 1 | python,pip | 51,133,921 | 28 | false | 0 | 0 | I had the same issue and did a pip upgrade using the following, and now it works fine.
python -m pip install --upgrade pip | 15 | 231 | 0 | Searching the net this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces ? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0.021425 | 0 | 0 | 338,555 |
24,627,525 | 2014-07-08T08:51:00.000 | 0 | 0 | 1 | 1 | python,pip | 51,287,625 | 28 | false | 0 | 0 | I had this issue and the other fixes on this page didn't fully solve the problem.
What did solve the problem was going in to my system environment variables and looking at the PATH - I had uninstalled Python 3 but the old path to the Python 3 folder was still there. I'm running only Python 2 on my PC and used Python 2 to install pip.
Deleting the references to the nonexistent Python 3 folders from PATH in addition to upgrading to the latest version of pip fixed the issue. | 15 | 231 | 0 | Searching the net this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces ? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0 | 0 | 0 | 338,555 |
24,627,525 | 2014-07-08T08:51:00.000 | 0 | 0 | 1 | 1 | python,pip | 53,663,298 | 28 | false | 0 | 0 | I had a simpler solution. Using @apple's way, rename main.py to pip.py, then put it in your Python version's Scripts folder and add the Scripts folder to your PATH to access it globally. If you don't want to add it to PATH, you have to cd to Scripts and then run the pip command. | 15 | 231 | 0 | Searching the net this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces ? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0 | 0 | 0 | 338,555 |
24,627,525 | 2014-07-08T08:51:00.000 | 1 | 0 | 1 | 1 | python,pip | 38,163,927 | 28 | false | 0 | 0 | My exact problem was (Fatal error in launcher: Unable to create process using '"') on windows 10. So I navigated to the "C:\Python33\Lib\site-packages" and deleted django folder and pip folders then reinstalled django using pip and my problem was solved. | 15 | 231 | 0 | Searching the net this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces ? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0.007143 | 0 | 0 | 338,555 |
24,627,525 | 2014-07-08T08:51:00.000 | 0 | 0 | 1 | 1 | python,pip | 58,481,723 | 28 | false | 0 | 0 | I had a similar problem when I reinstalled my Python by uninstalling Python 3.7 and installing Python 3.8. But I solved it by removing the previous version's Python directory. For me it was located here:
C:\Users\your-username\AppData\Local\Programs\Python
I deleted the folder named Python37 (the previous version) and kept Python38 (the updated version). This worked because Python itself seems to have trouble finding the right directory for your Python scripts. | 15 | 231 | 0 | Searching the net this seems to be a problem caused by spaces in the Python installation path.
How do I get pip to work without having to reinstall everything in a path without spaces ? | Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe"" | 0 | 0 | 0 | 338,555 |
24,627,919 | 2014-07-08T09:10:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,import | 24,628,844 | 3 | false | 1 | 0 | You are not meant to put models directly on a project level in Django. Every model has to be associated with a particular app. On the other hand, you can import models between apps.
If you feel a need for project level models, it just means you haven't partitioned your functionality into apps properly. There shouldn't be any reason to have "project level models" (or "project level views" for that matter). You just need to split the functionality into separate apps.
Let's say you are designing an intranet website for a school. You would have one app that deals with students' accounts, and another app generating timetables, and yet another one for an internal message board, etc.. Every app defines its own models (there are no "project level models"), but apps can import each others models (so message board posts can have a ForeignKey field pointing at student from the "students" app). | 1 | 2 | 0 | I would like to know if there is a way to include/import the models.py from the project directory to multiple apps without copying the model in each app. Thank you! | Django include models.py from project to multiple apps | 0 | 0 | 0 | 2,218 |
24,629,867 | 2014-07-08T10:48:00.000 | 1 | 0 | 1 | 0 | python,regex | 24,629,914 | 4 | false | 0 | 0 | Clarification: I'm assuming you want to parse the numbers, not just match them.
Why use regexes when a simple split will work just fine?
'1.3.4.*'.split('.')
# => ['1', '3', '4', '*']
If you want to ensure that there is at least one dot in the string, check the array length to ensure it is larger than 1. | 1 | 0 | 0 | How can I write a regex for parsing version numbers. I want to match numbers like: 1.000, 1.0.00, 1.0.0.000 but not integers 1, 10,100 | Regex for parsing version number | 0.049958 | 0 | 0 | 2,778 |
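A minimal sketch of that split-based check (the helper name and exact rules are illustrative, not from the answer):

```python
def looks_like_version(s):
    # at least one dot, and every dot-separated piece is digits or a '*' wildcard
    parts = s.split('.')
    return len(parts) > 1 and all(p.isdigit() or p == '*' for p in parts)

print(looks_like_version('1.0.00'))   # True
print(looks_like_version('100'))      # False (no dot)
```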
24,635,660 | 2014-07-08T15:20:00.000 | 2 | 1 | 1 | 0 | python,ironpython | 24,636,635 | 1 | true | 0 | 1 | You don't need to know any other languages - modulo a few implementation differences, Python is Python is Python. You will, however, need to know the Microsoft windowing library, with which I believe you will have to interface to build a GUI. | 1 | 2 | 0 | I am planning on using IronPython to develop a GUI interface for some python code. Do I need to know any other programming languages other than python. Also if not are there any other GUI packages/addon's to python that only use python to implement and get the final product working? | Does IronPython just use Python or to use IronPython do I need to know other programming languages other than python? | 1.2 | 0 | 0 | 134 |
24,638,043 | 2014-07-08T17:26:00.000 | 2 | 0 | 1 | 0 | python,matplotlib,pycharm | 28,782,077 | 1 | true | 0 | 0 | It doesn't look like you can do it: PyCharm does not use the 'qtconsole' of ipython, but either a plain text console (when you open the "Python console" tab in PyCharm) or ipython notebook (when you open a *.ipynb file). Moreover, PyCharm is done in Java, while to have an interactive plot Matplotlib needs to have a direct connection/knowledge/understanding of the underlying graphic toolkit used... Matplotlib doesn't support any Java based backend, so i guess Pycharm would need to "bridge" the native underlying toolkit... | 1 | 8 | 1 | Is there a way to allow embedded Matplotlib charts in the IPython console that is activated within PyCharm? I'm looking for similar behavior to what can be done with the QT console version of IPython, i.e. ipython qtconsole --matplotlib inline | Embedded charts in PyCharm IPython console | 1.2 | 0 | 0 | 2,490 |
24,639,577 | 2014-07-08T18:52:00.000 | 2 | 0 | 1 | 0 | wxpython,pygame | 24,644,067 | 1 | false | 0 | 1 | The PyEmbeddedImage class has a GetData method (or Data property) that can be used to fetch the raw data of the embedded image, in PNG format. | 1 | 1 | 0 | I have used img2py to convert an image into a .py file. But how to use that converted file in pygame. Is there any specific code for it? | How to use/decompress the file made by img2py | 0.379949 | 0 | 0 | 748 |
24,643,474 | 2014-07-09T00:08:00.000 | 13 | 0 | 1 | 0 | python,spyder | 42,282,994 | 4 | false | 0 | 0 | This is a variation based on Jose's solution of creating a .bat file that eventually worked for me.
Create a spyder.bat file with the following content:
start C:\YourPath\Anaconda2\pythonw.exe C:\YourPath\Anaconda2\cwp.py C:\YourPath\Anaconda2 "C:/YourPath/Anaconda2/pythonw.exe" "C:/YourPath/Anaconda2/Scripts/spyder-script.py" %1
Change YourPath to your actual Anaconda path. Other solutions didn't work for me, and the code here is inspired by Spyder shortcut's target. | 2 | 21 | 0 | I've recently installed Anaconda (using the default settings) on Windows 7. When I try to open a .py file by double-clicking it, I get the Open with... option. How can I set the default program as Spyder? | Set Spyder as default Python | 1 | 0 | 0 | 30,052 |
24,643,474 | 2014-07-09T00:08:00.000 | 3 | 0 | 1 | 0 | python,spyder | 36,169,520 | 4 | false | 0 | 0 | Use the "Default Programs" interface and select the executable for your Spyder,try the "Set Associations" menu and use browse to select your executable.
The key is the executable path (set the path to where Anaconda is installed):
C:\path\Anaconda2\Scripts\spyder.exe
just need to be patient, to wait for a few seconds. | 2 | 21 | 0 | I've recently installed Anaconda (using the default settings) on Windows 7. When I try to open a .py file by double-clicking it, I get the Open with... option. How can I set the default program as Spyder? | Set Spyder as default Python | 0.148885 | 0 | 0 | 30,052 |
24,647,400 | 2014-07-09T07:12:00.000 | 5 | 0 | 0 | 0 | python,nltk,stemming | 54,384,472 | 7 | false | 0 | 0 | Stemming is all about removing suffixes(usually only suffixes, as far as I have tried none of the nltk stemmers could remove a prefix, forget about infixes).
So we can clearly call stemming as a dumb/ not so intelligent program. It doesn't check if a word has a meaning before or after stemming.
For example, if you try to stem "xqaing", although it is not a word, it will remove "-ing" and give you "xqa".
So, in order to use a smarter system, one can use lemmatizers.
Lemmatizers use well-formed lemmas (words) in the form of WordNet and dictionaries.
So it always returns and takes a proper word. However, it is slow because it goes through all words in order to find the relevant one. | 1 | 43 | 1 | I tried all the nltk methods for stemming but it gives me weird results with some words.
Examples
It often cuts the end of words when it shouldn't:
poodle => poodl
article => articl
or doesn't stem very well:
easily and easy are not stemmed to the same word
leaves, grows, fairly are not stemmed
Do you know other stemming libs in python, or a good dictionary?
Thank you | What is the best stemming method in Python? | 0.141893 | 0 | 0 | 74,285 |
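A small sketch contrasting the stemmer and lemmatizer behaviour described in the answer above, assuming nltk is installed and the WordNet data has been downloaded (the example words come from the question):

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(stemmer.stem("easily"))          # 'easili' -- a stem, not a dictionary word
print(stemmer.stem("poodle"))          # 'poodl'
print(lemmatizer.lemmatize("leaves"))  # 'leaf' -- the lemmatizer returns a real word
```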
24,649,084 | 2014-07-09T08:46:00.000 | 3 | 1 | 0 | 0 | python,performance,raspberry-pi,floating-point-precision | 24,649,933 | 2 | false | 0 | 0 | You can force single precision floating point calculations using numpy.
However, I would be very surprised if using single precision floating point worked out any faster than double precision: the raspberry pi has hardware floating point support so I would expect that all calculations are done at full 80 bit precision and then rounded for 32 bit or 64 bit results when saving to memory. The only possible gain would be slightly less memory bandwidth used when saving the values. | 2 | 2 | 0 | I'm working with python on raspberry pi. I'm using complementary filter to get better values from gyroscope, but it eats too much raspberry's power - it's about 70%. I thought I could increase performance by reducing floating point precision. Now, results have about 12 decimal places, it's way more than I need. Is there any way to set maximum precision? Just rounding the number doesn't meet my needs, since it's just another calculation. Thanks!
Edit: I have tried to use Decimal module and with precision set to 6 it was nearly 6 times slower than float! Is there any other way to work with fixed-point numbers than Decimal (it looks to be created for higher precision than for performance) | Lower the floating-point precision in python to increase performance | 0.291313 | 0 | 0 | 2,081 |
24,649,084 | 2014-07-09T08:46:00.000 | 3 | 1 | 0 | 0 | python,performance,raspberry-pi,floating-point-precision | 24,650,318 | 2 | false | 0 | 0 | It may be that you have the wrong end of the stick.
The data flow from a gyroscope is rather slow, so you should have ample time to filter it with any reasonable filter. Even a Kalman filter should be usable (though probably unnecessary). How often do you sample the gyroscope and accelerometer data? Reasonable maximum values are a few hundred Hertz, not more.
The complementary filter for accelerometer and gyroscope measurement is very lightweight, and it by itself should consume very little processing power. It can be implemented on a slow 8-bit processor, so Raspberry is way too fast for it.
Depending on what you do with the complementary filter, the filter itself needs a few floating point operations. If you calculate arcus tangents or equivalent functions, that'll require hundreds of FLOPs. If you do that at a rate of 1 kHz, you'll consume maybe a few hundred kFLOPS (FLoating-point OPerations per Second). The FP throughput of a RPi is approximately 100 MLFOPS, so there is a lot of margin.
Reducing the FP precision will thus not help significantly, the problem is elsewhere. Maybe if you show a bit more of your code, it could be determined where the problem is! | 2 | 2 | 0 | I'm working with python on raspberry pi. I'm using complementary filter to get better values from gyroscope, but it eats too much raspberry's power - it's about 70%. I thought I could increase performance by reducing floating point precision. Now, results have about 12 decimal places, it's way more than I need. Is there any way to set maximum precision? Just rounding the number doesn't meet my needs, since it's just another calculation. Thanks!
Edit: I have tried to use Decimal module and with precision set to 6 it was nearly 6 times slower than float! Is there any other way to work with fixed-point numbers than Decimal (it looks to be created for higher precision than for performance) | Lower the floating-point precision in python to increase performance | 0.291313 | 0 | 0 | 2,081 |
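For reference, the complementary filter the answer above calls lightweight boils down to a single weighted update per sample; a generic sketch (variable names are illustrative, not from the question):

```python
def complementary_update(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # integrate the gyro rate, then nudge the estimate toward the accelerometer angle
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```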
24,650,785 | 2014-07-09T10:07:00.000 | 5 | 1 | 0 | 0 | python,c++,dll,static-libraries | 24,650,884 | 3 | true | 0 | 1 | It depends on your desired deployment. If you use dynamic linking you will need to carefully manage the libraries (.so, .dll) on your path and ensure that the correct version is loaded. This can be helped if you include the version number in the filename, but then that has its own problems (security... displaying version numbers of your code is a bad idea).
Another benefit is that you can swap your library functionality without a re-compile as long as the interface does not change.
Statically linking is conceptually simpler and practically simpler. You only have to deploy one artefact (an .exe for example). I recommend you start with that until you need to move to the more complicated shared library setup.
Edit: I don't understand your "extra credit" question. What do you mean by "edit values"? If you mean can you modify variables that were declared in your C++ code, then yes you can as long as you use part of the public interface to do it.
BTW this advice is for the general decision. If you are linking from Python to C/C++ I think you need to use a shared library. Not sure as I haven't done it myself.
EDIT: To expand on "public interface". When you create a C++ library of whatever kind, you specify what functions are available to outside classes (look up how to do that). This is what I mean by public interface. Parts of your library are inaccessible but others (that you specify) are able to be called from client code (i.e. your python script). This allows you to modify the values that are stored in memory.
If you DO mean that you want to edit the actual C++ code from within your python I would suggest that you should re-design your application. You should be able to customise the run-time behaviour of your C++ library by providing the appropriate configuration.
If you give a solid example of what you mean by that we'll be able to give you better advice. | 1 | 3 | 0 | I have a Python code that needs to be able to execute a C++ code. I'm new to the idea of creating libraries but from what I have learned so far I need to know whether I need to use static or dynamic linking.
I have read up on the pros and cons of both but there is a lot of jargon thrown around that I do not understand yet and since I need to do this ASAP I was wondering if some light can be shed on this from somebody who can explain it simply to me.
So here's the situation. My C++ code generates some text files that have data. My Python code then uses those text files to plot the data. As a starter, I need to be able to run the C++ code directly from Python. Is DLL more suitable than SL? Or am I barking up the completely wrong tree?
Extra: is it possible to edit variables in my C++ code, compile it and execute it, all directly from Python? | Advise needed for Static vs Dynamic linking | 1.2 | 0 | 0 | 2,081 |
24,653,225 | 2014-07-09T12:05:00.000 | 0 | 1 | 0 | 0 | python-2.7,twitter | 24,654,111 | 1 | true | 0 | 0 | If you want the latest tweets from specific users, Twitter offers the Streaming API.
The Streaming API is the real-time sample of the Twitter Firehose. This API is for those developers with data intensive needs. If you're looking to build a data mining product or are interested in analytics research, the Streaming API is most suited for such things.
If you're trying to access old information, the REST API with its severe request limits is the only way to go. | 1 | 0 | 0 | How can i get twitter information (number of followers, following, etc.) about a set of twitter handles using the Twitter API?
I have already used the Python-Twitter library, but this only gives me information about my own Twitter account; I need the same for other Twitter users (I have a list).
Can you please guide me in the right direction? or refer to some good blogs/articles | Twitter API access using Python (newbie:Help Needed) | 1.2 | 0 | 1 | 58 |
24,655,713 | 2014-07-09T13:58:00.000 | 0 | 0 | 0 | 0 | python,gunicorn | 24,656,069 | 2 | false | 1 | 0 | Yes and no. Depends. Restarting workers eats resources. But the price is not too high. On the other hand if you have memory leaks then it will allow you to save memory. Thus effectively increasing performance. | 2 | 8 | 0 | Is there any effect on performance if I use gunicorn max_requests setting for production server? | Gunicorn max_requests for Production | 0 | 0 | 0 | 3,830 |
24,655,713 | 2014-07-09T13:58:00.000 | 0 | 0 | 0 | 0 | python,gunicorn | 26,990,224 | 2 | false | 1 | 0 | I've just found that this setting is the cause of a response time issue I was having. Baseline response time of one of my sites, measured locally, was about 20ms. This goes up to about 300ms on a worker restart, so yes, there is a performance impact.
As a result of this, I've upped my setting from a super-safe 10 to 100. | 2 | 8 | 0 | Is there any effect on performance if I use gunicorn max_requests setting for production server? | Gunicorn max_requests for Production | 0 | 0 | 0 | 3,830 |
24,655,877 | 2014-07-09T14:05:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,cordova,google-cloud-storage | 24,657,475 | 2 | false | 1 | 0 | Yes, that is a fine use for GAE and GCS. You do not need an <input type=file>, per se. You can just set up POST parameters in your call to your GAE url. Make sure you send a hidden key as well, and work from SSL-secured urls, to prevent spammers from posting to your app. | 1 | 1 | 0 | Goal: Take/attach pictures in a PhoneGap application and send a public URL for each picture to a Google Cloud SQL database.
Question 1: Is there a way to create a Google Cloud Storage object from a base64 encoded image (in Python), then upload that object to a bucket and return a public link?
I'm looking to use PhoneGap to send images to a Python Google App Engine application, then have that application send the images to a Google Cloud Storage bucket I have set up, then return a public link back to the PhoneGap app. These images can either be taken directly from the app, or attached from existing photo's on the user's device.
I use PhoneGap's FileTransfer plugin to upload the images to GAE, which are sent as base64 encoded images (this isn't something I can control).
Based on what I've found in Google Docs, I can upload the images to Blobstore; however, it requires <input type='file'> elements in a form. I don't have 'file' input elements; I just take the image URI returned from PhoneGap's camera object and display a thumbnail of the picture that was taken (or attached).
Question 2: Is it possible to have an <input type='file'> element and control it's value? As in, is it possible to set it's value based on whether the user chooses a file, or takes a picture?
Thanks in advance! | Using PhoneGap + Google App Engine to Upload and Save Images | 0 | 1 | 0 | 620 |
24,661,533 | 2014-07-09T18:46:00.000 | 3 | 0 | 0 | 1 | python,apache-kafka,kafka-consumer-api,kafka-python | 27,436,961 | 5 | false | 0 | 0 | kafka-python stores offsets with the kafka server, not on a separate zookeeper connection. Unfortunately, the kafka server apis to support commit/fetching offsets were not fully functional until apache kafka 0.8.1.1. If you upgrade your kafka server, your setup should work. I'd also suggest upgrading kafka-python to 0.9.4.
[kafka-python maintainer] | 1 | 8 | 0 | I am using Kafka 0.8.1 and Kafka python-0.9.0. In my setup, I have 2 kafka brokers setup. When I run my kafka consumer, I can see it retrieving messages from the queue and keeping track of offsets for both the brokers. Everything works great!
My issue is that when I restart the consumer, it starts consuming messages from the beginning. What I was expecting was that upon restart, the consumer would start consuming messages from where it left off before it died.
I did try keeping track of the message offsets in Redis and then calling consumer.seek before reading a message from the queue to ensure that I was only getting the messages that I hadn't seen before. While this worked, before deploying this solution, I wanted to check with y'all ... perhaps there is something I am misunderstanding about Kafka or the python-Kafka client. Seems like the consumer being able to restart reading from where it left off is pretty basic functionality.
Thanks! | Kafka Consumer: How to start consuming from the last message in Python | 0.119427 | 0 | 0 | 17,187 |
24,663,772 | 2014-07-09T21:03:00.000 | 0 | 0 | 1 | 1 | python,input,command-line,command,command-prompt | 24,663,846 | 2 | false | 0 | 0 | if you pasted the code here that would help but
the answer you are most likely looking for is command-line arguments.
If I were to guess, in the command line the input would look something like:
python name_of_script.py "c:\thefilepath\totheinputfile" {enter}
{enter} being the actual key pressed on the keyboard and not typed in as the word
Hopefully this starts you on the right answer :) | 2 | 0 | 0 | I'm new to python and I'm attempting to run a script provided to me that requires to input the name of a text file to run. I changed my pathing to include the Python directory and my input in the command line - "python name_of_script.py" - is seemingly working. However, I'm getting the error: "the following arguments are required: --input". This makes sense, as I need this other text file for the program to run, but I don't know how to input it on the command line, as I'm never prompted to enter any input. I tried just adding it to the end of my command prompt line, but to no avail.
Does anybody know how this could be achieved?
Thanks tons | Python script requires input in command line | 0 | 0 | 0 | 352 |
24,663,772 | 2014-07-09T21:03:00.000 | 0 | 0 | 1 | 1 | python,input,command-line,command,command-prompt | 24,665,527 | 2 | false | 0 | 0 | Without reading your code, I guess if
I tried just adding it to the end of my command prompt line, but to no avail.
it means that you need to make your code aware of the command line argument. Unless you do some fancy command line processing, for which you need to import optparse or argparse, try:
import sys
# do something with sys.argv[-1] (ie, the last argument) | 2 | 0 | 0 | I'm new to python and I'm attempting to run a script provided to me that requires to input the name of a text file to run. I changed my pathing to include the Python directory and my input in the command line - "python name_of_script.py" - is seemingly working. However, I'm getting the error: "the following arguments are required: --input". This makes sense, as I need this other text file for the program to run, but I don't know how to input it on the command line, as I'm never prompted to enter any input. I tried just adding it to the end of my command prompt line, but to no avail.
Does anybody know how this could be achieved?
Thanks tons | Python script requires input in command line | 0 | 0 | 0 | 352 |
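The error quoted in the question ("the following arguments are required: --input") is the message argparse prints for a missing required option, so the script most likely expects something like python name_of_script.py --input "c:\thefilepath\totheinputfile". A minimal sketch of the script side (illustrative, not the asker's actual code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--input", required=True, help="path to the input text file")
args = parser.parse_args()

with open(args.input) as f:
    print(f.read())
```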
24,663,825 | 2014-07-09T21:07:00.000 | 1 | 0 | 0 | 0 | python,opencv,image-processing | 24,663,901 | 1 | false | 0 | 0 | filter out greyscale or filter in the allowed colors
Idk if the range of colors or range of greyscale is larger but maybe whitelisting instead of blacklisting is helpful here | 1 | 1 | 1 | I am working on a project which involves using a thermal video camera to detect objects of a certain temperature. The output I am receiving from the camera is an image where the pixels of interest (within the specified temperature range) are colored yellow-orange depending on intensity, and all other pixels are grayscale. I have tried using cv2.inRange() to filter for the colored pixels, but results have been spotty, as the pixel colors I have been provided with in the color lookup tables do not match those actually output by the camera.
I figured then that it would be easiest for me to just filter out all grayscale pixels, as then I will be left with only colored pixels (pixels of interest). I have tried looping through each pixel of each frame of the video and checking to see if each channel has the same intensity, but this takes far too long to do. Is there a better way to filter out all grayscale pixels than this? | How to filter out all grayscale pixels in an image? | 0.197375 | 0 | 0 | 1,335 |
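A vectorized sketch of the "drop pixels whose three channels match" idea, assuming the frame is an HxWx3 numpy array such as cv2.imread or VideoCapture returns (much faster than a per-pixel Python loop):

```python
import numpy as np

def colored_pixel_mask(frame):
    b = frame[..., 0].astype(np.int16)
    g = frame[..., 1].astype(np.int16)
    r = frame[..., 2].astype(np.int16)
    grayscale = (b == g) & (g == r)   # True where all three channels are equal
    return ~grayscale                 # True where the pixel is colored
```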
24,664,072 | 2014-07-09T21:26:00.000 | -3 | 0 | 1 | 0 | python,conda | 57,263,796 | 2 | false | 0 | 0 | I tried this and had a lot of issues with plot.ly being updated to v. 4. My code was written on 3.10 and didn't feel like changing it. I had to copy the whole environment (every single file from \envs) from one machine to next, it works. | 1 | 31 | 0 | I have a python 2.7 conda environment and would like to create an equivalent environment with python 3.4. I am aware of the --clone option when creating environments, but it won't accept additional arguments, like python=3.4. Is there a way to do this automatically? I thought about trying to use the output from conda list --export, but that also encodes the python release. | How do I clone a conda environment from one python release to another? | -0.291313 | 0 | 0 | 24,068 |
24,664,413 | 2014-07-09T21:55:00.000 | 0 | 0 | 0 | 0 | python,mysql,sql | 24,666,569 | 1 | true | 0 | 0 | For anyone that cares, the ON DUPLICATE KEY UPDATE SQL command was what I ended up using. | 1 | 0 | 0 | So I have a string in Python that contains like 500 SQL INSERT queries, separated by ;. This is purely for performance reasons, otherwise I would execute individual queries and I wouldn't have this problem.
When I run my SQL query, Python throws: IntegrityError: (1062, "Duplicate entry 'http://domain.com' for key 'PRIMARY'")
Let's say on the first query of 500, the error is thrown. How can I make sure those other 499 queries are executed on my database?
If I used a try and except, sure the exception wouldn't be raised, but the rest of my statement wouldn't be executed. Or would it, since Python sends it all in 1 big combined string to MySQL?
Any ideas? | Python Ignore MySQL IntegrityError when trying to add duplicate entry with a Primary key | 1.2 | 1 | 0 | 828 |
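A sketch of the clause the accepted answer above settled on (the table, columns, and connection details are made up for illustration; the clause itself is standard MySQL):

```python
import pymysql  # assumption: PyMySQL is the driver in use

conn = pymysql.connect(host="localhost", user="user", password="pass", database="mydb")
with conn.cursor() as cursor:
    cursor.execute(
        "INSERT INTO pages (url, hits) VALUES (%s, 1) "
        "ON DUPLICATE KEY UPDATE hits = hits + 1",
        ("http://domain.com",),
    )
conn.commit()
```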
24,665,515 | 2014-07-09T23:58:00.000 | 2 | 1 | 0 | 1 | python,sonarqube,filenames,executable | 24,698,064 | 1 | false | 0 | 0 | It is not possible to do so.
An empty string as the value of the property "sonar.python.file.suffixes" is ignored. | 1 | 1 | 0 | When I specify a Python executable script file that does not end in a .py suffix, Sonar runs successfully but the report has no content. I have tried specifying -Dsonar.python.file.suffixes="" but that makes no difference.
sonar-runner -Dsonar.sources=/users/av/bin -Dsonar.inclusions=gsave -Dsonar.issuesReport.html.location=/ws/av-rcd/SA_Reports/PY-SA_report-2014-7-2-15-13-44.html -Dsonar.language=py -Dsonar.python.file.suffixes=""
How can I make sonar analyze a python executable script that does not have a .py suffix? | Sonar python files without .py suffix | 0.379949 | 0 | 0 | 561 |
24,665,738 | 2014-07-10T00:28:00.000 | 0 | 0 | 0 | 0 | python,text,widget,undo | 24,665,859 | 2 | true | 0 | 0 | Monitor every keystroke in the text box and save the content of the text box after each change in a stack structure. When you encounter a word delimiting character replace the individual character changes for that word that are already on the stack with a new entry that is the whole word just typed. When the user presses undo you just pop the last value off the stack and put that in the text box.
This will allow you to undo individual character changes one at a time when the user is in the middle of typing a word and will undo the last whole word if the user completed typing it. | 1 | 0 | 0 | If the user clicks a menu button it would trigger a function that would undo what the user last typed. How would I go about doing this? | How to undo the last thing a user typed in a text field when he/she presses a button? | 1.2 | 0 | 0 | 169 |
24,666,602 | 2014-07-10T02:33:00.000 | 3 | 0 | 1 | 0 | python,performance,sorting,heap,complexity-theory | 52,453,710 | 3 | false | 0 | 0 | The nlargest() and nsmallest() functions of heapq are most appropriate if you are trying to find a relatively small number of items. If you want to find simply single smallest or largest number , min() and max() are most suitable, because it's faster and uses sorted and then slicing. If you are looking for the N smallest or largest items and N is small compared to the overall size of the collection, these functions provide superior performance. Although it's not necessary to use heapq in your code, it's just an interesting topic and a worthwhile subject of study. | 1 | 15 | 0 | I'm relatively new to python (using v3.x syntax) and would appreciate notes regarding complexity and performance of heapq vs. sorted.
I've already implemented a heapq based solution for a greedy 'find the best job schedule' algorithm. But then I've learned about the possibility of using 'sorted' together with operator.itemgetter() and reverse=True.
Sadly, I could not find any explanation on expected complexity and/or performance of 'sorted' vs. heapq. | Python heapq vs. sorted complexity and performance | 0.197375 | 0 | 0 | 11,537 |
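A quick sketch of the comparison discussed in the answer above:

```python
import heapq
import random

data = [random.random() for _ in range(10**6)]

top_heap = heapq.nlargest(10, data)           # keeps only a 10-element heap: O(n log 10)
top_sort = sorted(data, reverse=True)[:10]    # sorts the whole list first: O(n log n)
assert top_heap == top_sort
```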
24,673,386 | 2014-07-10T09:49:00.000 | 0 | 1 | 0 | 1 | python,ssh,fabric | 24,673,514 | 1 | false | 0 | 0 | You can call your functions by importing your fabfile.py. In the end, a fabfile is just another python script you can import. I saw a case where a django project had an api call to a function from the fabfile. Just import and call, simple as python :) | 1 | 0 | 0 | I do not want to use the fab command and don't use the fabfile & command line arguments. I want to make automated remote ssh using the fab api by writing a python script. Can I automate this by writing a python script? | without using fab commandline argument can I use fabric api automated | 0 | 0 | 0 | 74 |
24,673,772 | 2014-07-10T10:06:00.000 | 4 | 0 | 0 | 1 | python,google-app-engine | 35,254,560 | 3 | false | 1 | 0 | You can store your keys in datastore. Later when you need them in the code, you can fetch them from datastore and cache them by memcache. | 1 | 2 | 0 | I deploy my project to GAE over Github. There is some foreign API-key which I don't want to save in repository and make them public. Is it possible to set an environment variable for a project in GAE control panel so I can catch it in my application? | Set environment variables in GAE control panel | 0.26052 | 0 | 0 | 2,796 |
24,681,509 | 2014-07-10T16:14:00.000 | 0 | 0 | 1 | 0 | python,ipython-notebook | 30,565,768 | 1 | true | 0 | 0 | Maybe some combination of cPickle and bash magic for scp? | 1 | 3 | 1 | I'm using an ipython notebook that is running on a remote server. I want to save data from the notebook (e.g. a pandas dataframe) locally.
Currently I'm saving the data as a .csv file on the remote server and then move it over to my local machine via scp. Is there a more elegant way directly from the notebook? | Locally save data from remote iPython notebook | 1.2 | 0 | 0 | 671 |
24,684,316 | 2014-07-10T19:00:00.000 | 1 | 1 | 0 | 0 | python,pdf,export,latex,pdflatex | 24,684,691 | 3 | false | 1 | 0 | generate a Latex file.tex with a Python script
f = open("file.tex", 'w')
# double the backslashes (or use raw strings) so LaTeX commands are not mangled by Python string escapes
f.write('\\documentclass[12pt]{article}\n')
f.write('\\usepackage{multicol}\n')
f.write('\n\\begin{document}\n\n')
...
f.write('\\end{document}\n')
f.close()
run pdflatex on the LaTex file from the Python script as a subprocess
import subprocess
subprocess.call(['pdflatex', 'file.tex'])
As an alternative to 1. you can generate a LaTex template and just substitute the variable stuff using Python regular expressions and string substitutions. | 1 | 0 | 0 | I have a GUI program in Python which calculates graphs of certain functions. These functions are mathematical like say, cos(theta) etc. At present I save the graphs of these functions and compile them to PDF in Latex and write down the equation manually in Latex.
But now I wish to simplify this process by creating a template in Latex that arranges the Function Name, Graph, Equation and Table, and compiles them to a single PDF with just a click.
Can this be done? And how do I do it?
Thank you. | Python Export Program to PDF using Latex format | 0.066568 | 0 | 0 | 2,206 |
24,684,821 | 2014-07-10T19:32:00.000 | 0 | 0 | 0 | 0 | python,django,rest,permissions,django-rest-framework | 27,932,256 | 1 | false | 1 | 0 | One of the arguments to has_permission is view, which has an attribute .action, which is one of the five "LCRUD" actions ("list"/"create"/"retrieve"/"update"/"destroy"). So I think you could use that to check, in has_permission, whether the action being performed is a list or a retrieve, and deny or allow it accordingly. | 1 | 1 | 0 | I'm using Django Rest Framework and I'm having some trouble with permissions. I know how to use has_permission and has_object_permission, but I have a number of cases where someone needs to be able to access retrieve but not list--e.g., a user has access to their own profile, but not to the full list of them. The problem is, has_permission is always called before has_object_permission, so has_object_permission can only be more restrictive, not less.
So far, the only way I've been able to do this is to have more permissive permissions and then override list() directly in the ViewSet and include the permission check in the logic, but I'd like to be able to store all of this logic in a Permissions class rather than in each individual viewset.
Is there any way to do this? Right now I feel like I'm going to have to write a ViewSet metaclass to automatically apply permissions as I want to each viewset method, which isn't really something I want to do. | Django Rest Framework--deny access to list but not to retrieve | 0 | 0 | 0 | 302 |
24,686,448 | 2014-07-10T21:16:00.000 | 1 | 0 | 0 | 1 | python,c,caching | 24,686,774 | 1 | false | 0 | 0 | Since no one has really proposed anything I'll drop my idea here. If you need an example let me know and I'll include one.
The easiest thing to do would be to serialize a dictionary that contains the system health and the last time.time() it was checked. At the beginning of your program unpickle the dictionary and check the time; if it's less than your 60 second time interval, quit. Otherwise check the health like normal and cache it (with the time). | 1 | 0 | 0 | I have a "healthchecker" program that calls a "prober" every 10 seconds to check if a service is running. If the prober exits with return code 0, the healthchecker considers the tested service fine. Otherwise, it considers it's not working.
I can't change the healthchecker (I can't make it check with a bigger interval, or using a better communication protocol than spawning a process and checking its exit code).
That said, I don't want to really probe the service every 10 seconds because it's overkill. I just wanna probe it every minute.
My solution to that is to make the prober keep a "cache" of the last answer valid for 1 minute, and then just really probe when this cache expires.
That seems fine, but I'm having trouble thinking on a decent approach to do that, considering the program must exit (to return an exit code). My best bet so far would be to transform my prober in a daemon (that will keep the cache in memory) and create a client to just query it and exit with its response, but it seems too much work (and dealing with threads, and so on).
Another approach would be to use SQLite/memcached/redis.
Any other ideas? | exiting a program with a cached exit code | 0.197375 | 0 | 0 | 56 |
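A compact sketch of the serialize-and-check idea from the answer above (the file location, TTL, and the probe() stub are all illustrative):

```python
import os
import pickle
import time

CACHE_PATH = "/tmp/probe_cache.pkl"

def probe():
    return 0  # placeholder for the real, expensive service check

def cached_probe(ttl=60):
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, "rb") as f:
            saved = pickle.load(f)
        if time.time() - saved["when"] < ttl:
            return saved["code"]          # still fresh: reuse the last result
    code = probe()
    with open(CACHE_PATH, "wb") as f:
        pickle.dump({"when": time.time(), "code": code}, f)
    return code

# raise SystemExit(cached_probe())
```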
24,687,248 | 2014-07-10T22:19:00.000 | 0 | 0 | 0 | 0 | python,memory,local-storage,large-files | 24,687,460 | 1 | false | 0 | 0 | Copying a file is sequentially reading it and saving in another place.
The performance of the application might vary depending on the data access patterns, computation to I/O time, network latency and network bandwidth.
If you execute your script once and read through it sequentially, it's the same as copying the file, except you perform computations on it instead of saving. Even if you process small chunks of data it probably gets buffered. In fact, in the case of a single execution, if you copy first you actually read the same data twice: once for the copy, and once from the copy for the computation.
If you execute your script multiple times, then you need to check what your data throughput is. For example, if you have a gigabit ethernet then 1GBit/s is 125MByte/s; if you process data slower than that, then you are not limited by the bandwidth.
Network latency comes into play when you send multiple requests for small chunks of data. Upon a read request you send a network request and get data back in a finite time; this is the latency. If you make a request for one big chunk of data you "pay" the latency cost once; if you ask 1000 times for 1/1000 of the big chunk you will need to "pay" it 1000 times. However, this is probably abstracted from you by the network file system, and in the case of a sequential read it will get buffered. It would manifest itself in random jumping over the file and reading small chunks of it.
You can check what you are limited by checking how much bytes you process per second and compare it to limits of the hardware. If it's close to HDD speed (which in your case i bet is not) you are bound by I/O - HDD. If it's lower, close to network bandwidth, you are limited by I/O - network. If it's even lower, it's either you are bound by processing speed of data or I/O network latency. However, if you were bound by I/O you should see difference between the two approaches, so if you are seeing the same results it's computation. | 1 | 0 | 1 | Sorry if the topic was already approached, I didn't find it.
I am trying to read with Python a bench of large csv files (>300 MB) that are not located in a local drive.
I am not an expert in programming but I know that if you copy it to a local drive first it should take less time than reading it (or am I wrong?).
The thing is that I tested both methods and the computation times are similar.
Am I missing something? Can someone explain / give me a good method to read those files as fast as possible?
For copying to local drive I am using: shutil.copy2
For reading the file: for each line in MyFile
Thanks a lot for your help,
Christophe | Reading Large File from non local disk in Python | 0 | 0 | 0 | 214 |
24,687,736 | 2014-07-10T23:06:00.000 | 1 | 0 | 0 | 0 | python,ajax,angularjs,flask | 56,367,083 | 3 | false | 1 | 0 | There isn't any way to be certain whether a request is made by ajax.
What I found worked for me was to simply include a GET parameter for XHR requests and simply omit the parameter on non-XHR requests.
For example:
XHR Request: example.com/search?q=Boots&api=1
Other Requests: example.com/search?q=Boots | 1 | 15 | 0 | I'd like to detect if the browser made a request via AJAX (AngularJS) so that I can return a JSON array, or if I have to render the template. How can I do this? | How can I identify requests made via AJAX in Python's Flask? | 0.066568 | 0 | 1 | 5,005 |
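A sketch of that query-parameter convention in Flask (the route and parameter names mirror the example URLs above; the search logic is a stub):

```python
from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

def do_search(q):
    return []  # stub for the real search

@app.route("/search")
def search():
    results = do_search(request.args.get("q", ""))
    if request.args.get("api") == "1":        # XHR callers append &api=1
        return jsonify(results=results)
    return render_template("search.html", results=results)
```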
24,688,388 | 2014-07-11T00:29:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,mongodb,database | 24,690,665 | 2 | false | 1 | 0 | In a Django project you've got 4 alternatives for this kind of problem, in no particular order:
using PostgreSQL, you can use the hstore field type, that's basically a pickled python dictionary. It's not very helpful in terms of querying it, but does its job saving your data.
using Django-NoRel with mongodb you get the ListField field type that does the same thing and can be queried just like anything in mongo. (option 2)
using Django-eav to create an entity attribute value store with your data. Elegant solution but painfully slow queries. (option 1)
storing your data as a json string in a long enough TextField and creating your own functions to serializing and deserializing the data, without thinking on being able to make a query over it.
In my own experience, if you by any chance need to query over the data, your option two is by far the best choice. EAV in Django, without composite keys, is painful. | 1 | 0 | 0 | I apologize if this has been asked already, or if this is answered somewhere else.
Anyways, I'm working on a project that, in short, stores image metadata and then allows the user to search said metadata (which resembles a long list of key-value pairs). This wouldn't be too big of an issue if the metadata was standardized. However, the problem is that for any given image in the database, there is any number of key/values in its metadata. Also there is no standard list of what keys there are.
Basically, I need to find a way to store a dictionary for each model, but with arbitrary key/value pairs. And I need to be able to query them. And the organization I'm working for is planning on uploading thousands of images to this program, so it has to query reasonably fast.
I have one model in my database, an image model, with a filefield.
So, I'm in between two options, and I could really use some help from people with more experience on choosing the best one (or any other solutions that would work better)
Using a traditional relational database like MySql, and creating a separate model with a foreignkey to the image model, a key field, and a value field. Then, when I need to query the data, I'll ask for every instance of this separate table that relates to an image, and then query those rows for the key/value combination I need.
Using something like MongoDB, with django-toolbox and its DictField to store the metadata. Then, when I need to query, I'll access the dict and search it for the key/value combination I need.
While I feel like 1 would be much better in terms of query time, each image may have up to 40 key/values of metadata, and that makes me worry about that separate "dictionary" table growing far too large if there's thousands of images.
Any advice would be much appreciated. Thanks! | Django: storing/querying a dictionary-like data set? | 0 | 1 | 0 | 992 |
24,690,101 | 2014-07-11T04:26:00.000 | 0 | 0 | 0 | 0 | c#,python,sql,tsql | 24,690,183 | 1 | true | 0 | 0 | You can run a trace in SQL Profiler to see the queries being executed on the server. | 1 | 0 | 0 | I faced with problem:
There is a big old database on Microsoft SQL Server (with triggers, functions, etc.). I am writing a C# app on top of this db. Most of the work is "experiments" like this:
Write a part of functionality and see if it works in old Delphi app (i.e. inserted data in C# loaded correctly in Delphi).
So I need a tool that can determine which fields of each table are used or not (used in my queries). I am thinking of writing a Python script with an SQL syntax analyser, or just using regular expressions.
What solution would you recommend? | Analyse sql queries text | 1.2 | 1 | 0 | 70 |
24,695,174 | 2014-07-11T10:03:00.000 | 48 | 0 | 0 | 0 | python,r,scipy | 42,065,440 | 2 | false | 0 | 0 | The equivalent of the R pnorm() function is: scipy.stats.norm.cdf() with python
The equivalent of the R qnorm() function is: scipy.stats.norm.ppf() with python | 1 | 31 | 1 | I need the quantile of some distributions in python. In r it is possible to compute these values using the qf, qnorm and qchi2 functions.
Is there any python equivalent of these R functions?
I have been looking at scipy but I did not find anything. | python equivalent of qnorm, qf and qchi2 of R | 1 | 0 | 0 | 32,670 |
24,697,420 | 2014-07-11T12:11:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,heroku | 24,698,874 | 3 | false | 1 | 0 | I presume that you have created a migration to add mainsite_message.spam to the schema. Have you made sure that this migration is in your git repository?
If you type git status you should see untracked files. If the migration is untracked you need to git add path_to_migration and then push it to Heroku before you can run it there. | 2 | 1 | 0 | So, locally I've changed my models a few times and used South to get everything working. I have a postgres database to power my live site, and one model keeps triggering a column mainsite_message.spam does not exist error. But when I run heroku run python manage.py migrate mainsite from the terminal, I get Nothing to migrate. All my migrations have been pushed. | Add a column to heroku postgres database | 0 | 1 | 0 | 1,178 |
24,697,420 | 2014-07-11T12:11:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,heroku | 24,697,852 | 3 | false | 1 | 0 | Did you run schemamigration before? If yes, go to your database and take a look at your table "south_migrationhistory" there you can see what happened.
If you already did the steps above you should try to open your migration file and take a look as well, there you can find if the creation column is specified or not! | 2 | 1 | 0 | So, locally I've changed my models a few times and used South to get everything working. I have a postgres database to power my live site, and one model keeps triggering a column mainsite_message.spam does not exist error. But when I run heroku run python manage.py migrate mainsite from the terminal, I get Nothing to migrate. All my migrations have been pushed. | Add a column to heroku postgres database | 0 | 1 | 0 | 1,178 |
24,700,966 | 2014-07-11T15:10:00.000 | 0 | 1 | 0 | 1 | python,bash,arduino | 24,715,910 | 3 | false | 0 | 0 | Just save the output from the arduino to a temporary variable and compare it to another variable that holds the last value written to the file. If it is different, change the last value written to the new temperature and write it to the file. | 1 | 0 | 0 | I have the temperature coming from my arduino through the serial port on my mac. I need to write the data to a file, but I don't want my script to write the data from /dev/tty.usbserial-A5025XZE (serial port) if the data is the same or if it is nothing. The temperature is in the format "12.32" and is sent every 5s. | Check to see if data is the same before writing over it | 0 | 0 | 0 | 51 |
24,702,818 | 2014-07-11T16:53:00.000 | 1 | 0 | 0 | 0 | python,boto | 41,159,662 | 3 | false | 1 | 0 | The simplest would be to use traffic shaping tools under linux, like tc. These tools let you control bandwidth and even simulate network packet loss or even long distance communication issues. Easy to write a python script to control the port behavior via a shell. | 1 | 0 | 0 | I'm using boto to upload and download files to S3 & Glacier.
How can I ratelimit/throttle the uploading and downloading speeds? | How to throttle S3 & Glacier upload/download speeds with boto? | 0.066568 | 0 | 1 | 1,618 |
24,706,850 | 2014-07-11T21:29:00.000 | 0 | 0 | 0 | 0 | python,flask,soundcloud | 24,741,514 | 1 | true | 0 | 0 | This might be impossible to do portably. For example, if Firefox is already running on Linux, the second invocation of firefox http://url will find out that an instance using the same profile, will send a message to the other process to open that URL in a tab, then exits immediately.
However, you could accomplish the same thing by sending the authentication tokens to a server, and simultaneously polling the server for credentials in the python script. | 1 | 0 | 0 | I want to implement a Python script to act as my OAuth2 endpoint, since I'm trying to write a Soundcloud app. Part of the authentication process involves visiting a Soundcloud page where you can sign in and grant access to the given Soundcloud application.
I'd like to be able to open that webpage in a browser using Python 3, which you can do with the webbrowser object. You can see on the documentation that launching a text-based browser blocks execution; I want to block execution whilst the webpage is open in a GUI-based browser.
Does anyone know whether this is possible? | How can I make Python's 'webbrowser' block execution? | 1.2 | 0 | 1 | 225 |
24,707,471 | 2014-07-11T22:20:00.000 | 0 | 0 | 0 | 0 | python,sqlite | 24,707,793 | 3 | false | 0 | 0 | If you explicitly need to commit multiple times throughout the code, and you are worried about the performance times of transactions, you could always build the database in memory db=sqlite3.connect(':memory:') and then dump it's contents to disk when all the time-critical aspects of the program have been completed. I.e the end of the script. | 1 | 1 | 0 | I'm not sure how to best phrase this question:
I would like to UPDATE, ADD, or DELETE information in an SQLite3 Table, but I don't want this data to be written to disk yet. I would still like to be able to
SELECT the data, and get the updated information, but then I want to choose to either rollback or commit.
Is this possible? Will the SELECT get the data before the UPDATE or after? Or must I rollback or commit before the next statement? | Can I Stage data to memory SELECT, then choose to rollback, or commit in sqlite3? python 2.7 | 0 | 1 | 0 | 91 |
24,707,635 | 2014-07-11T22:36:00.000 | 4 | 0 | 1 | 0 | python,python-2.7,interpreter,running-other-programs | 24,707,656 | 1 | true | 0 | 0 | The processes know nothing about each other.
It wouldn't matter if they were identical or not. Each process is allocated resources by the OS, so each process has its own resources, and they will not overlap. In fact, it is very common to use multiple similar Python processes to do multiprocessing when you have processing that can be done in parallel and logically allocated per process.
Unless they are both using a shared resource like a file, you have nothing to worry about. | 1 | 4 | 0 | I am running two python codes edited by two different text editors (Eclipse and Spyder), and from task manager I saw two python.exe processes. Will these two processes interfere with each other? I am worried because I used almost the same set of variable names across these two scripts and both codes are working on the same data input with very similar data structure. | Running two python processes | 1.2 | 0 | 0 | 267 |
24,707,836 | 2014-07-11T22:58:00.000 | 3 | 0 | 0 | 0 | python,scikit-learn | 24,708,214 | 1 | true | 0 | 0 | This a known limitation of the current implementation of scikit-learn's SGD classifier, there is currently no automated convergence check on that model. You can set verbose=1 to get some feedback when running though. | 1 | 1 | 1 | Is there any automated way to evaluate convergence of the SGDClassifier?
I'm trying to run an elastic net logit in python and am using scikit learn's SGDClassifier with log loss and elastic net penalty. When I fit the model in python, I get all zeros for my coefficients. When I run glmnet in R, I get significant non-zero coefficients.
After some twiddling I found that the scikit learn coefficients approach the R coefficients after around 1000 iterations.
Is there any method that I'm missing in scikit learn to iterate until the change in coefficients is relatively small (or a max amount of iterations has been performed), or do I need to do this myself via cross-validation. | Evaluating convergence of SGD classifier in scikit learn | 1.2 | 0 | 0 | 822 |
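A sketch of doing the convergence check by hand with partial_fit, since the answer notes there is no built-in one (the data here is synthetic stand-in data and the tolerance is arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # stand-in data
clf = SGDClassifier(loss="log_loss", penalty="elasticnet")  # older scikit-learn versions spell this loss "log"
prev_coef = None
for epoch in range(1000):
    clf.partial_fit(X, y, classes=np.unique(y))
    if prev_coef is not None and np.max(np.abs(clf.coef_ - prev_coef)) < 1e-4:
        break   # coefficients have stopped moving
    prev_coef = clf.coef_.copy()
```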
24,708,697 | 2014-07-12T01:20:00.000 | 0 | 0 | 0 | 0 | python-3.x,pyqt4 | 24,714,113 | 1 | true | 0 | 1 | C has to be a child of A, then you can close B. B can call a function in parent A to open C, and then B can close itself. | 1 | 0 | 0 | Alright so this is a specific question about data transfer using different windows in pyqt4. Basically I have 3 windows, each with its own class definition, that I have designed for a project but I'm confused about how to properly arrange these windows.
Ideal Functionality:
Let's say I have 3 windows; A, B, and C. Window A gives me two lists. When I click a button in Window A, window B pops up and gets me a number. After it gives me this number I want window C to open and window B to close but I want window C to have the two lists and the number.
Problems I have:
Currently I make a function in the class for window A to open window B and once I get the number from window B, then window C is created with the information, but it closes since I close window B. Previously I tried keeping window A opening window B and C but it opens the windows at the same time but I need window B to open for its data to then create window C. | Controlling Windows/Information PYQT4 | 1.2 | 0 | 0 | 33 |
24,709,551 | 2014-07-12T04:08:00.000 | 1 | 0 | 0 | 0 | python,django,django-templates | 24,710,778 | 3 | true | 1 | 0 | Unless you want to write your own template loader function that looks to your settings for the default and monkey-patch it in, then "no, there isn't a way to do that" is accurate.
At least it's only one line per file.
Plus, being a long-standing Django convention, other devs will immediately be able to see which base template is used (in more complex projects you may find you use different base templates depending on, say, the type of user).
Finally, as the Zen of Python goes, explicit is better than implicit. :-) | 2 | 1 | 0 | I'm 2 hours into Django and am wondering if there is a way to specify a default base template that will automatically be loaded for all templates so that you don't have to repeat yourself and specify {% extends "foo.html" %} at the top of every page template.
For example, at the project or app level some metadata(settings) could specify a default template that was used unless either the call to render or the template itself specified that you shouldn't include the template.
i. e. render(... other args..., layout=null) or {% noextends "foo.html" %}
This effectively turns the opt-in style into an opt-out, which in my experience is preferred. Given that it's specified in something like settings it's not "magic" and breaking with the spirit of Django.
I've looked over the documentation and this doesn't seem to be available by default. I suppose I could do something like override the built-in render routine to try and concatenate the {% extends "foo.html" %} call on to every template, but I am hoping the internet can just tell me there is already a solution that I'm missing. | Is there a way to specify a default base-template for all templates in django? | 1.2 | 0 | 0 | 273 |
24,709,551 | 2014-07-12T04:08:00.000 | 1 | 0 | 0 | 0 | python,django,django-templates | 24,711,278 | 3 | false | 1 | 0 | Don't forget, it's a guiding principle in Python - and therefore also in Django - that explicit is better than implicit. So whereas Rails, for example, emphasises convention over configuration and has all sorts of things happen automatically, this is very much alien to the Python way of doing things.
So no, there is no way to get Django to make templates inherit from a base automatically. And while you probably could get that working by creating a custom template loader, it's not a very good idea as it would confuse everyone else looking at your project. | 2 | 1 | 0 | I'm 2 hours into Django and am wondering if there is a way to specify a default base template that will automatically be loaded for all templates so that you don't have to repeat yourself and specify {% extends "foo.html" %} at the top of every page template.
For example, at the project or app level some metadata(settings) could specify a default template that was used unless either the call to render or the template itself specified that you shouldn't include the template.
i. e. render(... other args..., layout=null) or {% noextends "foo.html" %}
This effectively turns the opt-in style into an opt-out, which in my experience is preferred. Given that it's specified in something like settings it's not "magic" and breaking with the spirit of Django.
I've looked over the documentation and this doesn't seem to be available by default. I suppose I could do something like override the built-in render routine to try and concatenate the {% extends "foo.html" %} call on to every template, but I am hoping the internet can just tell me there is already a solution that I'm missing. | Is there a way to specify a default base-template for all templates in django? | 0.066568 | 0 | 0 | 273 |
24,710,900 | 2014-07-12T07:53:00.000 | 2 | 0 | 0 | 1 | python,linux,process | 24,713,362 | 2 | false | 0 | 0 | Just fork and before exec of the shell you call ptrace() with PTRACE_TRACEME so the exec doesn't start immediately, giving the parent all the time it needs to prepare before it tells the child to continue (PTRACE_CONT, PTRACE_SYSCALL, or PTRACE_SINGLESTEP).
When using subprocess.Popen() you may use the preexec_fn argument mentioned by @RuiSilva to do the PTRACE_TRACEME call. | 2 | 0 | 0 | My goal is to be able to start shell script in separate process and inspect it by linux ptrace syscall.
The problem is that I need to get the process PID before it even starts. Stuff like subprocess.Popen(['ls', '-l']) or python-sh runs the command immediately, so by the time I try to inspect this process by its PID it has likely finished.
On the other hand I can't use os.fork + exec because the bash command I start replaces the Python code. | python subprocess popen starts immediately | 0.197375 | 0 | 0 | 524
24,710,900 | 2014-07-12T07:53:00.000 | 3 | 0 | 0 | 1 | python,linux,process | 24,711,374 | 2 | true | 0 | 0 | If you're using Unix, I think that you can use the preexec_fn argument in the Popen constructor.
According to the documentation of subprocess:
If preexec_fn is set to a callable object, this object will be called in the child process just before the child is executed. (Unix only)
As it runs in the child process, you can use os.getpid() to get the child pid. | 2 | 0 | 0 | My goal is to be able to start shell script in separate process and inspect it by linux ptrace syscall.
The problem is that I need to get the process PID before it even starts. Stuff like subprocess.Popen(['ls', '-l']) or python-sh runs the command immediately, so by the time I try to inspect this process by its PID it has likely finished.
On the other hand I can't use os.fork + exec because the bash command I start replaces the Python code. | python subprocess popen starts immediately | 1.2 | 0 | 0 | 524
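A rough sketch of the accepted approach, assuming a Linux system where libc is available as libc.so.6 and PTRACE_TRACEME is the constant 0; the parent still has to resume and inspect the child with further ptrace calls, which are not shown:

import ctypes
import subprocess

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PTRACE_TRACEME = 0                               # child asks to be traced by its parent

def stop_before_exec():
    # runs in the child between fork() and exec(), so the command is held
    # until the tracing parent tells it to continue
    libc.ptrace(PTRACE_TRACEME, 0, None, None)

child = subprocess.Popen(['ls', '-l'], preexec_fn=stop_before_exec)
print(child.pid)                                 # the PID is known before the command actually runs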
24,713,228 | 2014-07-12T13:04:00.000 | 1 | 0 | 0 | 0 | python,tkinter | 24,713,651 | 2 | true | 0 | 1 | Use Pygame in place of Tkinter (eventually in place of Canvas) - there are functions to check collisions.
To check a collision you have to get the position of both elements and check the distance between them:
a² + b² = c²
a = x1 - x2, b = y1 - y2, c = distance between objects A(x1,y1) and B(x2,y2)
If the distance is smaller than some value then you have a collision; the distance doesn't have to be zero for a collision. This way you check collision in a circle around the object.
But you can also check collision in a square area around the object; it will be easier for you to calculate.
Object A (x1,y1) has square area x1-10 ... x1+10, y1-10 ... y1+10. You have to check whether object B (x2,y2) is in that square. | 1 | 0 | 0 | Hello I am developing easy space invaders clone game and i need to figure out a way how to detect collision of the bullet and the alien when i shoot. Any suggestions ? Thanks | How to detect collisions of two canvas object Tkinter | 1.2 | 0 | 0 | 6,110 |
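A small runnable sketch of the square-area check described above, assuming bullet and alien are ids of items already drawn on a Tkinter canvas:

def items_collide(canvas, item_a, item_b):
    ax1, ay1, ax2, ay2 = canvas.bbox(item_a)     # bounding box of the first item
    bx1, by1, bx2, by2 = canvas.bbox(item_b)     # bounding box of the second item
    # the two boxes overlap when they intersect on both axes
    return ax1 < bx2 and ax2 > bx1 and ay1 < by2 and ay2 > by1

# usage inside the game loop: if items_collide(canvas, bullet, alien): handle the hit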
24,714,038 | 2014-07-12T14:44:00.000 | 0 | 0 | 0 | 0 | python-2.7 | 24,714,152 | 1 | false | 0 | 0 | Assign logger1 to some variable in module1 and let functions in that module use that variable to call the correct logger. And remember to check in those functions whether the variable is not None. | 1 | 0 | 0 | I have created two loggers in my logging module, logger1 and logger2, and my application has two submodules, module1 and module2. How do I configure/tell module1 to use only logger1 and module2 to use only logger2? | How to tell child modules to use specific logger | 0 | 0 | 0 | 8
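A minimal sketch of that idea, assuming logger1 and logger2 have already been configured (for example via logging.config.fileConfig) before these modules are imported:

# module1.py - always uses logger1
import logging
log = logging.getLogger('logger1')

def do_work():
    log.info('module1 did some work')

# module2.py - always uses logger2
import logging
log = logging.getLogger('logger2')

def do_work():
    log.info('module2 did some work')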
24,715,230 | 2014-07-12T16:54:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,random-forest,one-hot-encoding | 66,810,359 | 5 | false | 0 | 0 | Maybe you can use 1~4 to replace these four color, that is, it is the number rather than the color name in that column. And then the column with number can be used in the models | 2 | 71 | 1 | Say I have a categorical feature, color, which takes the values
['red', 'blue', 'green', 'orange'],
and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them.
I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that. | Can sklearn random forest directly handle categorical features? | 0.039979 | 0 | 0 | 72,780 |
24,715,230 | 2014-07-12T16:54:00.000 | 16 | 0 | 0 | 0 | python,scikit-learn,random-forest,one-hot-encoding | 35,471,754 | 5 | false | 0 | 0 | You have to make the categorical variable into a series of dummy variables. Yes I know its annoying and seems unnecessary but that is how sklearn works.
If you are using pandas, use pd.get_dummies; it works really well.
['red', 'blue', 'green', 'orange'],
and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them.
I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that. | Can sklearn random forest directly handle categorical features? | 1 | 0 | 0 | 72,780 |
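A small sketch of the pd.get_dummies approach with a made-up data frame:

import pandas as pd

df = pd.DataFrame({'color': ['red', 'blue', 'green', 'orange'], 'y': [1, 0, 1, 0]})
dummies = pd.get_dummies(df['color'], prefix='color')        # one 0/1 column per colour
X = pd.concat([df.drop(['color', 'y'], axis=1), dummies], axis=1)
y = df['y']                                                   # X and y can now be fed to a random forest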
24,717,307 | 2014-07-12T21:04:00.000 | 0 | 0 | 0 | 0 | python,django,django-users,python-3.4 | 24,718,066 | 2 | false | 1 | 0 | I was just talking to an advanced developer friend of mine about this. He said using djangos users is frowned upon and to build it out separately. I don't know much more on this but it's something I will be doing in the future. | 1 | 0 | 0 | I have a site that - other than the signup process - will be only used by logged in users. It's my first Django site, and I'm wondering whether I can use the Django user model (slightly extended) to work with all my users, or should it only be used for administrative users such as myself?
Apologies if this is a stupid question. Additionally, and either way, what's the best way to manage user registrations? It'd be awesome if this were built into Django, but it's not, and I read django-registration is relatively abandoned. Any recommendations welcome. | Using Django Users for all logged in users, and registering them | 0 | 0 | 0 | 48 |
24,717,941 | 2014-07-12T22:39:00.000 | 0 | 0 | 0 | 0 | python,django,upgrade | 24,718,048 | 2 | false | 1 | 0 | You can always create a dump of your database if you are afraid of losing data. | 2 | 0 | 0 | What is the best way of writing a unit test in Django that tests the validity of current database information when Django is upgraded?
My unit tests create new information in the databases when they are run, but this isn't helpful for safely upgrading. | How do I test that Django upgrades don't break the current database? | 0 | 0 | 0 | 50 |
24,717,941 | 2014-07-12T22:39:00.000 | 0 | 0 | 0 | 0 | python,django,upgrade | 24,731,271 | 2 | false | 1 | 0 | What do you think could happen when you upgrade Django? Django updated your files not your database without permission from you. So you could upgrade your Django and run all your tests (local) to see if nothing is broken. | 2 | 0 | 0 | What is the best way of writing a unit test in Django that tests the validity of current database information when Django is upgraded?
My unit tests create new information in the databases when they are run, but this isn't helpful for safely upgrading. | How do I test that Django upgrades don't break the current database? | 0 | 0 | 0 | 50 |
24,718,142 | 2014-07-12T23:15:00.000 | 0 | 0 | 0 | 0 | python,webkit,gtk,pygtk | 24,722,441 | 1 | false | 1 | 1 | why do you want sync those scrollbars? You can achieve this by using the same Gtk.Adjustment (number of pages sets to 0).
I haven't used much of webkit but it is essentially a widget. So maybe a workaround would be to disconnect the "value-changed" signal from the Gtk.Adjustment until the "load-status" signal from the WebKitView reaches Webkit.LoadStatus.FINISHED (if that's the correct syntax).
If that doesn't work, maybe you can use WebKitView.move_cursor() (if I remember the function properly) based on the Gtk.Adjustment of your text view (we use 2 adjustments this time).
The web view is basically taking the text from the text view and rendering it as marked up HTML via webview.load_html_string(). I think the problem could be the delay in loading the HTML as every time the web view is refreshed it is scrolled back to the very start.
Right now I call a function every time the content of the text view is changed and then modify the vadjustment.value of the scrolled window containing the web view.
But this doesn't work. Is it because of the delay? I can't think of any way to solve this issue. | pygtk TextView and WebKit.WebView synchronized scrolling | 0 | 0 | 0 | 221 |
24,718,274 | 2014-07-12T23:42:00.000 | 0 | 0 | 1 | 0 | python | 24,718,290 | 1 | false | 0 | 0 | The only dangers are the typical concurrency issues you'd face in this situation. Be sure to either use Lock objects inside your logging method, or use them in bSoupProcessor before calling it. | 1 | 0 | 0 | In the main code, I have an instance of a class called "debugPrinterObject".
After instantiating it, I pass one of its functions as an argument to another class called "bSoupProcessor" which processes text. Any logging information is saved to a text file using the function passed into the constructor of the bSoupProcessor class.
This is done so that the file is held open by the debugPrinterObject, and editable through the function passed as an argument. the text file is only closed at the end of the program.
It is working so far. I am going to implement multi threading, where there will be multiple "bSoupProcessor" classes, and they will all be using the same function of the "debugPrinterObject". Is this possible? are there any problems/risks? | Python: What are the dangers of passing a specific object's function as an argument | 0 | 0 | 0 | 42 |
24,718,697 | 2014-07-13T01:08:00.000 | 1 | 0 | 0 | 0 | python,apache-spark,pyspark | 24,736,966 | 6 | false | 0 | 0 | Personally I think just using a filter to get rid of this stuff is the easiest way. But per your comment I have another approach. Glom the RDD so each partition is an array (I'm assuming you have 1 file per partition, and each file has the offending row on top) and then just skip the first element (this is with the scala api).
data.glom().map(x => for (elem <- x.drop(1)) {/*do stuff*/}) // x is an array, so just skip the 0th index
Keep in mind one of the big features of RDD's is that they are immutable, so naturally removing a row is a tricky thing to do
UPDATE:
Better solution.
rdd.mapPartitions(x => for (elem <- x.drop(1)) {/*do stuff*/})
Same as the glom but doesn't have the overhead of putting everything into an array, since x is an iterator in this case | 1 | 28 | 1 | how do you drop rows from an RDD in PySpark? Particularly the first row, since that tends to contain column names in my datasets. From perusing the API, I can't seem to find an easy way to do this. Of course I could do this via Bash / HDFS, but I just want to know if this can be done from within PySpark. | PySpark Drop Rows | 0.033321 | 0 | 0 | 49,327 |
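A Python sketch of the same mapPartitions idea, assuming rdd is the already-loaded RDD and, as in the answer, each partition starts with one header row:

def drop_first(iterator):
    it = iter(iterator)
    next(it, None)                    # throw away the first element of the partition (the header)
    return it

cleaned = rdd.mapPartitions(drop_first)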
24,719,421 | 2014-07-13T03:59:00.000 | 1 | 0 | 1 | 1 | python,windows,logging,logrotate,log-rotation | 24,719,986 | 4 | false | 0 | 0 | Firstly the issue is that, if you use a config file to initialise logging with file and console handlers, then it does not populate logging.handlers list, so you can not iterate over it and close+flush the streams prior to opening new one with a new logging file name.
If you want to use TimeRotatingFileHandler or RotatingFileHandler, it sits under logging/handler.py and when it tries to do a roll over, it only closes its own stream, as it has no idea what streams the parent logging (mostly singleton) class may have open. And so when you do a roll over, there is a file lock (file filehandler) and boom it all fails.
So the solution (for me) is to initialise logging programatically and use addHandlers on logging, which also populates logging.handlers [], which I then use to iterate over my console/file handler and close them prior to manually rotating the file.
It to me looks like an obvious bug with the logging class, and if its working on unix - it really shouldn't.
Thanks everyone, especially @falsetru for your help. | 2 | 4 | 0 | So I do logging.config.fileConfig to setup my logging from a file config that has console and file handler. Then I do logging.getLogger(name) to get my logger and log. At certain times I want the filehandler's filename to change i.e. log rotate (I can't use time rotator because of some issues with Windows platform) so to do that I call logger.handlers - it shows an empty list, so I cant close them!! However when I step through the debugger, its clearly not empty (well of course without it I wouldn't be able to log right)
Not sure whats going on here, any gotchas that I'm missing?
Appreciate any help. Thanks. | Logging Handlers Empty - Why Logging TimeRoatingFileHandler doesn't work | 0.049958 | 0 | 0 | 3,512 |
24,719,421 | 2014-07-13T03:59:00.000 | 0 | 0 | 1 | 1 | python,windows,logging,logrotate,log-rotation | 54,827,449 | 4 | false | 0 | 0 | Maybe there is no such name as 'TimeRoatingFileHandler' because you missed 'd' in word 'Timed'. So it must be: 'TimedRoatingFileHandler' | 2 | 4 | 0 | So I do logging.config.fileConfig to setup my logging from a file config that has console and file handler. Then I do logging.getLogger(name) to get my logger and log. At certain times I want the filehandler's filename to change i.e. log rotate (I can't use time rotator because of some issues with Windows platform) so to do that I call logger.handlers - it shows an empty list, so I cant close them!! However when I step through the debugger, its clearly not empty (well of course without it I wouldn't be able to log right)
Not sure whats going on here, any gotchas that I'm missing?
Appreciate any help. Thanks. | Logging Handlers Empty - Why Logging TimeRoatingFileHandler doesn't work | 0 | 0 | 0 | 3,512 |
24,722,087 | 2014-07-13T11:16:00.000 | 0 | 0 | 0 | 0 | python,django,forms,recursion,django-forms | 25,321,619 | 4 | false | 1 | 0 | In first glance, your idea seems to be complex. And the immediate question is - "Why do need such feature?". 99% - of tasks can be solved with built-in Django "bricks", another 1% - "Please hardcode". Also I may assume that the problem behind such an idea is complex too, and will be hard understandable by end user. | 2 | 2 | 0 | I like Django forms library, but it would be even better if a form could contain forms.
My dream looks like this:
I have a form which behaves like normal form: for example a class called SuperForm
SuperForm can contain several normal forms, or even (recursive) SuperForms
You can bind it to data (to make it bound), call is_valid() ...
Is this possible with django or an external app?
Update
I see many people did not understand what I want. My fault, I did not give a use case.
Use case: A page should allow the user to update his email and his telephone number. The email is from django.contrib.auth and the phone number is from our custom model.
Both inputs should be in a single <form> tag. Since ModelForm is easy to use, I don't want to create a form myself by hand.
I want a container which contains the ModelForm of django.contrib.auth.models.User and the ModelForm of our custom model.
I don't want to loop over both forms (in other use cases there could be much more forms) myself, and check whether they are valid or not.
If I call is_valid() or save() on the container the matching method of the forms gets called. | Django: Put a Form into a Form (recursive ...) | 0 | 0 | 0 | 564 |
24,722,087 | 2014-07-13T11:16:00.000 | -1 | 0 | 0 | 0 | python,django,forms,recursion,django-forms | 25,411,066 | 4 | false | 1 | 0 | I think you are thinking about forms in a different way than what you should.
Forms are simply there for input. How they are processed is up to you.
Sure you can run is_valid on the form, and that would check against the modelform that you have assigned. The awesome thing is you can check multiple model forms on the same form. The data that is needed for the model form will be processed the extra data would be ignored. If the is_valid fails you still pass the data back like you normally would to be corrected. | 2 | 2 | 0 | I like Django forms library, but it would be even better if a form could contain forms.
My dream looks like this:
I have a form which behaves like normal form: for example a class called SuperForm
SuperForm can contain several normal forms, or even (recursive) SuperForms
You can bind it to data (to make it bound), call is_valid() ...
Is this possible with django or an external app?
Update
I see many people did not understand what I want. My fault, I did not give a use case.
Use case: A page should allow the user to update his email and his telephone number. The email is from django.contrib.auth and the phone number is from our custom model.
Both inputs should be in a single <form> tag. Since ModelForm is easy to use, I don't want to create a form myself by hand.
I want a container which contains the ModelForm of django.contrib.auth.models.User and the ModelForm of our custom model.
I don't want to loop over both forms (in other use cases there could be much more forms) myself, and check whether they are valid or not.
If I call is_valid() or save() on the container the matching method of the forms gets called. | Django: Put a Form into a Form (recursive ...) | -0.049958 | 0 | 0 | 564 |
24,723,547 | 2014-07-13T14:10:00.000 | 4 | 0 | 1 | 1 | python,python-2.7,command-line,packaging | 61,834,365 | 4 | false | 0 | 0 | Just change the name __init__.py file to __main__.py | 2 | 19 | 0 | I'm trying to release my first Python package in the wild and I was successful in setting it up on PyPi and able to do a pip install. When I try to run the package via the command line ($ python etlTest), I receive the following error:
/usr/bin/python: can't find '__main__' module in 'etlTest'
When I run the code directly from my IDE, it works without issue. I am using Python 2.7 and have __init__.py scripts where required. What do I need to do to get this working? | Received 'can't find '__main__' module in '' with python package | 0.197375 | 0 | 0 | 65,827 |
24,723,547 | 2014-07-14T14:10:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,command-line,packaging | 67,251,553 | 4 | false | 0 | 0 | I had the same problem and solved it by making sure I was in the correct directory of the package I was trying to run.
For Windows, type dir in the console, while on Linux/macOS - ls to see your current directory | 2 | 19 | 0 | I'm trying to release my first Python package in the wild and I was successful in setting it up on PyPi and able to do a pip install. When I try to run the package via the command line ($ python etlTest), I receive the following error:
/usr/bin/python: can't find '__main__' module in 'etlTest'
When I run the code directly from my IDE, it works without issue. I am using Python 2.7 and have __init__.py scripts where required. What do I need to do to get this working? | Received 'can't find '__main__' module in '' with python package | 0.049958 | 0 | 0 | 65,827 |
24,727,096 | 2014-07-13T21:21:00.000 | 1 | 0 | 0 | 0 | python,web | 24,727,209 | 4 | false | 0 | 0 | Python comes bundled with the sqlite3 module, which gives access to SQLite databases. The only downside is that pretty much only one thread can hold a write lock on it at any given moment. | 2 | 0 | 0 | I don't have access PHP server nor database like Mysql on machine I'll be working on. Would it be feasible to use Python instead of PHP and flat file database instead of Mysql? I'm not too concerned about performance or scalability. It's not like I'm going to create next facebook. I just want to load data from server and show it on webpage and possibly handle some input forms. Also is there any major flaw with my reasoning? Or is there any other way to circumvent lack of PHP and database on server? | Using Python and flat file database for server-side | 0.049958 | 1 | 0 | 1,786
24,727,096 | 2014-07-13T21:21:00.000 | 1 | 0 | 0 | 0 | python,web | 24,727,365 | 4 | false | 0 | 0 | There are many ways to serve Python applications, but you should probably look at something that does this using the WSGI standard. Many frameworks will let you do this e.g: Pyramid, Pylons, Django, .....
If you haven't picked one then it would be worth looking at your long term requirements and also what you already know.
In terms of DB, there are many choices. SQLlite has been mentioned, but there are many other DBs that don't require a server process. If you're only storing a small amount of data then flat files may work for you, but anything bigger or more relational, then look at SQLlite. | 2 | 0 | 0 | I don't have access PHP server nor database like Mysql on machine I'll be working on. Would it be feasible to use Python instead of PHP and flat file database instead of Mysql? I'm not too concerned about performance or scalability. It's not like I'm going to create next facebook. I just want to load data from server and show it on webpage and possibly handle some input forms. Also is there any major flaw with my reasoning? Or is there any other way to circumvent lack of PHP and database on server? | Using Python and flat file database for server-side | 0.049958 | 1 | 0 | 1,786 |
24,728,191 | 2014-07-14T00:41:00.000 | 0 | 0 | 1 | 0 | python,mongodb | 24,729,803 | 1 | false | 0 | 0 | For me you have to store specific values you'll search on, and index them.
For example, alongside the date, you may store "year", "month", and "day", index on "month" and "day", and run your queries on them.
You may want to store them as "y", "m", and "d" to gain some bytes (That's sad, I know). | 1 | 0 | 0 | I need to be able to query documents that have a date field between some range, but sometimes in my dataset the year doesn't matter (this is represented with a boolean flag in the mongo document).
So, for example, I might have a document for Christmas (12/25-- year doesn't matter) and another document for 2014 World Cup Final Match (8/13/2014). If the user searches for dates between 8/1/2014 and 12/31/2014, both of those documents should match, but another document for 2010 World Cup Final Match would not.
All approaches I've gotten to work so far have used a complicated nesting of $and and $or statements, which ends up being too slow for production, even with indexes set appropriately. Is there a simple or ideal way to handle this kind of conditional date searching in mongo? | Mongo query on custom date system | 0 | 1 | 0 | 61 |
24,728,678 | 2014-07-14T02:08:00.000 | 1 | 1 | 0 | 0 | python-2.7,ssid,wifi | 24,738,477 | 1 | false | 0 | 0 | aircrack-ng suite use airbase-ng to broadcast or hostapd (if you want to do more than just broadcast). In terms of python not really, you could use subprocess and execute airbase-ng through your script. If you want pure python, best to get Scapy and do it through there. | 1 | 1 | 0 | Hi all I am trying to write a python code to broadcast an SSID created by using Python.
Is there a library written for something like that which I could install?
Is it really possible to write code that makes my wifi card broadcast an SSID I created? | Wireless SSID broadcast using Python | 0.197375 | 0 | 0 | 633
24,729,427 | 2014-07-14T04:13:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine | 24,765,014 | 1 | false | 1 | 0 | I think I have found the answer to my own question.
I have a small app that I wrote to back up my stuff to Google Drive; this app appears to have an error in it that does not stop it from running but does cause it to make a file called
C:\Usera\myname\Google
Therefore GAE can not create a directory called C:\Usera\myname/Google nor a file called C:\Usera\myname/Google\google_appengine_launcher.ini
I deleted the file Google, made a directory called Google and ran GAE, saved preferences and everything is working | 1 | 1 | 0 | Just installed Google Apps Engine and am getting "could not save" errors.
Specifically if I go in to preferences I get
Could not save into preference file
C:\Usera\myname/Google\google_appengine_launcher.ini:No such file or directory.
So somehow I have a weird path; I would like to know where and how to change this. I have searched but found nothing, and I have done a repair reinstall of GAE
Can find nothing in the registry for google_appengine_launcher.ini
I first saw the error when I created my first Application
Called hellowd
Parent Directory: C:\Users\myname\workspace
Runtime 2.7 (PATH has this path)
Port 8080
Admin port 8080
click create
Error:
Could not save into project file
C:\Users\myname/Google\google_appengine_launcher.ini:No such file or directory.
Thanks | could not save preference file google-apps-engine | 0 | 0 | 0 | 53 |
24,729,475 | 2014-07-14T04:19:00.000 | 0 | 0 | 1 | 0 | python | 24,729,519 | 2 | false | 0 | 0 | Dictionaries are fine if you have indices which are strings, and you don't have to make them up. So if you work by name, or something, a dictionary is easy to use.
If you just use numbers, I'd make a list of lists. You can rapidly set up a list of N empty lists by doing e.g. list_of_lists = [[] for _ in range(N)]. You can then populate each list with the data. Manage a small second list in parallel, with the max values of each of the sub-lists.
Doing this manually can be a little tedious, but if you define a class for what you are doing, you can hide the complexity in the class, and easily reuse it. | 1 | 1 | 0 | My problem has to do with statistics and the creation of a dynamic number of variables in Python.
I have a data set of values from 0-100.
If the user enters 5 upper limits, 20, 40, 60, 80, 100, the program should sort the values into 5 classes, list1, list2, list3, list4, list5. If the user enters 4 upper limits, 25, 50, 75, 100, the program should sort the values into 4 classes, list1, list2, list3, list4.
Then, I need to find the average for each list, eg, list1Average, list2Average, list3Average, list4Average and store these average values in another list, averagesList.
Finally, I need to subtract the average for each respective class (list1Average, list2Average, list3Average, list4Average) from each value in the dataset. i.e. subtract list1Average from each value in list1, subtract list2Average from each value in list2, etc, and store those derived values in yet another list, classVarianceList.
I've managed to do all of this quite easily, but only when the number of class upper limits is static (I can just do class1.append(i), class2.append(i), etc). But now I'm going insane trying to figure out how to do this when the number of classes can be determined by the user. The main issue is the creation of a dynamic number of variables (lists) to store values and run calculations upon.
I've also read up on Python's dictionaries (because everyone says to use dictionaries to dynamically create variables), and while I get that it ties a key to a value, I just can't for the life of me, figure out how to incorporate this into what I want to do.
Thank you very much for any help! | How can I create a dynamic number of variables? | 0 | 0 | 0 | 77 |
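A short sketch of the list-of-lists approach from the answer above; the data and upper_limits values are placeholders:

data = [5, 12, 37, 64, 88, 93]             # the 0-100 data set
upper_limits = [25, 50, 75, 100]            # entered by the user

classes = [[] for _ in upper_limits]        # one list per class, no named variables needed
for value in data:
    for i, limit in enumerate(upper_limits):
        if value <= limit:
            classes[i].append(value)        # first upper limit the value fits under
            break

averages = [sum(c) / float(len(c)) if c else 0.0 for c in classes]
deviations = [[v - averages[i] for v in c] for i, c in enumerate(classes)]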
24,729,658 | 2014-07-14T04:46:00.000 | 0 | 0 | 0 | 0 | python,webkit,gtk,pygtk,webkitgtk | 24,751,754 | 1 | true | 0 | 1 | you have to put the Gtk.WebKitView inside a Gtk.ScrolledWindow. since it implement a Gtk.Scrollable you need not using a Gtk.Viewport. and put the window inside your Gtk.Box | 1 | 0 | 0 | When I open a site with the webkit webview, the entire window resizes to fit the page not allowing scrollbars. The window's height exceeds my screen height. Also, when a webview is in the window, I can resize the window outward, but I can't resize it inward. A webview won't show at all in a VBox if I do: MyVBox.pack_start(MyWebview, True, True, 0) | PyWebkitGTK loads a website fully vertical into window and will not resize inward | 1.2 | 0 | 0 | 118 |
24,732,112 | 2014-07-14T08:09:00.000 | 0 | 0 | 0 | 0 | python,opencv,camera,detection,hsv | 24,732,678 | 1 | false | 0 | 0 | You can use a while loop and check if the blob region is not null and then find contours!
it would be helpful if you posted your code. We can explain the answer in a better way then. | 1 | 0 | 1 | I recently started using Python and I've been working on an Open CV based project for over a month now.
I am using Simple Thresholding to detect a coloured blob and I have thresholded the HSV values to detect the blob. All works well, but when the blob goes out of the FOV of the camera, the program gets stuck. I was wondering if there could be a while/if condition that I can add at the top of the loop in order to skip the whole loop in case the blob goes outside FOV of the camera and then enter the loop when the blob returns.
Would really appreciate your help on this one! Cheers. | Program gets stuck at finding the Contour while using Open CV | 0 | 0 | 0 | 111 |
24,735,926 | 2014-07-14T11:49:00.000 | 0 | 0 | 1 | 0 | python | 24,736,026 | 4 | false | 0 | 0 | You can split your string into a list using
list1=s.split()
And then check whether each of them is an integer or not. | 1 | 0 | 0 | For example, if my input was "1 2 3", how do I check that each part is an integer and not anything else, and if there is something else, be able to input the string again so it's correct; otherwise it won't move on. | In Python, If I split a string up, how do i check if each part of it is an integer | 0 | 0 | 0 | 917
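A small sketch of that check using int() conversion instead of isdigit, so negative numbers also pass; raw_input is the Python 2 name (use input on Python 3):

def parse_ints(text):
    parts = text.split()
    try:
        return [int(p) for p in parts]      # every part must convert cleanly to an integer
    except ValueError:
        return None

values = None
while values is None:                        # keep asking until the whole line is integers
    values = parse_ints(raw_input("enter integers separated by spaces: "))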
24,736,316 | 2014-07-14T12:13:00.000 | 0 | 0 | 1 | 0 | python,installation,pip,package | 24,736,486 | 7 | false | 0 | 0 | pip freeze gives you all the installed packages. Assuming you know the folder:
time.ctime(os.path.getctime(file))
should give you the creation time of a file, i.e. date of when the package has been installed or updated. | 1 | 41 | 0 | I know how to see installed Python packages using pip, just use pip freeze. But is there any way to see the date and time when package is installed or updated with pip? | See when packages were installed / updated using pip | 0 | 0 | 0 | 36,060 |
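A rough sketch combining the two ideas above; it assumes packages live in the default site-packages folder and that the folder's ctime approximates the install/update time:

import os
import time
from distutils.sysconfig import get_python_lib

site_packages = get_python_lib()                       # folder pip installs into
for name in sorted(os.listdir(site_packages)):
    path = os.path.join(site_packages, name)
    stamp = time.ctime(os.path.getctime(path))         # creation/change time of the entry
    print("{}  {}".format(stamp, name))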
24,736,440 | 2014-07-14T12:21:00.000 | 0 | 0 | 1 | 0 | c#,python,file,download,directory | 24,736,474 | 2 | false | 0 | 0 | When you download, you get the file size. You can check the file size before writing to the file; if the file size matches the download size, then allow writing. | 2 | 0 | 0 | I have a mix of Python and C# code that scans a list of directories and manipulates their files in a loop.
Sometimes there is a download directly into the incoming directory and the program starts manipulating the file before the download has completed.
Is there any way to detect if the file has finished downloading? | How to detect if file is downloading in c# or python | 0 | 0 | 0 | 400
24,736,440 | 2014-07-14T12:21:00.000 | 0 | 0 | 1 | 0 | c#,python,file,download,directory | 24,737,525 | 2 | true | 0 | 0 | A simple way to detect if the file is done downloading is to compare file size. If you always keep a previous "snapshot" of the files in the current directory you will be able to see which files exist and which don't at a given moment in time. Once you see an new file you know that the file has started to download. From this point you can compare the file size of that file and once the previous file size is equal to the current file size you know the file has finished downloading. Each time you would take a new "snapshot" it would be, for example 1ms after the previous. This may not be simple to implement depending on your knowledge of python or C# but I think this algorithm would get you what you want. | 2 | 0 | 0 | I have a mix python-C# code that scans list of directories and manipulate it files in a loop.
Sometimes there is a download directly into the incoming directory and the program starts manipulating the file before the download has completed.
Is there any way to detect if the file has finished downloading? | How to detect if file is downloading in c# or python | 1.2 | 0 | 0 | 400
24,737,909 | 2014-07-14T13:43:00.000 | -2 | 1 | 1 | 0 | python | 24,739,463 | 3 | false | 0 | 0 | Don't.
Python is not C++ and using patterns that worked before are silly in Python. In particular, Python is not a "Bondage and Domination" language where phrases like "thereby strictly controlling creation" don't apply.
"If you didn't want to instantiate a UsefulClass then why did you?" — me.
If you can't trust yourself or your colleagues to read and follow the code's internal documentation, you're screwed regardless of the implementation language. | 1 | 0 | 0 | C++ programmer here.
In Python, how do you make sure that a particular class (e.g. UsefulClass) can only be created through its related factory class (e.g. FactoryClass)? But, at the same time the public methods of UsefulClass are callable directly?
In C++ this can be easily achieved by making the relevant methods of UsefulClass public, and by making its default constructor (and any other constructors) private. The related FactoryClass (which can be a "friend" of the UsefulClass) can return instances of UsefulClass and thereby strictly controlling creation, while allowing the user to directly call the public methods of UsefulClass.
Thanks. | Only creating object through factory class in Python - factory class related | -0.132549 | 0 | 0 | 136 |
24,738,503 | 2014-07-14T14:10:00.000 | 0 | 0 | 0 | 1 | python,gstreamer,playbin2 | 24,807,444 | 1 | true | 0 | 0 | The best way to do it really synchronized with the video would be to use something like the cairooverlay element and do the rendering yourself directly inside the pipeline, based on the actual timestamps of the frames. Or alternatively write your own element for doing that.
The easiest solution if timing is not needed to be super accurate would be to use the pipeline clock. You can get it from the pipeline once it started, and then could create single shot (or periodic) clock ids for the time or interval you want. And then use the async_wait() method on the clock.
To get the clock time that corresponds to e.g. the position 1 second of the pipeline you would add 1 second (i.e. 1000000000) to the pipeline's base time. You can use that value then when creating the clock ids. | 1 | 0 | 0 | In my Python program I use GStreamer's playbin in combination with a textoverlay to play a video file and show some text on top of it.
This works fine: If I change the text property of the textoverlay then the new text is shown.
But now I want to set the text based on the video's current position/time (like subtitles).
I read about a pipeline's clock, buffer's timestamps, segment-events and external timers which query the current time every x millisecs. But what is the best practice to get informed about time-changes so that I can show the correct text as soon as possible? | GStreamer timing in Python | 1.2 | 0 | 0 | 739 |
24,739,390 | 2014-07-14T14:52:00.000 | 10 | 0 | 0 | 0 | python,plot,bokeh | 24,967,653 | 2 | false | 0 | 0 | as of 0.5.1 there is now bokeh.plotting.reset_output that will clear all output_modes and state. This is especially useful in situations where a new interpreter is not started in between executions (e.g., Spyder and the notebook) | 1 | 9 | 1 | Before I updated, I would run my script and output the html file. There would be my one plot in the window. I would make changes to my script, run it, output the html file, look at the new plot. Then I installed the library again to update it using conda. I made some changes to my script, ran it again, and the output file included both the plot before I made some changes AND a plot including the changes. I ran the script again out of curiosity. Three plots in the one file! Ran it again. Four! Deleted the html file (instead of overwriting). Five! Changed the name of the output html file. Six! I even tried changing the name of the script. The plots just keep piling up.
What's going on? Why is it plotting every version of the graph I've ever made? | Updated Bokeh to 0.5.0, now plots all previous versions of graph in one window | 1 | 0 | 0 | 2,333 |
24,741,712 | 2014-07-14T16:54:00.000 | 0 | 0 | 0 | 0 | python,django,rabbitmq,celery | 24,742,679 | 2 | false | 1 | 0 | I don't know why I didn't think of this sooner, I added in a unique_together clause which will prevent another like_object from being created. | 1 | 0 | 0 | I'm currently looking for a solution that will prevent a user from making multiple requests at the same time. I would like the first request to finish before the I process the second request from the user. For example, lets say user adam liked and un-liked suzy's photo. Both unliking and liking of the photo happens in the same view.
Currently the problem that I've been having is that I'm processing both requests at the same time. In the case of the like view, when a user likes something, I create a like_object. When the user decides to unlike something, I check for the existence of the like_object in the database and then delete it. HOWEVER! If the first request hasn't finished yet, the check for the like_object in the second request will come back saying there is no like object and it will create a second like_object.
Once all of this is finished processing, I will end up having 2 like_objects for the same photo from the same user. This is bad.
To give you more information, I use Gunicorn as my HTTP server. I run 3 regular workers, which is why each request is processed at the same time.
So what do you think I could do? I mean, I was thinking for using Celery and RabbitMQ for this. Each request will be submitted into a queue and be processed asynchronously. That's one option. But I feel like that might be overkill in a situation like this. I'm looking for something that can be done directly within Django. Hmm, let me know of the possible solutions.
Thanks | Django: Finish processing one request from a user before proceeding to the next | 0 | 0 | 0 | 259 |
24,743,340 | 2014-07-14T18:31:00.000 | 0 | 1 | 1 | 0 | python,performance,file,io | 24,743,892 | 2 | false | 0 | 0 | Such a question can be answered only by real measurement.
You should create a simple test scenario that reads and writes files of a similar type and size without the actual calculation.
You can do profiling and check how much time you spend on I/O operations and how much on processing the content. It might turn out that even with I/O running at the speed of light you will not improve the performance remarkably.
Without measuring the time one can only guess and my estimation is:
if you use default buffering, you will not see big differences.
in case reading more lines at once would speed up the processing, you could play with setting up a larger buffer for file operations. This could speed up the process even while processing line by line, keeping your processing code simple.
Personally, I would prefer to preserve current simple line by line processing, unless performance and gains would be really significant. | 1 | 0 | 0 | I have a Python script that reads a line of data from a source file, performs a set of calculations on that data and writes the results of those calculations to the output file. The script is currently coded to read one line at a time from the source file until the end of the source file is reached.
Can I improve the execution time of the script by reading multiple lines from the source file, performing the calculations and writing the results to the output file?
Do I take a performance hit by having large numbers of read / write instances?
I ask the question rather than perform a test due to the difficulty of changing the code. | Performance Tradeoff Reading From One File, Perfoming An Action and Writing To Another File | 0 | 0 | 0 | 41 |
24,743,758 | 2014-07-14T18:57:00.000 | 2 | 0 | 0 | 0 | python,graphics,pygame | 24,744,048 | 2 | false | 0 | 1 | I dont fully understand your question, but to attempt to answer it here is the following.
No you should not fully draw to the screen then scale it. This is the wrong approach. You should tile very large surfaces and only draw the relevant tiles. If you need a very large view, you should use a scaled down image (pre-scaled). Probably because the amount of memory required to draw an extremely large surface is prohibitive, and scaling it will be slow.
Convert the coordinates to the tiled version using some sort of global matrix that scales everything to the size you expect. So you should also filter out sprites that are not visible by testing their inclusion inside the bounding box of your view port. Keep track of your view port position. You will be able to calculate where in the view port each sprite should be located based on its "world" coordinates. | 2 | 1 | 0 | I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter based position to screen positioning using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is 1:1 scale of real world and then scale it right before i blit it to the screen.
Is this the right way to be doing this? I get an error that says Width or Height too large. Is this a limit of pygame? | Pygame Large Surfaces | 0.197375 | 0 | 0 | 1,634 |
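A minimal sketch of the view-port filtering idea from that answer, assuming each sprite keeps a pygame.Rect in world coordinates and leaving scaling out:

import pygame

def draw_visible(screen, sprites, cam_x, cam_y):
    # view port rectangle expressed in world coordinates
    camera = pygame.Rect(cam_x, cam_y, screen.get_width(), screen.get_height())
    for sprite in sprites:
        if camera.colliderect(sprite.rect):             # skip everything outside the view
            screen.blit(sprite.image, (sprite.rect.x - cam_x, sprite.rect.y - cam_y))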
24,743,758 | 2014-07-14T18:57:00.000 | 0 | 0 | 0 | 0 | python,graphics,pygame | 24,744,442 | 2 | false | 0 | 1 | If your map is not dynamic, I would suggest drawing the map outside the game and loading it in the game.
If you plan on converting the game environment into a map, it might be difficult for a large environment. 100,000mm x 200,000mm is a very large area when converted into pixels. I would suggest scaling it down before loading.
As for scaling in-game, you can use pygame.transform.rotozoom or pygame.transform.smoothscale.
Also like the first answer mentions, scaling can take significant memory and time for very large images. Scaling a very large image to a very small image can make the image incomprehensible. | 2 | 1 | 0 | I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter based position to screen positioning using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is 1:1 scale of real world and then scale it right before i blit it to the screen.
Is this the right way to be doing this? I get an error that says Width or Height too large. Is this a limit of pygame? | Pygame Large Surfaces | 0 | 0 | 0 | 1,634 |
24,744,409 | 2014-07-14T19:33:00.000 | 2 | 0 | 0 | 0 | python,scikit-learn | 24,757,540 | 1 | true | 0 | 0 | Use the partial_fit method on the naive Bayes estimator. | 1 | 0 | 1 | I'm building a NaiveBayes classifier using scikit-learn, and so far things are going well if I have a set body of data to train. However, for the particular project I'm working on, there will be new data coming in every day that ideally would be part of the training set.
I'm aware that you can pickle the classifier to store it for later use, but is there any way to "update" the classifier with new data?
Re-training the classifier from scratch every day is obviously an option, but that would require drawing a lot of historical data each time, for a growing time period. | Updating a NaiveBayes Classifier (in scikit-learn) over time | 1.2 | 0 | 0 | 346 |
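A small sketch of partial_fit on a naive Bayes estimator; the daily batches here are random placeholder data:

import numpy as np
from sklearn.naive_bayes import MultinomialNB

X_day1 = np.random.randint(0, 5, size=(100, 20))       # placeholder for the first day's features
y_day1 = np.random.randint(0, 2, size=100)
X_day2 = np.random.randint(0, 5, size=(30, 20))        # placeholder for the next day's new data
y_day2 = np.random.randint(0, 2, size=30)

clf = MultinomialNB()
clf.partial_fit(X_day1, y_day1, classes=np.array([0, 1]))   # first call must declare all classes
clf.partial_fit(X_day2, y_day2)                              # later calls just update the counts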
24,744,701 | 2014-07-14T19:49:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,neural-network,gpu,theano | 24,765,554 | 1 | true | 0 | 0 | For a plain CudaNdarray variable, something like this should work:
'''x = CudaNdarray... x_new=theano.tensor.TensorVariable(CudaNdarrayType([False] * tensor_dim))
f = theano.function([x_new], x_new)
converted_x = f(x)
''' | 1 | 1 | 1 | I'm trying to convert a pylearn2 GPU model to a CPU compatible version for prediction on a remote server -- how can I convert CudaNdarraySharedVariable's to TensorVariable's to avoid an error calling cuda code on a GPU-less machine? The experimental theano flag unpickle_gpu_to_cpu seems to have left a few CudaNdarraySharedVariable's hanging around (specifically model.layers[n].transformer._W). | Convert CudaNdarraySharedVariable to TensorVariable | 1.2 | 0 | 0 | 505 |
24,749,764 | 2014-07-15T04:15:00.000 | 2 | 0 | 1 | 1 | ipython,tornado,ipython-notebook | 27,664,732 | 2 | false | 0 | 0 | Errno 5 is a low level error usually reported when your disk has bad sectors.
I don't think the error is related to the file or ipython, check your disk with an appropriate tool (fsck if you are using Linux). | 1 | 2 | 0 | I have a git folder with several ipython notebook files in it. I've just got a new comp and installed ipython. When I open some files, it works fine, others, however, display this error:
Error loading notebook, bad request.
The log looks like:
2014-07-16 00:20:11.523 [NotebookApp] WARNING | Unreadable Notebook: /nas-6000/wclab/Ahmed/Notebook/01 - Boundary Layer.ipynb [Errno 5] Input/output error
WARNING:tornado.access:400 GET /api/notebooks/01%20-%20Boundary%20Layer.ipynb?_=1405434011080 (127.0.0.1) 3.00ms referer=linktofile
The read/write and owner permissions are the same for each of the files. The files open fine on my other computers, it's just this new one. Any ideas?
Cheers,
James | iPython notebook won't open some files | 0.197375 | 0 | 0 | 1,542 |
24,749,992 | 2014-07-15T04:44:00.000 | 0 | 0 | 0 | 0 | python,google-drive-api | 24,760,552 | 1 | true | 0 | 0 | Not possible as you already found out since its not on the docs. | 1 | 2 | 0 | I have a very large file hierarchy in Google Drive which follows a standard naming convention.
Is it possible to issue a list query that returns Folder Name and ID for all folders matching a specific pattern?
I can't seem to find this in the documentation, so I'm thinking that it may not be possible. | Is it possible to query Google Drive via SDK using a regular expression? | 1.2 | 0 | 0 | 87 |
24,754,321 | 2014-07-15T09:17:00.000 | 0 | 0 | 0 | 1 | python,mongodb | 45,618,082 | 1 | false | 0 | 0 | You need to make sure you're running mongod in another terminal tab first. | 1 | 1 | 0 | I am a newbie in Python and has installed MongoDB but each time I try to run mongo.exe from command prompt C:\Program Files\MongoDB 2.6 Standard\bin>mongo.exe, it issues the following:
MongoDB shell version: 2.6.3
connecting to: test
2014-07-15T10:02:02.670+0100 warning: Failed to connect to 127.0.0.1:27017, reason: errno:10061 No connection could be made because the target machine actively refused it.
2014-07-15T10:02:02.672+0100 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
How can I resolve this? Thank you. | MongoDB Not Running from Command Prompt | 0 | 0 | 0 | 254 |
24,755,246 | 2014-07-15T10:00:00.000 | 3 | 0 | 1 | 0 | python,console,sublimetext2,shortcut | 37,505,351 | 3 | false | 0 | 0 | A new window will always have a clean console. You can use this to open a new window, close the old one and then reopen your project (assuming it is a saved project and not anonymous). This requires the hot_exit setting to be true, which is the default. | 2 | 47 | 0 | How to clear console in sublime text editor.
I have searched on the internet too, but can't find a proper shortcut for it.
Please provide info | How to clear console in sublime text editor | 0.197375 | 0 | 0 | 24,547 |
24,755,246 | 2014-07-15T10:00:00.000 | 18 | 0 | 1 | 0 | python,console,sublimetext2,shortcut | 42,431,629 | 3 | false | 0 | 0 | I installed the ClearConsole package; then type alt+k to clear the console. | 2 | 47 | 0 | How to clear console in sublime text editor.
I have searched on the internet too, but can't find a proper shortcut for it.
Please provide info | How to clear console in sublime text editor | 1 | 0 | 0 | 24,547 |
24,760,322 | 2014-07-15T14:03:00.000 | 1 | 0 | 0 | 0 | python,pyqtgraph | 25,269,345 | 2 | false | 0 | 1 | While pyqtgraph is awesome, for my use case I found a much better tool to do this.
graphviz is a nice tool to develop Control Flow Graphs quite conveniently, and has a large number of features for this particular problem. | 2 | 0 | 0 | I am trying to visualize a Control Flow Graph in Python using pyqtgraph. I have the following two problems.
How can I visualize the edges with a direction?
How can I visualize a self edge?
I tried looking into the documentation, but couldn't find. Obviously, I didn't get time to read it all! | Edges with Direction in pyqtgraph GraphItem | 0.099668 | 0 | 0 | 375 |
24,760,322 | 2014-07-15T14:03:00.000 | 0 | 0 | 0 | 0 | python,pyqtgraph | 24,763,543 | 2 | true | 0 | 1 | For direction, you might add a pg.ArrowItem at the end of each line (although this could have poor performance for large networks), and for self connections, QtGui.QGraphicsEllipseItem combined with an arrow. | 2 | 0 | 0 | I am trying to visualize a Control Flow Graph in Python using pyqtgraph. I have the following two problems.
How can I visualize the edges with a direction?
How can I visualize a self edge?
I tried looking into the documentation, but couldn't find. Obviously, I didn't get time to read it all! | Edges with Direction in pyqtgraph GraphItem | 1.2 | 0 | 0 | 375 |
24,761,787 | 2014-07-15T15:09:00.000 | 0 | 0 | 0 | 0 | python,image | 24,762,074 | 2 | false | 0 | 0 | It is possible, if you us NumPy and especially numpy.memmap to store the image data. That way the image data looks as if it were in memory but is on the disk using the mmap mechanism. The nice thing is that the numpy.memmap arrays are not more difficult to handle than ordinary arrays.
There is some performance overhead as all memmap arrays are saved to disk. The arrays could be described as "disk-backed arrays", i.e. the data is also kept in RAM as long as possible. This means that if you access some data array very often, it is most likely in memory, and there is no disk read overhead.
So, keep your metadata in a dict in memory, but memmap your bigger arrays.
This is probably the easiest way. However, make sure you have a 64-bit Python in use, as the 32-bit one runs out of addresses at 2 GiB.
Of course, there are a lot of ways to compress the image data. If your data may be compressed, then you might consider using compression to save memory. | 1 | 0 | 1 | I have a large number of images of different categories, e.g. "Cat", "Dog", "Bird". The images have some hierarchical structure, like a dict. So for example the key is the animal name and the value is a list of animal images, e.g. animalPictures[animal][index].
I want to manipulate each image (e.g. compute histogram) and then save the manipulated data in an identical corresponding structure, e.g. animalPictures['bird'][0] has its histogram stored in animalHistograms['bird'][0].
The only issue is I do not have enough memory to load all the images, perform all the manipulations, and create an additional structure of the transformed images.
Is it possible to load an image from disk, manipulate the image, and stream the data to a dict on disk? This way I could work on a per-image basis and not worry about loading everything into memory at once.
Thanks! | Manipulating Large Amounts of Image Data in Python | 0 | 0 | 0 | 258 |
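A minimal sketch of the memmap idea from that answer; the file name and histogram size are made up for illustration:

import numpy as np

# disk-backed array: behaves like a normal ndarray but lives in a file
hist = np.memmap('bird_0_hist.dat', dtype='float64', mode='w+', shape=(256,))
hist[:] = 0.0                         # fill with the computed histogram values here
hist.flush()                          # push the data out to the file

animalHistograms = {'bird': [hist]}   # same nesting as animalPictures, big arrays stay on disk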