Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
28,927,247 | 2015-03-08T13:57:00.000 | 0 | 0 | 0 | 0 | python,django,python-3.4,django-1.7,photologue | 32,932,624 | 3 | false | 1 | 0 | I guess your problem is solved by now, but just in case.. I had the same problem. Looking around in the logs, I found it was caused by me not having consolidated the static files from sortedm2m with the rest of my static files (hence the widget was not working properly). | 2 | 0 | 0 | after I put the photologue on the server, I have no issue with uploading photos.
The issue is that when I create a Gallery from the admin site, I can choose only one photo to attach to it. Even if I select many photos, only one of them ends up linked to the Gallery.
The only way to add photos to a Gallery is by adding them manually to the photologue_gallery_photos table in the database :(
Does anyone know how to solve this? | Gallery in Photologue can have only one Photo | 0 | 1 | 0 | 184
28,931,533 | 2015-03-08T20:30:00.000 | 1 | 0 | 0 | 0 | user-interface,wxpython | 28,946,737 | 1 | false | 0 | 1 | It sounds like you are trying to do this at the time of the creation of the window, before the initial layout and sizing is done, so everything still has its initial size. There are some alternative ways to deal with this situation. You could delay creating the content of the panel until the first size event. Or you could design your sizer layout to be adaptable to any (reasonable) size. Or you could adjust/redo the sizer layout later if needed.
Or perhaps the easiest would be to send a fake size event to the parent or top-level parent so that they redo their normal layout, resizing the child windows, which will cascade to your panel. This may not work in all situations, but it is common enough that there is a window method for it: SendSizeEvent. | 1 | 0 | 0 | I have a Notebook and add two panels to it. When I try to get the usable area of one of the panels, it returns (20,20) -- I imagine this is its size before stuff is put in. I want to get this size before I put stuff in because some proportions depend on it and I'd prefer not to use absolutes.
I haven't set an actual size, I just sized everything with a boxsizer and set proportions. I'd like to get the actual pixel width of the area allotted to the panel (it doesn't take up the entire program window area).
I've tried calling Layout before GetSize, I've tried GetVirtualSize and GetClientSize and any other available size getters; still (20, 20) or sometimes (10,40). I've tried getting the parent size, no go. I get the grandparent size and it shows correctly, but this is the size of the entire program window. Is there a way to get the potentially usable area of a panel? If need be, I can post the layout, but it's long. | wxPython -- GetSize doesn't return actual size | 0.197375 | 0 | 0 | 172
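A minimal sketch of the SendSizeEvent approach from the answer above; the frame/panel structure and the print statement are illustrative:

```python
import wx

class MyPanel(wx.Panel):
    def __init__(self, parent):
        super(MyPanel, self).__init__(parent)
        # Defer size-dependent setup until a real size event arrives.
        self.Bind(wx.EVT_SIZE, self.on_size)

    def on_size(self, event):
        event.Skip()  # let the default sizer handling run as well
        size = self.GetClientSize()
        print("usable area: %d x %d" % (size.width, size.height))

app = wx.App(False)
frame = wx.Frame(None, title="demo")
panel = MyPanel(frame)
frame.Show()
frame.SendSizeEvent()  # force a layout pass so children get their real sizes
app.MainLoop()
```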
28,933,388 | 2015-03-09T00:01:00.000 | 0 | 0 | 0 | 0 | python,algorithm | 28,933,536 | 1 | false | 0 | 0 | It seems that optimal selling indices are those i such that
price[i-1] < price[i] and price[i+1] <= price[i] and, for some j > i, price[i] - price[j] > 2. I don't know of a name for an algorithm like that, but list comprehensions and the function any should be enough. | 1 | 1 | 1 | If I have a list of prices, say [4,2,5,6,9,3,1,2,5], and I have a transaction cost of $2, and I am able to buy and short sell, then the optimal strategy is to buy at 2, switch positions at 9, and switch again at 1. So the optimal buy indices are [1,6] and the optimal sell indices are [4]. How can this be solved programmatically? Specifically, I'm looking to be pointed in the right direction (e.g. "this is a perfect case for A*" ... or whatever) rather than a solution. | How can I find the optimal buy and sell points for a stock if I have a transaction cost? | 0 | 0 | 0 | 717
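A minimal sketch of the criterion stated in the answer, using a list comprehension and any; the threshold of 2 is the transaction cost from the question:

```python
prices = [4, 2, 5, 6, 9, 3, 1, 2, 5]
cost = 2

# Local maxima from which the price later drops by more than the transaction cost.
sell_indices = [
    i for i in range(1, len(prices) - 1)
    if prices[i - 1] < prices[i]
    and prices[i + 1] <= prices[i]
    and any(prices[i] - prices[j] > cost for j in range(i + 1, len(prices)))
]
print(sell_indices)  # [4] for the example data
```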
28,936,333 | 2015-03-09T06:32:00.000 | 9 | 1 | 1 | 0 | python,evdev | 28,936,564 | 1 | true | 0 | 0 | Try to install it with pip3
sudo pip3 install evdev | 1 | 2 | 0 | I'm working with Python on a Raspberry Pi using the Raspbian operating system. I installed evdev-0.4.7 and it works fine for Python 2.7. But when I try it with Python 3.3 I get an error. Apparently it was only installed for Python 2.7.
How can I install evdev on Python 3.3 as well? | How can I install evdev on both Python 2.7 and Python 3.3? | 1.2 | 0 | 0 | 11,168 |
28,938,162 | 2015-03-09T08:59:00.000 | 5 | 0 | 0 | 0 | python,pydub | 28,943,471 | 1 | true | 0 | 0 | Your best bet is to grab chunks from the stream (I'd advise 50 millisecond chunks since one complete wave form at 20Hz is 50ms), and construct an AudioSegment using this data.
Once you've done that you'll be able to use the AudioSegment().dBFS property to get a rough measure of the average loudness of that chunk. Once you get a sense for where the highs and lows are you can set a threshold below which will be considered silence.
You can of course determine the silence threshold automatically as well, but that'll probably require keeping track of loudest and quietest signal level in the last X seconds, and probably using some kind of decay as well.
Note: The method I've described above is definitely not the fastest way to do this, but pydub does not natively handle streaming. That said, it's probably the simplest way to accomplish your goal with pydub. | 1 | 3 | 0 | I want to monitor an audio stream for silences. Any idea how I can do this? It's a stream, not an audio file. | How to detect silence in an audio stream with pydub? | 1.2 | 0 | 0 | 6,105
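A minimal sketch of the chunking approach described in the answer above. The raw-audio parameters (16-bit mono at 44.1 kHz) and the -40 dBFS threshold are assumptions you would tune for your stream:

```python
from pydub import AudioSegment

SILENCE_THRESHOLD_DBFS = -40  # assumed threshold; tune for your source
CHUNK_MS = 50                 # one full waveform at 20 Hz

def is_silent(raw_bytes):
    # Wrap the raw stream bytes in an AudioSegment
    # (assumed format: 16-bit samples, mono, 44.1 kHz).
    chunk = AudioSegment(data=raw_bytes, sample_width=2,
                         frame_rate=44100, channels=1)
    # dBFS is the average loudness of the chunk; pure silence is -inf.
    return chunk.dBFS < SILENCE_THRESHOLD_DBFS
```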
28,940,711 | 2015-03-09T11:18:00.000 | 5 | 0 | 0 | 0 | python,opencv,colors,python-2.x | 28,940,905 | 1 | true | 0 | 0 | JPEG is a lossy format; you need to save your images as PNG, which is a lossless format. | 1 | 1 | 1 | I am seriously wondering about the effects of the cv2.imwrite() function of OpenCV.
I noticed that when I read pictures with cv2.imread() and save them again with the cv2.imwrite() function, their quality is no longer the same to the human eye.
How can I keep the quality of the image the same as the original after saving it using the cv2.imwrite() function?
I ask this question because I have a really serious issue in a larger program, and when I checked the quality of the pictures saved by this function, I guessed that my problem certainly comes from it.
For example, I draw small red (Red=255) squares on a picture using mouse movements. When I save the picture and count the number of pixels whose red channel equals 255, I get only very few of them, even if I drew a lot of them in pure red. But when I check the image with my eyes, I notice the red pixels I drew are not saved in the correct red color I chose (255).
Does anyone know how to resolve this problem? I mean, to save the pictures using OpenCV without degrading their quality. | Not losing the quality of pictures saved with cv2.imwrite() | 1.2 | 0 | 0 | 2,752
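A minimal sketch of the lossless round-trip suggested in the answer above; the file names and drawn region are illustrative:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")

# Draw a pure-red square (OpenCV uses BGR channel order).
img[10:20, 10:20] = (0, 0, 255)

# PNG is lossless, so the exact pixel values survive the round trip.
cv2.imwrite("output.png", img)

saved = cv2.imread("output.png")
red_pixels = np.count_nonzero(saved[:, :, 2] == 255)
print("pixels with red channel == 255:", red_pixels)
```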
28,946,786 | 2015-03-09T16:16:00.000 | 2 | 0 | 1 | 0 | python,ipython | 28,947,864 | 1 | true | 0 | 0 | Change the type of cell to Raw NBConvert.
The code won't run, nor will it produce any output for that cell. | 1 | 1 | 0 | Imagine you have to run an IPython notebook over and over again, but you want to exclude specific cells every now and then. At the moment I'm handling this by commenting out the cells I want to exclude. It works, but it is quite tedious.
Is there an easier way to mute cells? | Is it possible to "mute" cells in ipython? | 1.2 | 0 | 0 | 682 |
28,947,894 | 2015-03-09T17:11:00.000 | 1 | 0 | 0 | 0 | python,asana | 28,948,207 | 1 | false | 1 | 0 | You actually can't filter by assignee_status at all - if you pass the parameter it is silently ignored. We could change it so that unrecognized parameters result in errors, which would help make this clearer. | 1 | 2 | 0 | When querying just for tasks that are marked for today in Python:
client.tasks.find_all({ 'assignee_status':'upcoming','workspace': 000000000,'assignee':'me' ,'completed_since':'now'}, page_size=100)
I get a response containing all tasks, the same as if I had not included assignee_status:
client.tasks.find_all({'workspace': 000000000,'assignee':'me' ,'completed_since':'now'}, page_size=100)
The workspace has around 5 tasks that are marked for today.
Thank you,
Greg | Asana API querying by assignee_status | 0.197375 | 0 | 1 | 269 |
28,949,290 | 2015-03-09T18:31:00.000 | 1 | 1 | 0 | 0 | java,python,intellij-idea,pycharm,ide | 66,222,468 | 8 | false | 1 | 0 | If there are any file watchers active (Preferences>Tools>File Watchers), make sure to check their Advanced Options. Disable any Auto-save files to trigger the watcher toggles.
This option supersedes the Autosave options from Preferences>Appearance & Behaviour>System Settings. | 2 | 78 | 0 | I have done a fair amount of googling about this question and most of the threads I've found are 2+ years old, so I am wondering if anything has changed, or if there is a new method to solve the issue pertaining to this topic.
As you might know, when using IntelliJ (I use 14.0.2), it often autosaves files. For me, when making a change in Java or JavaScript files, it's about 2 seconds before the changes are saved. There are options that one would think should have an effect on this, such as Settings > Appearance & Behavior > System Settings > Synchronization > Save files automatically if application is idle for X sec. These settings seem to have no effect for me though, and IntelliJ still autosaves, for example if I need to scroll up to remember how a method I am referencing does something.
This is really frustrating when I have auto-makes, which mess up Tomcat, or file watches via Grunt, Karma, etc., when doing JS development. Is there a magic setting that they've put in recently? Has anyone figured out how to turn off auto-save, or actually delay it? | Turning off IntelliJ Auto-save | 0.024995 | 0 | 0 | 59,393
28,949,290 | 2015-03-09T18:31:00.000 | 8 | 1 | 0 | 0 | java,python,intellij-idea,pycharm,ide | 37,813,276 | 8 | false | 1 | 0 | I think the correct answer was given as a comment from ryanlutgen above:
The behaviour of "auto-saving" your file is not due to the auto-save options mentioned.
IJ saves all changes to your build sources in order to automatically build the target.
This can be turned off in:
Preferences -> Build,Execution,Deployment -> Compiler -> Make project automatically.
Note: you now have to initiate the project build manually (e.g. by using an appropriate key shortcut).
(All other "auto-save" options just fine-tune the built-in auto-save behaviour.)
As you might know, when using IntelliJ (I use 14.0.2), it often autosaves files. For me, when making a change in Java or JavaScript files, it's about 2 seconds before the changes are saved. There are options that one would think should have an effect on this, such as Settings > Appearance & Behavior > System Settings > Synchronization > Save files automatically if application is idle for X sec. These settings seem to have no effect for me though, and IntelliJ still autosaves, for example if I need to scroll up to remember how a method I am referencing does something.
This is really frustrating when I have auto-makes, which mess up Tomcat, or file watches via Grunt, Karma, etc., when doing JS development. Is there a magic setting that they've put in recently? Has anyone figured out how to turn off auto-save, or actually delay it? | Turning off IntelliJ Auto-save | 1 | 0 | 0 | 59,393
28,951,588 | 2015-03-09T20:56:00.000 | 1 | 1 | 0 | 1 | python,linux,distutils,setup.py | 28,951,788 | 1 | false | 0 | 0 | The immediate solution is to invoke setup.py with --prefix=/the/path/you/want.
A better approach would be to include the data as package_data. This way it will be installed alongside your Python package and you'll find it much easier to manage (finding paths, etc.). | 1 | 0 | 0 | So I created a setup.py script for my Python program with distutils and I think it behaves a bit strangely. First off, it installs all data_files into /usr/local/my_directory by default, which is a bit weird since this isn't a really common place to store data, is it?
I changed the path to /usr/share/my_directory/. But now I'm not able to write to the database inside that directory, and I can't set the required permissions from within setup.py either, since the actual database file has not been created when I run it.
Is my approach wrong? Should I use another tool for distributing?
Because at least for Linux, writing a simple setup sh script seems easier to me at the moment. | distutils setup script under linux - permission issue | 0.197375 | 0 | 0 | 36 |
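A minimal sketch of the package_data approach suggested in the answer above; the package and file names are illustrative:

```python
from distutils.core import setup

setup(
    name="myprogram",
    version="1.0",
    packages=["myprogram"],
    # Ship data files inside the package instead of via data_files,
    # so they install alongside the code and are easy to locate.
    package_data={"myprogram": ["data/*.db", "data/*.txt"]},
)
```

At runtime the files can then be found relative to the package itself, e.g. via os.path.dirname(myprogram.__file__), instead of a hard-coded system path.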
28,952,282 | 2015-03-09T21:41:00.000 | 3 | 0 | 1 | 0 | python,sublimetext3,sublimerepl | 29,251,327 | 2 | false | 0 | 0 | I had the same problem when I installed REPL for the first time. Now, this could sound crazy, but the way to solve the problem (at least, the trick worked for me!) is to restart Sublime Text 3 once.
Update: As pointed out by Mark in the comments, apparently you may have to restart Sublime more than once to solve the problem. | 1 | 2 | 0 | I'm using REPL with Sublime Text 3 (latest version as of today) and I'm coding in Python 3.4. As far as I understand the documentation on REPL, if I do: Tools > SublimeREPL > Python > Python - RUN current file
then I should run the code I have typed in using REPL. However, when I do this I get an error pop-up saying:
FileNotFoundError(2, 'The system cannot find the file specified.',None,2)
I get this error whatever the code I typed in is (I tried print ("Hello World") on its own and also big long programs I've made before)
Can someone please help me with this and explain what the problem is, thanks :) | REPL error with Sublime Text 3 | 0.291313 | 0 | 0 | 15,816 |
28,952,285 | 2015-03-09T21:41:00.000 | 1 | 0 | 1 | 0 | python,flask,virtualenv | 28,953,236 | 2 | true | 1 | 0 | Make a blank virtualenv.
Try to run your program.
If there is an import error, install the relevant package, then go to (2) again.
You now have a virtualenv with just the packages that are required. Freeze that. | 2 | 0 | 0 | How can I get a clean virtualenv for my Flask application that contains nothing other than the dependencies my application needs?
I am using Ubuntu and I have a Flask application, and when I run the command pip freeze > requirements.txt the requirements file also picks up unnecessary packages.
This leads to a problem when uploading it to Heroku.
How do I resolve this? | Freezing application-specific dependencies in Python, virtualenv and pip | 1.2 | 0 | 0 | 509 |
28,952,285 | 2015-03-09T21:41:00.000 | 0 | 0 | 1 | 0 | python,flask,virtualenv | 44,601,365 | 2 | false | 1 | 0 | Another easy way of doing this would be to use pipreqs. What it basically does is generate a pip requirements.txt file based on the imports of any project.
Install pipreqs
pip install pipreqs
Then pipreqs /path/to/project
You will have your requirements.txt file generated in your project path. | 2 | 0 | 0 | How can I have a clean virtualenv for my flask application that contains nothing else than the dependencies of application needs?
I am using Ubuntu and I have a Flask application and when I run command pip freeze > requirements.txt the requirements file gets unnecessary files also
This leads to a problem when uploading it on heroku.
How do I resolve this? | Freezing application-specific dependencies in Python, virtualenv and pip | 0 | 0 | 0 | 509 |
28,952,404 | 2015-03-09T21:49:00.000 | 0 | 1 | 0 | 0 | python,django,twitter,django-allauth | 28,975,387 | 1 | false | 1 | 0 | I see now that I simply needed to define my app as "read & write" in the Twitter admin UI. | 1 | 0 | 0 | I'm trying to use django-allauth for Twitter sign-in in my Django app. I notice that when I do the sign-in process, Twitter says that this app will NOT be able to post tweets. I do want to be able to post tweets. How do I add this permission? | Using `django-allauth` to let users sign in with Twitter and tweet through the app | 0 | 0 | 0 | 120 |
28,952,672 | 2015-03-09T22:09:00.000 | 0 | 0 | 1 | 0 | python-3.x | 29,329,339 | 1 | true | 0 | 0 | As commented above, atexit from Python's stdlib does it. | 1 | 0 | 0 | Is there a way to register a function "logout" to be run if the Python interpreter exits for one reason or another? | How can I register a function to be run at exit? | 1.2 | 0 | 0 | 29
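A minimal sketch of the atexit approach from the answer above; the logout function is a hypothetical stand-in:

```python
import atexit

def logout():
    # Hypothetical cleanup; runs on normal interpreter exit
    # (not on os._exit() or a hard kill of the process).
    print("logging out...")

atexit.register(logout)
```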
28,954,229 | 2015-03-10T00:33:00.000 | 0 | 0 | 1 | 0 | python,ipython-notebook | 28,976,195 | 1 | true | 0 | 0 | I've asked developers about this in their Help Chat Room and here is the response:
no, export is one-way. There are no plans to support roundtrip
I hope this helps someone save some time. | 1 | 0 | 0 | While it is no problem to export to and import from a Python script with the v3 notebook format, I am having trouble finding a solution to do the same with the v4 format. I mean, exporting a notebook to a Python script is an easy win, but it looks like the reverse option does not exist. Could anyone push me in the right direction please? | Create notebook from previously saved python script (v4) | 1.2 | 0 | 0 | 34
28,955,418 | 2015-03-10T02:59:00.000 | 1 | 0 | 1 | 0 | python-2.7,pygame,pycharm | 28,969,013 | 2 | true | 0 | 0 | I will assume you already have an interpreter for PyCharm. To be able to let PyCharm recognize PyGame, you will need to download an interpreter that has PyGame installed with it. There is no other way. Maybe go to Google and find the right interpreter that has PyGame included within it. I hope this helps you! | 2 | 2 | 0 | I recently installed PyCharm, I installed PIP and tried to install Pygame. There wasn't just a "Pygame" package, but there were sound effects and other add-ons for it. What I really want is to install Pygame onto my PyCharm.
28,955,418 | 2015-03-10T02:59:00.000 | 2 | 0 | 1 | 0 | python-2.7,pygame,pycharm | 28,969,782 | 2 | false | 0 | 0 | My friend just showed me what I need to do. Apparently I just need to install it from the Pygame site's installer. Mystery solved. | 2 | 2 | 0 | I recently installed PyCharm, I installed PIP and tried to install Pygame. There wasn't just a "Pygame" package, but there were sound effects and other add-ons for it. What I really want is to install Pygame onto my PyCharm.
28,955,873 | 2015-03-10T03:50:00.000 | 1 | 0 | 0 | 0 | web2py,pythonanywhere | 28,963,419 | 1 | false | 1 | 0 | We don't have a good general solution for this. Our timeout is pretty long (3 minutes, I think). In general it's not a good idea to keep your users waiting at a loading page for minutes, because they're going to assume that something went wrong. Your best bet is probably to break the big task into smaller chunks and do each of the chunks in a separate request; then you can show your users a progress meter that updates as each request completes. | 1 | 2 | 0 | I am hosting a web2py application on PythonAnywhere. My problem is that the application is bound to take a few minutes to respond (because of data processing or a non-optimized implementation). During this time the page times out.
I get a message from PythonAnywhere that something went wrong and my application is taking more time than usual.
I want the framework to wait until the web2py function finishes (even if it takes minutes). Is this a setting I need to change in web2py, or is it something that I need to change in PythonAnywhere?
Thanks and Regards! | How to change web2py connection timeout on pythonanywhere? | 0.53705 | 0 | 0 | 387 |
28,957,258 | 2015-03-10T06:16:00.000 | 0 | 0 | 0 | 1 | python,twisted,twisted.internet | 31,752,497 | 1 | true | 0 | 0 | Use getProcessOutput('/bin/sh', ('-c', cmd)), where cmd is your shell command. Try it :-) | 1 | 1 | 0 | In Twisted, the getProcessOutput method can fetch the 'ps' shell command's output via getProcessOutput('ps', ['aux']) and returns a Deferred.
My question is how to run a command like "ps aux | grep 'some keyword' | awk '{...}'" through getProcessOutput. For example, getProcessOutput("ps aux | grep 'some keyword' | awk '{...}'").
Any suggestions would be appreciated. | twisted run local shell commands with pipeline | 1.2 | 0 | 0 | 700
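A minimal sketch of the accepted approach, running a shell pipeline through /bin/sh; the pipeline itself is illustrative:

```python
from twisted.internet import reactor
from twisted.internet.utils import getProcessOutput

cmd = "ps aux | grep 'python' | awk '{print $2}'"  # illustrative pipeline

d = getProcessOutput('/bin/sh', ('-c', cmd))

def print_output(output):
    print(output)
    reactor.stop()

def report_error(failure):
    print(failure.getErrorMessage())
    reactor.stop()

d.addCallbacks(print_output, report_error)
reactor.run()
```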
28,960,249 | 2015-03-10T09:37:00.000 | 2 | 0 | 1 | 0 | python,cryptography | 28,960,426 | 1 | false | 0 | 0 | Doing '\x00\x00\xff' or "0000ff".decode('hex') should work. | 1 | 0 | 0 | I am trying to carry out a padding oracle attack. I am aware that I have to modify the bytes from 00 to the point where it succeeds, to find the correct padding. But how do I represent 00-FF in Python? When I try representing it as part of the string, 00 is taken as 2 bytes.
P.S - This is a homework problem. | padding oracle attack - how to represent hexadecimal as one byte in python | 0.379949 | 0 | 0 | 289 |
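A minimal Python 2 sketch of stepping a single byte through all 256 values, as needed for the guessing step; the surrounding attack logic is omitted:

```python
# Python 2: a one-byte string for each value 0x00 through 0xFF.
for value in range(256):
    guess_byte = chr(value)        # e.g. chr(255) == '\xff'
    # ... splice guess_byte into the ciphertext block and query the oracle ...

# A fixed byte sequence can also be written from a hex literal:
block = "0000ff".decode('hex')     # == '\x00\x00\xff'
```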
28,960,995 | 2015-03-10T10:14:00.000 | 1 | 0 | 0 | 0 | python,caching,flask,memcached,flask-cache | 28,968,025 | 1 | true | 1 | 0 | It's not supported because Memcache is designed to be a distributed hash. There's no index of keys stored to search in.
Ideally you should know what suffixes a key may have.
If not, you could maintain an index yourself in a special key for the user.
Like user_id + '_keys' which contains a list of keys.
This way you can cycle key by key and delete all the cache for the user.
You can override the .set function to manage this new key. | 1 | 2 | 0 | I'm trying to delete all entries in the cache store that contain (in this case start with) a substring of the cache key, but I don't see any easy way of doing this. I'm using Memcache as backend.
If I understand the code correctly, I need to pass the full cache key when calling delete or delete_many. Is there any other way of doing this?
I'll explain what I'm trying to do in case there is a better way: I need to clear the cache for certain users when they modify their settings. Clearing the cache with clear() will remove the cache entries for all the users, which are some 110K, so I don't want to use that.
I am generating key_prefix with the ID of the user, the request's path, and other variables. The cache keys always start with the ID of the authenticated user. So ideally I would use something like delete_many(user_id + ".*") | Getting or deleting cache entries in Flask with the key starting with (or containing) a substring | 1.2 | 0 | 0 | 811 |
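A minimal sketch of the per-user key index suggested in the answer above, using a Flask-Cache cache object; the function and key names are illustrative:

```python
def cache_set_for_user(cache, user_id, key, value, timeout=None):
    # Store the value, and remember its key in a per-user index entry.
    cache.set(key, value, timeout=timeout)
    index_key = "%s_keys" % user_id
    keys = cache.get(index_key) or []
    if key not in keys:
        keys.append(key)
        cache.set(index_key, keys)

def cache_clear_for_user(cache, user_id):
    # Delete every key recorded for this user, then the index itself.
    index_key = "%s_keys" % user_id
    keys = cache.get(index_key) or []
    if keys:
        cache.delete_many(*keys)
    cache.delete(index_key)
```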
28,961,517 | 2015-03-10T10:39:00.000 | -1 | 0 | 0 | 1 | python,celery,celerybeat | 30,854,981 | 3 | false | 1 | 0 | The best idea is to create an implementation in which the task schedules itself again after completing. Also, create an entry lock so that the task cannot be executed multiple times at the same moment.
Trigger the execution once.
In this case,
you don't need a celerybeat process
the task is guaranteed to execute | 1 | 12 | 0 | If I create a celery beat schedule, using timedelta(days=1), the first task will be carried out after 24 hours, quote celery beat documentation:
Using a timedelta for the schedule means the task will be sent in 30 second intervals (the first task will be sent 30 seconds after celery beat starts, and then every 30 seconds after the last run).
But the fact is that in a lot of situations it's actually important that the scheduler run the task at launch. But I didn't find an option that allows me to run the task immediately after Celery starts. Am I not reading carefully, or is Celery missing this feature? | celery beat schedule: run task instantly when start celery beat? | -0.066568 | 0 | 0 | 5,063
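A minimal sketch of the self-rescheduling pattern from the answer above; the broker URL, interval, and work function are placeholders:

```python
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # placeholder broker URL

def do_work():
    pass  # hypothetical payload

@app.task(bind=True)
def periodic_job(self):
    try:
        do_work()
    finally:
        # The task re-schedules itself, so no celery beat process is needed,
        # and it runs as soon as it is first triggered.
        self.apply_async(countdown=24 * 60 * 60)

# Trigger the first run once, e.g. right after the workers start:
# periodic_job.delay()
```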
28,961,577 | 2015-03-10T10:41:00.000 | 2 | 0 | 0 | 0 | python,mysql,csv,hbase | 28,963,527 | 1 | false | 0 | 0 | Use a long to represent time (milliseconds), so you don't have to bother with date formatting/string encoding. It's space efficient and makes it much easier to perform range queries. | 1 | 1 | 0 | I have the following TimeStamp value: Wed Jun 25 09:18:15 +0000 2014.
I am writing a MapReduce program in Python that reads JSON objects from an Amazon S3 location and export it to a local CSV file. The CSV file will then export data to a MySQL and HBase database. I have about 200 million records (1 TB), so I need to optimize every processing step.
What data type should I use to store the TimeStamp value in Python, CSV, MySQL and HBase database? I need to store all aspects of the TimeStamp value. My schema has 4 columns in the CSV file, MySQL and HBase database tables.
Thanks! | Best way to store TimeStamp | 0.379949 | 1 | 0 | 444 |
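A minimal sketch of converting the Twitter-style timestamp from the question into epoch milliseconds, using only the standard library:

```python
from email.utils import parsedate_tz, mktime_tz

ts = "Wed Jun 25 09:18:15 +0000 2014"

# parsedate_tz copes with the trailing year and the +0000 offset;
# mktime_tz converts the parsed tuple to UTC epoch seconds.
epoch_millis = mktime_tz(parsedate_tz(ts)) * 1000
print(epoch_millis)
```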
28,965,230 | 2015-03-10T13:36:00.000 | 2 | 0 | 0 | 1 | python,shell,terminal,command,conemu | 28,966,597 | 1 | true | 0 | 0 | Apps+G groups input for all visible panes. | 1 | 3 | 0 | I am using the ConEmu Windows terminal emulator and I would like to run one simple command in multiple terminals at the same time. Is there any way to do that? | How to run one command on multiple terminals? | 1.2 | 0 | 0 | 1,086
28,967,747 | 2015-03-10T15:31:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,django-admin | 29,432,774 | 2 | true | 1 | 0 | I was using Django 1.6, which did not support overriding the get_fields method. I updated to 1.7 and this method worked perfectly. | 1 | 1 | 0 | I want to create a dynamic admin site that shows or hides each field based on whether it is blank. So I have a model with a set number of fields, but each individual entry will not contain all of the fields in my model, and I want to exclude a field when it is blank.
I have a unique bridge identifier, that correlates to each bridge, and then all of the various different variables that describe the bridge.
I have it set up now that the user will go to a url with the unique bridgekey and then this will create an entry of that bridge. So (as i am testing on my local machine) it would be like localhost/home/brkey and that code in my views.py that corresponds to that url is
However, not every bridge is the same and I have a lot more variables that I would like to include in my model but for now I am just testing on two : prestressed_concrete_deck and reinforced_concrete_coated_bars. What I want is to dynamically create the admin site to not display the prestressed_concrete_deck variable if that field is blank. So instead of displaying all of the variables on the admin site, I want to only display those variables if that bridge has that part, and to not display anything if the field is blank.
Another possible solution to the problem would be to get that unique identifier over to my admins.py. I can't figure out how to get that individual key over, as then I could query in the admins.py. If I knew how to access the bridgekey, I could just query dynamically in my admins.py. So how would I access the brkey for that entry in my admins.py (something like BridgeModel.brkey?)
I have tried several different things in my admin.py, and have tried the comments' suggestion of overriding the get_fields() method in my admin class, but I am probably getting the syntax wrong, and I am confused about what exactly the object it takes is. Is it the actual entry? Or is it the individual field? | Create a dynamic admin site | 1.2 | 0 | 0 | 84
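A minimal sketch of the get_fields override that works from Django 1.7 on; the field handling is an assumption about what "exclude when blank" should mean here, and the model name comes from the question:

```python
from django.contrib import admin

class BridgeAdmin(admin.ModelAdmin):
    def get_fields(self, request, obj=None):
        # obj is the Bridge instance being edited (None on the "add" form).
        fields = super(BridgeAdmin, self).get_fields(request, obj)
        if obj is not None:
            # Drop any field whose value is blank for this particular bridge.
            fields = [f for f in fields
                      if getattr(obj, f, None) not in (None, '')]
        return fields

# admin.site.register(BridgeModel, BridgeAdmin)  # BridgeModel is the question's model
```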
28,970,289 | 2015-03-10T17:29:00.000 | 2 | 0 | 0 | 0 | python,boto,priority-queue,amazon-sqs | 57,454,595 | 4 | false | 1 | 0 | By "when a msg fails", if you meant "processing failure", then you could look into the Dead Letter Queue (DLQ) feature that comes with SQS. You can set the receive count threshold to move the failed messages to the DLQ. Each DLQ is associated with an SQS queue.
In your case, you could set "max receive count" = 1 and deal with that message separately. | 2 | 14 | 0 | I have a situation where a message fails and I would like to replay that message with the highest priority using the Python boto package, so it will be taken first. If I'm not wrong, SQS does not support priority queues, so I would like to implement something simple.
Important note: when a message fails I no longer have the message object; I only persist the receipt_handle so I can delete the message (if there were more than x retries) / change the visibility timeout in order to push it back to the queue.
Thanks! | How to implement a priority queue using SQS(Amazon simple queue service) | 0.099668 | 0 | 1 | 27,088
28,970,289 | 2015-03-10T17:29:00.000 | 20 | 0 | 0 | 0 | python,boto,priority-queue,amazon-sqs | 28,973,859 | 4 | true | 1 | 0 | I don't think there is any way to do this with a single SQS queue. You have no control over delivery of messages and, therefore, no way to impose a priority on messages. If you find a way, I would love to hear about it.
I think you could possibly use two queues (or more generally N queues where N is the number of levels of priority) but even this seems impossible if you don't actually have the message object at the time you determine that it has failed. You would need the message object so that the data could be written to the high-priority queue.
I'm not sure this actually qualifies as an answer 8^) | 2 | 14 | 0 | I have a situation where a message fails and I would like to replay that message with the highest priority using the Python boto package, so it will be taken first. If I'm not wrong, SQS does not support priority queues, so I would like to implement something simple.
Important note: when a message fails I no longer have the message object; I only persist the receipt_handle so I can delete the message (if there were more than x retries) / change the visibility timeout in order to push it back to the queue.
Thanks! | How to implement a priority queue using SQS(Amazon simple queue service) | 1.2 | 0 | 1 | 27,088 |
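A minimal sketch of the N-queue idea from the answer above, using boto with two queues standing in for two priority levels; the region, queue names, and handler are placeholders:

```python
import boto.sqs

conn = boto.sqs.connect_to_region("us-east-1")  # placeholder region
high = conn.get_queue("tasks-high")             # placeholder queue names
low = conn.get_queue("tasks-low")

def next_message():
    # Always drain the high-priority queue before touching the low one.
    for queue in (high, low):
        messages = queue.get_messages(num_messages=1)
        if messages:
            return queue, messages[0]
    return None, None

def process(body):
    print(body)  # hypothetical handler

queue, msg = next_message()
if msg is not None:
    process(msg.get_body())
    queue.delete_message(msg)
```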
28,971,180 | 2015-03-10T18:17:00.000 | 1 | 1 | 0 | 1 | python,ssh | 28,971,821 | 3 | true | 0 | 0 | If you want the Python script to exit, I think your best bet would be to continue doing something similar to what you're doing: print the credentials in the form of arguments to the ssh command and run python myscript.py | xargs ssh. As tdelaney pointed out, though, subprocess.call(['ssh', args]) will let you run the ssh shell as a child of your Python process, causing Python to exit when the connection is closed. | 1 | 1 | 0 | I am writing a little script which picks the best machine out of a few dozen to connect to. It gets a user's name and password, and then picks the best machine and gets a hostname. Right now all the script does is print the hostname. What I want is for the script to find a good machine and open an ssh connection to it with the user's provided credentials.
So my question is how do I get the script to open the connection when it exits, so that when the user runs the script, it ends with an open ssh connection.
I am using sshpass. | Open SSH connection on exit Python | 1.2 | 0 | 1 | 1,186 |
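A minimal sketch of the subprocess approach mentioned in the answer, combined with sshpass as in the question; the host selection and credentials are placeholders:

```python
import subprocess

def connect(user, password, host):
    # sshpass feeds the password non-interactively; the script blocks
    # here until the user's ssh session ends.
    return subprocess.call(
        ["sshpass", "-p", password, "ssh", "%s@%s" % (user, host)]
    )

best_host = "host42.example.com"  # placeholder: result of the selection logic
connect("alice", "secret", best_host)
```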
28,972,157 | 2015-03-10T19:11:00.000 | 2 | 0 | 0 | 0 | python,html,python-2.7 | 28,973,712 | 2 | true | 1 | 0 | Store everything in a database, e.g. SQLite, MySQL, MongoDB, Redis, ...,
then query the db every time you want to display the data.
This is good for changing it later from multiple sources.
Store everything in a "flat file": SQLite, XML, JSON, msgpack.
Again, open and read the file whenever you want to use the data,
or read it in completely on startup.
Simple and often fast enough.
Generate an HTML file from your list with a template engine, e.g. Jinja, and save it as an HTML file.
Good for simple hosting providers.
There are some good Python web frameworks out there; some I have used:
Flask, Bottle, Django, Twisted, Tornado.
They all more or less output HTML.
Feel free to use HTML5/DHTML/JavaScript.
You could use a web framework to create/use an "API" on the backend, which serves JSON or XML.
Then your JavaScript callback will display it on your site. | 1 | 1 | 0 | I wrote a script that scrapes various things from around the web and stores them in a Python list, and I have a few questions about the best way to get it into an HTML table to display on a web page.
First off should my data be in a list? It will at most be a 25 by 9 list.
I’m assuming I should write the list to a file for the web site to import? Is a text file preferred or something like a CSV, XML file?
What's the standard way to import a file into a table? In my quick look around the web I didn't see an obvious answer (major web design beginner). Is JavaScript the best thing to use? Or can Python write out something that can easily be read by HTML?
Thanks | Best way to import a Python list into an HTML table? | 1.2 | 0 | 1 | 1,914 |
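A minimal sketch of turning a list of rows into an HTML table with plain Python string formatting; the data is illustrative, and a template engine such as Jinja would scale better:

```python
header = ["Name", "Age", "City"]
rows = [
    ["Alice", 30, "NYC"],
    ["Bob", 25, "LA"],
]

def html_table(header, rows):
    head = "".join("<th>%s</th>" % h for h in header)
    body = "".join(
        "<tr>%s</tr>" % "".join("<td>%s</td>" % cell for cell in row)
        for row in rows
    )
    return "<table><tr>%s</tr>%s</table>" % (head, body)

with open("table.html", "w") as f:
    f.write(html_table(header, rows))
```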
28,972,266 | 2015-03-10T19:17:00.000 | 0 | 0 | 0 | 0 | fonts,wxpython,label,spacing | 28,973,871 | 1 | false | 0 | 1 | No, I don't believe this is a feature of any of the text controls or their variants. You could set the wx.Font to a monospace font, but that's probably about as good as you can get going the "automatic" way. I would recommend drawing it yourself if that's really what you need. | 1 | 0 | 0 | Is there a method to set the "letter spacing" for a wx.TextCtrl? (or another widget where I can show a small phrase)
Or is a property of the wx.Font?
Or is a inherent property of the font face I'm using?
A work-around I could implement is to write my custom control (maybe derived from RichTextCtrl) where I draw each character one by one, and add the selected spacing between them. But then it would be a lot less efficient to calculate (giving an example) the area of the text. This is rather simple with a ClientDC. | Letter spacing in wxpython widgets | 0 | 0 | 0 | 83 |
28,972,637 | 2015-03-10T19:38:00.000 | 0 | 0 | 1 | 0 | c#,python,installshield,patch,software-distribution | 28,975,982 | 2 | false | 0 | 1 | A patch probably won't help you. If the locations are fixed within the install, a minor upgrade could do the trick, if you make all the files that need to stay the same "never overwrite" (unless the custom action ignores this, then things might get difficult).
If the locations are determined during the execution of the custom action, or the locations are based on user input during the install, then you have a problem if you haven't saved the location paths (in the registry, for example). I don't think you want a custom action to scan all the drives of the computer just to find the files.
If the files are put in their new locations by the custom action, windows installer probably won't see them as key files and probably won't "repair" them in any scenario. If the fixes are few, you might be better of distributing the files separately with a clear instruction, in stead of spending many hours on a difficult new custom action.
Cheers, B. | 1 | 0 | 0 | This is the situation I have:
I created an installer. It has python scripts, executable, and other file types. In the installer, I run a C# executable as a custom action, after registering the product. The C# executable moves the files into different locations (i.e. a text document will be moved to My Documents). I understand I can do this without the custom actions, but I was not aware of that when I created the installer.
Now, after I have distributed the software, users are running into small bugs. For instance, there is a bad if check in one of the Python scripts.
Question: Is there a way to fix the portion of the Python script/executable/text document that is broken, and simply update those files (without having to redistribute the software to the users and having them reinstall it)? | Upgrading Application using InstallShield | 0 | 0 | 0 | 52
28,974,896 | 2015-03-10T22:00:00.000 | 1 | 1 | 0 | 0 | python,twitter,tweepy | 28,977,051 | 1 | true | 0 | 0 | Twitter probably has limits on their api and will most likely block your api key if they feel that you are spamming. In fact I would bet there is a maximum number of tweets per day depending on the type of developer account.
For stability and uptime concerns, running on a 'personal' computer is not a good idea. You probably want to do other things on your personal computer that may interrupt your bot's service (like installing programs/updates and restarting). As far as load on the CPU goes, if it's only picking up 10 tweets per 5 minutes, that doesn't seem like any kind of load you need to worry about. To be sure, you could run the top command and check out the CPU and memory usage.
If you have a server somewhere, like at DigitalOcean, then I would run it there just to reduce the interruptions the program experiences.
I ran a similar program using twitters stream api and collected tweets using a personal computer and the interruptions were annoying and I eventually stopped collecting data.... | 1 | 0 | 0 | I have a python script that scans for new tweets containing specified #hashtags, then it posts them to my "python bot's" twitter account as new tweets.
I tested it from the python console and let it run for 5 minutes. It managed to pick up 10 tweets matching my criteria. It works flawlessly, but I'm concerned about performance issues and leaving the script running for extended amounts of time.
What are the negative effects of leaving this script running on my personal computer for a whole day or more?
Should I be running this on a digital ocean VPS instead?
Twitter offers the API for bot creation, but do they care how much a bot tweets? I don't see how this is any different from retweeting. | Running python tweepy listener | 1.2 | 0 | 1 | 399 |
28,975,648 | 2015-03-10T22:59:00.000 | 1 | 0 | 1 | 0 | python-2.7,abstract-syntax-tree,python-3.4 | 29,026,381 | 1 | true | 0 | 0 | Turns out it is not possible to use different versions of AST parsers in python to the best of my knowledge. (It is still possible to parse them separately by carrying out multiple iterations each time using a different version AST) | 1 | 0 | 0 | I was using ast module of python3.4 to get the imports and function calls within a file.
It works correctly if I run the code on a file which has Python 3.4 syntax, but it throws an exception if I try to parse a file of the older Python 2.7 version (because of print statements, except statements which have a "," etc.).
Is there a way to force ast to use the Python 2.7 grammar when dealing with old files and the Python 3.4 grammar when dealing with Python 3.4 files?
Is there any other way to resolve this issue? | Python ast parsing exception | 1.2 | 0 | 0 | 437
28,976,912 | 2015-03-11T01:09:00.000 | 0 | 0 | 1 | 0 | python,nose,coverage.py | 28,999,167 | 4 | false | 0 | 0 | It sounds like you have tests that run your code, and then your code uses argparse which implicitly pulls arguments from sys.argv. This is a bad way to structure your code. Your code under test should be getting arguments passed to it some other way so that you can control what arguments it sees.
This is an example of why global variables are bad. sys.argv is a global, shared by the entire process. You've limited the modularity, and therefore the testability, of your code by relying on that global. | 1 | 5 | 0 | I want to use nose and coverage in my project. When I run nose with the --with-coverage argument, my program's argument-parsing module goes nuts because "--with-coverage" isn't a real argument according to it.
How do I turn argparse off, but only during testing? Nose says all my tests fail because of the bad argument. | How to use nosetests in python while also passing/accepting arguments for argparse? | 0 | 0 | 0 | 916
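A minimal sketch of the restructuring the answers suggest: let the parser accept an explicit argument list so tests control exactly what it sees. The main function and the flag are illustrative:

```python
import argparse

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--verbose", action="store_true")
    # parse_args(None) falls back to sys.argv[1:]; tests pass an explicit
    # list instead, so nose's own flags never reach the parser.
    return parser.parse_args(argv)

if __name__ == "__main__":
    main()

# In a test: main([]) or main(["--verbose"])
```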
28,979,898 | 2015-03-11T06:21:00.000 | 2 | 0 | 1 | 1 | python,pip | 36,624,566 | 3 | false | 0 | 0 | Use the command prompt shortcut provided from installing the MSI.
This will launch the prompt with VCVarsall.bat activated for the targeted environment.
Depending on your installation, you can find this in the Start Menu under All Programs -> Microsoft Visual C++ For Python -> then pick the command prompt based on x64 or x86.
Otherwise, press Windows Key and search for "Microsoft Visual C++ For Python". | 1 | 1 | 0 | I downloaded Microsoft Visual C++ Compiler for Python 2.7 and it installed in
C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat
However, I am getting the "Unable to find vcvarsall.bat" error when attempting to install "MySQL-python".
I added C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0; to my Path.
I am using python 2.7.8 | How can i point pip to VCForPython27 in order to prevent "Unable to find vcvarsall.bat" error | 0.132549 | 0 | 0 | 3,358 |
28,980,250 | 2015-03-11T06:49:00.000 | 0 | 0 | 1 | 0 | python,powershell,scripting-language | 28,982,165 | 1 | true | 0 | 0 | The easiest might be to encode the object as a JSON string in Python, and then convert it to a PowerShell object with ConvertFrom-Json. | 1 | 1 | 0 | I want to have an integration between my python scripts and my powershell scripts.
For that I need the option to pass an object of a class I made, so PowerShell can work with it.
That class contains string properties and one object property that contains either None or a reference to an object of another class I made that contains only string properties.
What is the best way to do that task? Thanks. | Passing objects from python to powershell | 1.2 | 0 | 0 | 1,345 |
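A minimal sketch of the Python side of the JSON hand-off described in the answer; the classes are illustrative, and on the PowerShell side the output would be read with ConvertFrom-Json:

```python
import json

class Inner(object):
    def __init__(self):
        self.detail = "only string properties here"

class Outer(object):
    def __init__(self):
        self.name = "example"
        self.child = Inner()  # may also be None

obj = Outer()
# Serialize the object graph via __dict__; a None child becomes JSON null.
print(json.dumps(obj, default=lambda o: o.__dict__))
```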
28,981,883 | 2015-03-11T08:41:00.000 | 2 | 0 | 0 | 0 | python,django | 28,981,950 | 1 | true | 1 | 0 | Hm, when context processor introduces variable into context then this variable is available in all project's templates.
So you don't need to add the variables to each of the context processors; a single processor will do the job. | 1 | 2 | 0 | I have a Django project in which I have 4 applications. I am able to use custom processors (of course each app has its own context processor) to pass application-level common variables to templates. But when I need to pass the same context variable to all templates in all apps (variables common to all applications), I am just adding these context variables to each of the context processors individually. Is there any other way to pass a context variable to all templates in all apps, without having to add it to each context processor? | Django context processors and a Django project with multiple applications | 1.2 | 0 | 0 | 168
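A minimal sketch of a single project-wide context processor; the module path and variable are illustrative, and the registration syntax is the TEMPLATE_CONTEXT_PROCESSORS style used around Django 1.7:

```python
# myproject/context_processors.py
def global_settings(request):
    # Keys returned here become available in every template of every app.
    return {"SITE_NAME": "My Site"}

# settings.py (Django 1.7-era style)
TEMPLATE_CONTEXT_PROCESSORS = (
    "django.contrib.auth.context_processors.auth",
    "myproject.context_processors.global_settings",
)
```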
28,985,490 | 2015-03-11T11:30:00.000 | 6 | 0 | 0 | 0 | python,opencv,image-processing | 49,022,627 | 3 | false | 0 | 0 | img[x,y]=[255, 255, 255] is wrong because opencv img[a,b] is a matrics then you need to change x,y then you must use img[y,x]
The actual mistake is in the order of x and y.
If you want to change the color of point (x, y), use this >> img[y,x] = color | 1 | 4 | 1 | I need to color a pixel in an image. I use OpenCV and Python.
I tried img[x,y]=[255 255 255] to color the pixel (x,y) but it won't work :(
Is there any mistake in this?
Can you suggest any method?
Thanks in advance. | Color a pixel in python opencv | 1 | 0 | 0 | 24,622 |
28,990,639 | 2015-03-11T15:24:00.000 | 10 | 0 | 1 | 1 | python,homebrew | 29,003,811 | 1 | false | 0 | 0 | Use pip3. The "caveats" text you see when you run brew info python3 was printed for you after python3 was installed; that text is frequently helpful! It reads:
You can install Python packages with
pip3 install <package>
They will install into the site-package directory
/usr/local/lib/python3.4/site-packages | 1 | 8 | 0 | I just finished installing the latest stable version of python via Homebrew.
$ brew install python3
Everything works fine. I would like to install packages, for example PyMongo.
I don't have pip.
$ pip
-bash: pip: command not found
and there is no Homebrew formula for it:
$ brew install PyMongo
Error: No available formula for pymongo
Searching formulae...
Searching taps...
Any idea what's the best way to install PyMongo on OS X when Python was installed via Homebrew? Thank you! | how to install python packages for brew installed pythons | 1 | 0 | 0 | 12,370
28,991,015 | 2015-03-11T15:39:00.000 | 8 | 1 | 1 | 0 | python,python-3.x,python-3.4,delete-file | 58,837,642 | 16 | false | 0 | 0 | From the project directory type the following:
Deleting all .pyc files
find . -path "*/*.pyc" -delete
Deleting all .pyo files:
find . -path "*/*.pyo" -delete
Finally, to delete all '__pycache__', type:
find . -path "*/__pycache__" -type d -exec rm -r {} ';'
If you encounter permission denied error, add sudo at the begining of all the above command. | 3 | 252 | 0 | What is the BEST way to clear out all the __pycache__ folders and .pyc/.pyo files from a python3 project. I have seen multiple users suggest the pyclean script bundled with Debian, but this does not remove the folders. I want a simple way to clean up the project before pushing the files to my DVS. | Python3 project remove __pycache__ folders and .pyc files | 1 | 0 | 0 | 212,604 |
28,991,015 | 2015-03-11T15:39:00.000 | 9 | 1 | 1 | 0 | python,python-3.x,python-3.4,delete-file | 56,165,314 | 16 | false | 0 | 0 | Using PyCharm
To remove Python compiled files
In the Project Tool Window, right-click a project or directory, where Python compiled files should be deleted from.
On the context menu, choose Clean Python compiled files.
The .pyc files residing in the selected directory are silently deleted. | 3 | 252 | 0 | What is the BEST way to clear out all the __pycache__ folders and .pyc/.pyo files from a python3 project. I have seen multiple users suggest the pyclean script bundled with Debian, but this does not remove the folders. I want a simple way to clean up the project before pushing the files to my DVS. | Python3 project remove __pycache__ folders and .pyc files | 1 | 0 | 0 | 212,604 |
28,991,015 | 2015-03-11T15:39:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,python-3.4,delete-file | 48,244,930 | 16 | false | 0 | 0 | Why not just use rm -rf __pycache__? Run git add -A afterwards to remove them from your repository and add __pycache__/ to your .gitignore file. | 3 | 252 | 0 | What is the BEST way to clear out all the __pycache__ folders and .pyc/.pyo files from a python3 project. I have seen multiple users suggest the pyclean script bundled with Debian, but this does not remove the folders. I want a simple way to clean up the project before pushing the files to my DVS. | Python3 project remove __pycache__ folders and .pyc files | 0 | 0 | 0 | 212,604 |
28,992,141 | 2015-03-10T19:17:00.000 | 2 | 0 | 0 | 0 | python,blender | 29,032,639 | 2 | false | 0 | 0 | While most import/export operators can be found in bpy.ops, such as bpy.ops.import_mesh.obj(), the COLLADA import/export operators are under bpy.ops.wm. The importer is bpy.ops.wm.collada_import(filepath="").
If you're automating the import of many files, you will want to use bpy.ops.wm.save_mainfile(filepath="") to save each one as you go. | 1 | 4 | 0 | I have a project in which I have to import different files into Blender. I am just a beginner with Blender and its Python API. I am looking for a way to import a .dae file into Blender using a Python script. So far I've been unsuccessful in finding an import module in Python for Blender.
Can anyone point me in the right direction? | Importing COLLADA(.dae) file into blender using python | 1 | 0 | 0 | 3,909 |
28,993,486 | 2015-03-11T17:34:00.000 | 0 | 0 | 0 | 1 | python,port,popen,kill | 28,993,604 | 1 | false | 0 | 0 | ok I've got it seems like close_fds=True while doing Popen solves the issue. | 1 | 0 | 0 | When my python program is killed with -9 normally it also closes the port it's listening.
BUT when it has child processes running that were created with Popen (which I don't really need to kill when killing the parent), killing the parent with -9 seems to leave the port in use.
How can I force to close the port even if there are children? | Close port on killing python the process with children | 0 | 0 | 0 | 352 |
28,994,041 | 2015-03-11T17:59:00.000 | 1 | 0 | 1 | 0 | python-3.x,redhat,ipython-notebook | 55,079,465 | 4 | false | 0 | 0 | I had the same problem right now on my Linux machine and up to date browser and even with a disabled uBlock. I was port-forwarding from a remote machine to my laptop while running Jupyter on the remote machine. Logging out of the ssh port-forwarding session (went to htop and killed the process) helped. | 3 | 16 | 0 | platform: redhat x64, installed ipython notebook 3.0 through pyvenv-3.4
When I open a notebook, it always shows "kernel starting, please wait...".
But I can open IPython console.
Please help, thanks! | IPython notebook always shows "kernel starting, please wait..." | 0.049958 | 0 | 0 | 17,039 |
28,994,041 | 2015-03-11T17:59:00.000 | 8 | 0 | 1 | 0 | python-3.x,redhat,ipython-notebook | 29,863,272 | 4 | false | 0 | 0 | I had this problem on multiple systems, and changing/updating browser solved it (as already written by OP). | 3 | 16 | 0 | platform: redhat x64, installed ipython notebook 3.0 through pyvenv-3.4
When I open a notebook, it always shows "kernel starting, please wait...".
But I can open IPython console.
Please help, thanks! | IPython notebook always shows "kernel starting, please wait..." | 1 | 0 | 0 | 17,039 |
28,994,041 | 2015-03-11T17:59:00.000 | 0 | 0 | 1 | 0 | python-3.x,redhat,ipython-notebook | 67,139,048 | 4 | false | 0 | 0 | For me, the issue was caused by ExpressVPN's whitelist (split-tunnel) feature interfering with Jupyter. I disabled split-tunnel and just used a browser VPN extension instead. FYI, NordVPN has a vaguely working split-tunnel feature whereas SurfShark, ExpressVPN and probably many others are complete fails. | 3 | 16 | 0 | platform: redhat x64, installed ipython notebook 3.0 through pyvenv-3.4
When I open a notebook, it always shows "kernel starting, please wait...".
But I can open IPython console.
Please help, thanks! | IPython notebook always shows "kernel starting, please wait..." | 0 | 0 | 0 | 17,039 |
28,994,857 | 2015-03-11T18:44:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn,cluster-analysis,unsupervised-learning | 28,997,147 | 3 | false | 0 | 0 | For the large matrix after TF/IDF transformation, consider using sparse matrix.
You could try different k values. I am not an expert in unsupervised clustering algorithms, but I bet with such algorithms and different parameters, you could also end up with a varied number of clusters. | 1 | 4 | 1 | I recently started working on Document clustering using SciKit module in python. However I am having a hard time understanding the basics of document clustering.
What I know ?
Document clustering is typically done using TF/IDF. Which essentially
converts the words in the documents to vector space model which is
then input to the algorithm.
There are many algorithms like k-means, neural networks, hierarchical
clustering to accomplish this.
My Data :
I am experimenting with linkedin data, each document would be the
linkedin profile summary, I would like to see if similar job
documents get clustered together.
Current Challenges:
My data has huge summary descriptions, which end up becoming 10000's
of words when I apply TF/IDF. Is there any proper way to handle this
high-dimensional data?
K-means and other algorithms require that I specify the no. of clusters
( centroids ), in my case I do not know the number of clusters
upfront. This, I believe, is completely unsupervised learning. Are
there algorithms which can determine the no. of clusters themselves?
I've never worked with document clustering before, if you are aware
of tutorials, textbooks or articles which address this issue, please
feel free to suggest.
I went through the code on the SciKit webpage; it consists of too many technical words which I do not understand. If you guys have any code with good explanations or comments, please share. Thanks in advance. | Document Clustering in python using SciKit | 0 | 0 | 0 | 6,083
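A minimal sketch of the TF-IDF plus clustering pipeline discussed above. TfidfVectorizer already returns a sparse matrix, and DBSCAN is one scikit-learn algorithm that does not need k upfront; the documents and all parameters here are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, DBSCAN

docs = [
    "software engineer python machine learning",
    "data scientist statistics python",
    "sales manager business development",
]

# max_features caps the dimensionality of a very large vocabulary.
vectorizer = TfidfVectorizer(stop_words="english", max_features=10000)
X = vectorizer.fit_transform(docs)  # sparse matrix, not dense

kmeans = KMeans(n_clusters=2).fit(X)                       # k chosen upfront
dbscan = DBSCAN(eps=1.0, min_samples=2).fit(X.toarray())   # no k needed
print(kmeans.labels_, dbscan.labels_)
```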
28,997,501 | 2015-03-11T21:15:00.000 | 0 | 0 | 1 | 0 | python,installation,pygame,pip | 29,056,557 | 2 | true | 0 | 1 | Thankyou so much for your help ventsyv, i finally figured it out. i believe it was an issue with the installer from pygames site. i found another site with a link to pygame and specifically for python 3.4.2. i now have no error messages and its working great. Thanks for putting up with me haha. | 1 | 1 | 0 | Sorry, thought i edited to say i am on windows 64 bit
When trying to install Pygame using pip, I get the following error:
"requirement 'pygame.whl' looks like a file name, but the file does not exist pygame.whl is not a valid wheel filename."
I have my file paths right and pip is working. I have attempted to install 32-bit and 64-bit Pygame but neither is working. In the command prompt I enter "pip install pygame.whl" (I renamed the file, which I don't think should matter, and it is saved under Downloads).
How can I resolve this error? | Error while installing pygame using pip for python 3.4 | 1.2 | 0 | 0 | 2,022 |
28,997,764 | 2015-03-11T21:31:00.000 | 1 | 0 | 1 | 1 | python,igraph,python-3.4 | 29,264,918 | 2 | false | 0 | 0 | As suggested by @Tamas, you should download wheel packages from the link and use pip to install them. | 1 | 0 | 0 | I am looking for python-igraph package for windows 64bits. I have installed python 3.4 and it seems that I can not find proper igraph installation package for it. I have crawled all webpages and still could not find what I am looking for.
Can anyone help me please?
Thanks | Python igraph for windows 64bit | 0.099668 | 0 | 0 | 852 |
28,999,934 | 2015-03-12T00:28:00.000 | 0 | 1 | 1 | 0 | python,performance,time | 29,001,172 | 2 | false | 0 | 0 | As far as I know, time Linux command simply asks kernel to provide information on the process that was run using it. Since kernel collects CPU and other information for all processes anyway, running time with a script doesn't impact the performance of the script itself (or its negligible)
The above answer is correct, but the measurement may not be repeatable: a script's time can be affected by the current load, the size of the data it is processing, the network, and so on.
I recommend using the Linux time command, since it provides a lot more information about the process. You can then run it under different loads and compare them. | 1 | 0 | 0 | I would like to run some experiments overnight and find out tomorrow morning how long it took for the experiments to finish executing. However, I would like to add as little overhead as possible so that the results from the experiments aren't affected too much by this extra timing.
I read from various resources that time script.py would be a good way to measure that. However I am not sure how time works and how much it can affect my experiments. | What is the best way to find how long it took for a python script to finish executing without adding any extra over head? | 0 | 0 | 0 | 49 |
29,000,873 | 2015-03-12T02:25:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi2 | 32,802,086 | 1 | false | 0 | 0 | You've written in a high-level language for specific hardware.
You could make your own ARM Cortex-A7 based board but it'll be cheaper to just buy another Pi.
If you want to make small, inexpensive devices, then you should use a lower-level language with a microcontroller, such as the Atmel AVR found in Arduinos. | 1 | 0 | 0 | I bought a Raspberry Pi and created a small remote-controlled truck, but I want to work on more projects; is there any way to store the Python file on a flash drive and connect it to some kind of CPU so the truck will still work, and I can use the Pi for other things and continue buying the small "CPU", uploading the Python code, and moving forward on different projects? | How do I take code from a raspberry pi and store it onto a smaller chip so I don't have to use the pi over and over again | 0 | 0 | 0 | 73
29,008,403 | 2015-03-12T11:20:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,openstack,openstack-swift | 29,079,748 | 1 | false | 0 | 0 | Its not a problem of the library but a limitation due to the Openstack Swift configuration where the "Workers" configuration in all Account/Container/Object config of Openstack Swift was set to 1
Regarding the library
When new connections are made using eventlet.green.httplib.HttpConnection,
it does not block.
But if requests are using the same connection, subsequent requests are blocked until the response is fully read. | 1 | 0 | 0 | Openstack-Swift is using evenlet.green.httplib for BufferedHttpconnections.
When I do performance benchmark of it for write operations, I could observer that write throughput drops even only one replica node is overloaded.
As I know write quorum is 2 out of 3 replicas, therefore overloading only one replica cannot affect for the throughput.
When I dig deeper what I observed was, the consequent requests are blocked until the responses are reached for the previous requests. Its mainly because of the BufferedHttpConnection which stops issuing new request until the previous response is read.
Why Openstack-swift use such a method?
Is this the usual behaviour of evenlet.green.httplib.HttpConnection?
This does not make sense in write quorum point of view, because its like waiting for all the responses not a quorum.
Any ideas, any workaround to stop this behaviour using the same library? | Why does Openstack Swift requests are blocked by eventlet.green.httplib? | 0 | 0 | 1 | 181 |
29,017,156 | 2015-03-12T18:05:00.000 | 1 | 0 | 0 | 0 | python,openerp,aptana,odoo | 29,017,479 | 1 | true | 1 | 0 | It could happen if you run your server in "run" mode and not "debug" mode.
If you are in "run" mode the breakpoints would be skipped.
In Aptana, go to "Run" -> "Debug" to run it in debug mode. | 1 | 0 | 0 | I have created an Odoo v8 PyDev project in Aptana. When I run the OpenERP server from Aptana and set a breakpoint in my file product_nk.py, the program does not stop at this breakpoint, although I navigated to the Odoo web pages whose functionality is linked to the code containing the breakpoint.
What am I possibly missing in the setup, and what do I need to do to have the program stop at the set breakpoint in the Python code? | OpenERP, Aptana - debugging Python code, breakpoint not working | 1.2 | 0 | 0 | 371 |
29,018,843 | 2015-03-12T19:40:00.000 | 1 | 0 | 1 | 0 | python,numpy,matrix,scipy | 29,026,455 | 1 | true | 0 | 0 | With standard dict methods you can get a list of the keys, and another list of the values. Pass the second to numpy.array and you should get a 100 x 7000 array. The keys list could also be made into an array, but it might not be any more useful than the list. The values array could be turned into a sparse matrix, but its size isn't exceptional, and arrays have more methods.
Tomorrow I can add sample code if needed. | 1 | 1 | 1 | I have a very large dictionary of the following format {str: [0, 0, 1, 2.5, 0, 0, 0, ...], str: [0, 0, 0, 1.1, 0, 0, ...], ...}. The number of elements for each str key can be very big so I need an effective way to store and make calculations over this data.
For example, right now my dict has 100 str keys. Each key has one value, which is a list of 7000 float elements. The lengths of the str keys and of the value lists are constant. So, let's say each str key is of length 5 and its value (which is a list) is of length 7000.
After some reading I found that scipy.sparse module has a nice collection of various matrices to store sparse data but scipy documentation is so sparse that I can barely understand what's going on.
Can you provide an example of how to convert the dictionary above to correct matrix type? | How to convert a sparse dict to scipy.sparse matrix in python? | 1.2 | 0 | 0 | 1,646 |
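Following the answer's outline, a minimal sketch of the conversion (the toy dict stands in for the real 100 x 7000 one; all value lists are assumed to be the same length, as the question states):
import numpy as np
from scipy import sparse

d = {'key01': [0, 0, 1, 2.5, 0], 'key02': [0, 1.1, 0, 0, 0]}

keys = list(d.keys())                 # row labels stay a plain list
values = np.array(list(d.values()))   # dense 2-D array, one row per key
m = sparse.csr_matrix(values)         # compressed sparse row matrix
print(m.shape)                        # (2, 5) here; (100, 7000) for the real data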
29,018,927 | 2015-03-12T19:45:00.000 | 0 | 0 | 0 | 0 | python,session,authentication,two-factor-authentication | 29,021,959 | 1 | false | 1 | 0 | The fact is that I forgot REST is stateless. You can't share a session across two web service calls. | 1 | 0 | 0 | I have two applications: one has the web API, and the other application uses it to authenticate itself.
The way 2FA is implemented in my application is: first get the username and password, then authenticate. After authenticating, I send the username and session key. If I get the correct mobile passcode, username, and session key back, the application authenticates a second time.
Now the problem: it works when I use the Postman Chrome plugin to test the 2FA. However, if I use the second application to authenticate, it fails.
When I debug through the code, I find it breaks at the session variables; I get a KeyError. I assume that the session is empty when I try to authenticate a second time from the application.
I am confused why it works from the Postman plugin but not from the second application. | two factor authentication doesn't work when it is accessed from the application | 0 | 0 | 1 | 88 |
29,019,387 | 2015-03-12T20:15:00.000 | 0 | 0 | 1 | 0 | macos,python-3.x,ipython | 30,279,651 | 2 | false | 0 | 0 | Check in the hidden directory ".ipynb_checkpoints" inside of the directory that used to hold the notebook.
If you had recently been running the notebook prior to deleting it, you may be able to find a recent copy of it saved at the last "checkpoint". | 2 | 1 | 0 | Just installed iPython/Jupyter and accidentally deleted pictures from a folder that was living on my desktop. I don't know how to undo what I just deleted and can't seem to find any of the pictures in my trash. Is there any way I can recover them? My instance of iPython/Jupyter is still open.
Thanks. | Accidentally deleted a folder's contents in iPython/Jupyter and can't recover it from trash on Mac OSX Yosemite. Is there any way I can get it back? | 0 | 0 | 0 | 3,792 |
29,019,387 | 2015-03-12T20:15:00.000 | 1 | 0 | 1 | 0 | macos,python-3.x,ipython | 29,019,518 | 2 | false | 0 | 0 | No, you can't easily recover the files. The files are gone. Your option is to restore from a backup, or use a data recovery tool of some sort. | 2 | 1 | 0 | Just installed iPython/Jupyter and accidentally deleted pictures from a file that was living on my desktop. I don't know how to undo what I just deleted and can't seem to find any of the pictures in my trash. Is there anyway I can recover them? My instance of iPython/Jupyter is still open.
Thanks. | Accidentally deleted a folder's contents in iPython/Jupyter and I can recover it from trash on Mac OSX Yosemite. Is there anyway I can get it back? | 0.099668 | 0 | 0 | 3,792 |
29,021,532 | 2015-03-12T22:31:00.000 | 2 | 0 | 1 | 0 | python,ide,pycharm,freeze | 29,038,659 | 3 | false | 0 | 0 | I also had problems with the update. I uninstalled Pycharm, then I installed 4.0.5. I haven't had any problems since. | 2 | 3 | 0 | I've been using PyCharm 4.0.4 (community edition) for a while without problems and just updated to PyCharm 4.0.5. When trying to modify a project that I had developed in 4.0.4, the IDE suddenly hangs even though I'm simply typing into the editor/adding a comment/etc. I'm not even attempting to run the program. The IDE hangs after about 2-3 minutes of having PyCharm open. Two questions here:
Any tips/suggestions/insights on why the issue is occurring or a potential solution to the problem?
If no to the above, is there a straightforward way to revert to the previous version?
Thanks in advance for your help. | Pycharm upgrade to 4.0.5 causes issues | 0.132549 | 0 | 0 | 684 |
29,021,532 | 2015-03-12T22:31:00.000 | 1 | 0 | 1 | 0 | python,ide,pycharm,freeze | 29,061,526 | 3 | false | 0 | 0 | I had the same trouble with the release 4.0.5, updated from 4.0.4 on OS X 10.6.8. I resolved the issue by setting the application "PyCharm CE.app" to run in 32-bit mode (selecting the file and using "Get Info" to enable the check box "Open in 32-bit mode").
It works normally in 32-bit mode. | 2 | 3 | 0 | I've been using PyCharm 4.0.4 (community edition) for a while without problems and just updated to PyCharm 4.0.5. When trying to modify a project that I had developed in 4.0.4, the IDE suddenly hangs even though I'm simply typing into the editor/adding a comment/etc. I'm not even attempting to run the program. The IDE hangs after about 2-3 minutes of having PyCharm open. Two questions here:
Any tips/suggestions/insights on why the issue is occurring or a potential solution to the problem?
If no to the above, is there a straightforward way to revert to the previous version?
Thanks in advance for your help. | Pycharm upgrade to 4.0.5 causes issues | 0.066568 | 0 | 0 | 684 |
29,026,709 | 2015-03-13T07:13:00.000 | -4 | 1 | 0 | 0 | python,python-2.7,datetime,ftp,ftplib | 29,468,092 | 2 | false | 0 | 0 | When I want to change a file's modification time, I use an FTP client on the console.
Log on to the remote FTP server: ftp ftp.dic.com
Use cd commands to go to the correct directory.
Use the SITE command to switch to extended command mode.
UTIME somefile.txt 20050101123000 20050101123000 20050101123000 UTC
This sets the access time, modification time, and creation time of somefile.txt to 2005-01-01 12:30:00 UTC.
Complete example:
site UTIME somefile.txt 20150331122000 20150331122000 20150331122000 UTC
Now sit back and enjoy a pleasant journey through time :) | 1 | 22 | 0 | I'm trying to load a CSV file to Amazon S3 with Python, and I need to know the CSV file's modification time. I'm using ftplib to connect to FTP with Python (2.7). | How to get FTP file's modify time using Python ftplib | -1 | 0 | 1 | 24,718 |
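For what the question actually asks, reading (not changing) a file's modification time with ftplib, a hedged sketch using the standard MDTM command (host, credentials, and filename are placeholders):
from ftplib import FTP

ftp = FTP('ftp.example.com')
ftp.login('user', 'password')
resp = ftp.sendcmd('MDTM myfile.csv')  # server replies e.g. '213 20150316120000'
mtime = resp.split()[1]                # 'YYYYMMDDHHMMSS', usually in UTC
ftp.quit()
print(mtime)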
29,031,089 | 2015-03-13T11:29:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,qpython,qpython3 | 30,599,421 | 1 | false | 0 | 1 | I had that issue on Android 4.4.4. Hit Run in the editor; then, in the console it brings up, cd to the directory the program is in and use the command 'python file.py' to start the program. It's a little annoying, but it's better than not being able to run programs at all. This problem disappeared for me on 5.1.1. | 1 | 0 | 0 | I'm trying to run scripts on qpython3 (Android 5.0.1), and when executing the scripts nothing happens, not even with the example scripts that are installed with the app. Interactive mode/running commands in the console does work; only scripts don't. What could be the reason? | Scripts won't run on QPython3 | 0.197375 | 0 | 0 | 1,329 |
29,031,509 | 2015-03-13T11:50:00.000 | 1 | 0 | 0 | 0 | python,excel | 29,122,089 | 1 | true | 0 | 0 | I am currently making excel file using xlswriter but it seems there is no option for duplex printing in the document.
Duplex printing is a function of the printer, not the document, so it cannot be controlled by XlsxWriter. | 1 | 0 | 0 | I am trying to find a way to configure my Excel sheet for duplex printing using Python.
I am currently making the Excel file using XlsxWriter, but it seems there is no option for duplex printing in the document.
I could modify the Excel file using some other code to enable duplex printing just before I print.
Does anyone have a way to enable duplex printing of an existing document using Python? | Is there a way to allow duplex printing in excel using python? | 1.2 | 0 | 0 | 113 |
29,032,051 | 2015-03-13T12:17:00.000 | 0 | 0 | 0 | 1 | python | 29,032,085 | 3 | false | 0 | 0 | You could use an absolute, as opposed to relative, file path to your script. | 1 | 2 | 0 | I am using subprocess.call like below:
subprocess.call(['sudo ./order_fc_prioritizer/run.sh'])
But it's saying "No such file or directory". | How to run a shell script placed in different folder from python | 0 | 0 | 0 | 1,696 |
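Two things likely trip up the original call: the command and its argument are jammed into one list element, and the path is relative to wherever Python was started. A hedged fix (the absolute path is a placeholder):
import subprocess

# Each argument is its own list element, and the script path is absolute.
subprocess.call(['sudo', '/home/user/project/order_fc_prioritizer/run.sh'])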
29,039,798 | 2015-03-13T18:49:00.000 | 1 | 0 | 0 | 0 | python,django,node.js,security,express | 29,039,868 | 2 | false | 1 | 0 | Try using Python Flask for small projects. I'm assuming the project is small because node.js is usually used for real-time updates, live chat, and other JavaScript-based apps; its advantage is polling/broadcasts from server to clients rather than handling hundreds of individual requests.
Security-wise, JavaScript apps can be abused without even using tools if attackers spam your servers, but that is something you should handle on the web server, or simply make sure spam is detected and blocked when it is repeated at an inhuman speed. | 2 | 3 | 0 | I am planning to build a web application which is security critical and I need to decide what technology to use on the back end.
The options I am considering are Python (mostly with the Django framework) and NodeJS (possibly with express.js). From the security point of view I would like to know the pros and cons of using each of these technologies. | Python vs NodeJS web applications from a security point of view | 0.099668 | 0 | 0 | 2,173 |
29,039,798 | 2015-03-13T18:49:00.000 | 1 | 0 | 0 | 0 | python,django,node.js,security,express | 29,040,520 | 2 | false | 1 | 0 | Disclaimer: I'm not a super expert on the topic, but I have worked a bit with both Node and Django.
I'd say it pretty much depends on what you're doing and how you set everything up, but with Django you're pretty much forced to deploy behind Apache/Gunicorn (with Nginx), so you have that extra layer there that you can use as an additional layer of security. Django also has a lot of built-in packages to help with authentication, users, etc.
But honestly it boils down to how well structured your application is. I'd personally prefer Python for building a secure application, as for me it's easier to wrap my head around OOP logic in Python than to try to structure all the callbacks correctly in Node. | 2 | 3 | 0 | I am planning to build a web application which is security critical and I need to decide what technology to use on the back end.
The options I am considering are Python (mostly with the Django framework) and NodeJS (possibly with express.js). From the security point of view I would like to know the pros and cons of using each of these technologies. | Python vs NodeJS web applications from a security point of view | 0.099668 | 0 | 0 | 2,173 |
29,041,356 | 2015-03-13T20:26:00.000 | 5 | 0 | 1 | 1 | python,osx-mountain-lion,numba | 29,104,989 | 2 | false | 0 | 0 | Ok, I needed to install llvm first. My problem was that I was installing LLVMLITE not LLVM.
So brew install llvm and then locating llvm-config in the Cellar directory solved my problem. | 1 | 9 | 0 | I am trying to install numba on an OS X Mountain Lion. I had tried the pip install way but didn't work, so I have downloaded from the GIT respositories the zip files. When trying to install numba I realized that I need LLVM first.
I downloaded and unpacked llvm into the Download folder. The README instructions are: "If your LLVM is installed in a non-standard location, first point the LLVM_CONFIG environment variable to the path of the corresponding llvm-config executable."; a message compatible with the RunTimeError I get when running the python setup.py install command.
My problem is that I don't understand what to do in order to make the LLVM_CONFIG environment variable point to the corresponding llvm-config executable.
Any help? Thanks | How to point LLVM_CONFIG environment variable to the path for llvm-config | 0.462117 | 0 | 0 | 23,428 |
29,041,571 | 2015-03-13T20:40:00.000 | 5 | 1 | 0 | 0 | python,windows,file-extension,file-association | 63,965,668 | 2 | false | 0 | 0 | press the windows key
type cmd
right click the result and choose "run as administrator"
assoc .foo=foofile
ftype foofile="C:\Users\<user>\AppData\Local\Programs\Python\PYTHON~1\python.exe" "C:\<whatever>\fooOpener.py" "%1" %*
Use pythonw.exe if it's a .pyw file (to prevent a cmd window from spawning).
If you want to use an existing file type, you can find its alias by not assigning anything. For example, assoc .txt returns .txt=txtfile. | 1 | 17 | 0 | I want to do the following:
Save numeric data in a CSV-like format, with a ".foo" extension;
Associate the ".foo" file extension with some Python script, which in turn opens the .foo file, reads its content, and plots something with a plotting library (most probably matplotlib).
The use-case would be: double-click the file, and its respective plot pops up right away.
I wonder how I should write a python script in order to do that.
Besides, the windows "open with" dialog only allows me to choose executables (*.exe). If I choose "fooOpener.py", it doesn't work. | Associate file extension to python script, so that I can open the file by double click, in windows | 0.462117 | 0 | 0 | 8,512 |
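Once the association is in place, Windows passes the double-clicked file's path to the script as sys.argv[1]. A minimal sketch of what fooOpener.py might look like (the column layout of the .foo file is an assumption: first column x, remaining columns y series):
import sys
import csv
import matplotlib.pyplot as plt

path = sys.argv[1]  # the double-clicked .foo file
with open(path) as f:
    rows = [[float(cell) for cell in row] for row in csv.reader(f) if row]

x = [r[0] for r in rows]
for col in range(1, len(rows[0])):
    plt.plot(x, [r[col] for r in rows])
plt.show()  # the plot pops up right away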
29,042,787 | 2015-03-13T22:07:00.000 | 0 | 0 | 0 | 0 | python,django,rest | 29,043,714 | 1 | false | 1 | 0 | I think the key is to use the models differently. If you use one-to-many or ForeignKey references in your model construction, you can link different types of data together more dynamically, then access that data from the parent object.
For example, for your user, you could create a basic user model and reference it in many other models, such as interests and occupation, and have those models store very dynamic data.
When you have the root user model object, you can access its foreign-key objects either by iterating through the dictionary of fields returned by the object or by accessing the foreign-key references directly with model.reference_set.all(). | 1 | 0 | 0 | Well, I'm taking my first steps with Django and Django REST framework. The problem I face is that all the examples throughout the whole Internet are based on hard-coded models. But the whole concept of models frustrates me a little bit, because I'm used to dealing with varied data that comes from numerous sources (various relational databases and NoSQL stores, all that stuff). So, I do not want to stick to a particular model with a fixed number of predefined fields; I want to specify the fields just at the moment when a user goes to a particular page of my app.
Let's say I have a table or a collection in one of my databases which stores information about users. It has all kinds of fields (not just email, name, and the like, as in all those examples throughout the web). So when a user goes to /users/, I connect to my database, get my table, set my cursor, and populate my resulting dictionary with all the rows and fields I need. The REST API does all the rest.
So, I need a "first-step" example which starts from data, not from a model: you have a table "items" in your favorite database; when a user goes to /items/, he or she gets all the data from that table. To make such a simplistic API, you should do this and this... I need that kind of example. | Simple REST API not based on a particular predefined model | 0 | 0 | 0 | 27 |
29,044,228 | 2015-03-14T00:51:00.000 | 1 | 0 | 0 | 0 | python,treeview,gtk3,treemodel | 29,044,826 | 1 | true | 0 | 1 | You can leave it to Python's garbage collector, the same way it would go if you'd close the application (it will call g_object_unref on both).
That said, remember that the idea behind the separation of models and views is that you can mix them the way you like, i.e. display the same model in different views, or even alternately display different models in the same view. That you need to replace both may indicate problems in the way you are designing your UI. | 1 | 1 | 0 | I'm programming Python + Gtk3.
I have a Gtk.TreeView with a Gtk.ListStore as model.
At some point in the program I need to destroy the treeview in order to put a fresh one in its place.
However, I don't know what happens with the model. Should I destroy it, clear it, or just leave it there and let Python eat it?
I've also thought about recycling the same model for the new treeview, but I'd prefer not to: too much trouble...
Thanks! | Should I 'destroy' a liststore (model of treeview) when destroy treeview? | 1.2 | 0 | 0 | 79 |
29,044,322 | 2015-03-14T01:06:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,cron,virtual-machine,google-compute-engine | 29,050,842 | 2 | true | 1 | 0 | The finest resolution of a cron job is 1 minute, so you cannot run a cron job once every 10 seconds.
In your place, I'd run a Python script that starts a new thread every 10 seconds to do your MySQL work, accompanied by a cronjob that runs every minute. If the cronjob finds that the Python script is not running, it would restart it.
(i.e., the crontab line would look like * * * * * /command/to/restart/Python/script).
Worst-case scenario, you'd miss 5 runs of your MySQL worker threads (50 seconds' worth). | 1 | 0 | 0 | I have a python script that queries some data from several web APIs and after some processing writes it to MySQL. This process must be repeated every 10 seconds. The data needs to be available to Google Compute instances that read MySQL and perform CPU-intensive work.
For this workflow I thought about using GCloud SQL and running GAppEngine to query the data. NOTE: the Python script does not run on GAE directly (it imports pandas and scipy) but should run on a properly set up App Engine Managed VM.
Finally the question: is it possible and would it be reasonable to schedule a cron job on a GApp Managed VM to run a command invoking my data collection script every 10 seconds? Any alternatives to this approach? | Cron job on google cloud managed virtual machine | 1.2 | 1 | 0 | 1,243 |
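A minimal sketch of the worker half of that setup (do_mysql_work is a placeholder for the API-query-and-write job; the crontab watchdog line shown above restarts this script if it dies):
import threading
import time

def do_mysql_work():
    pass  # placeholder: query the web APIs and write the results to MySQL

while True:
    # Each run gets its own thread so a slow job doesn't delay the next tick.
    threading.Thread(target=do_mysql_work).start()
    time.sleep(10)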
29,045,905 | 2015-03-14T05:49:00.000 | 0 | 0 | 0 | 0 | python,qt,ubuntu,pyqt,themes | 29,109,067 | 1 | false | 0 | 1 | It turns out QMainWindow.menuBar().palette() has colors matching the Unity menu at the top of the screen (I just learned it's called the appmenu). That makes it possible to use the same color as the menu text.
But there is a problem. The version of the icon for the QIcon.Active mode is not used for the active menu item. That can be seen in the high-contrast mode: the text color changes, but the icon color doesn't. The icon "disappears" when the menu item is selected. That's not good enough for a polished program.
I tried many workarounds, such as adding extra pixmaps to the icon with QIcon.addPixmap(). Nothing works. The appmenu operates in a separate process and doesn't use our "Active" icon. So I'm going to draw real icons that look good on any reasonable background and don't need to change color with the widget.
Too bad. I expected that the QIcon modes and states were made specifically for such tasks. | 1 | 1 | 0 | I want to add colorless icons to menu items. To look reasonable, the icons should have the same color as the text used in the menu. It works everywhere except Ubuntu. The problem is that the default Ubuntu Unity theme uses different colors for the main menu and for other text (e.g. popup menus) in the application. I need the color specifically used by the main menu.
QApplication.palette().color(QPalette.Text) returns the dark gray color used by text in the application. It's almost invisible on the dark gray menu background.
I tried the palette() method on a QMenu descendant, but it returns the same value as QApplication.palette(). | PyQt: How to find the menu text color that works with Ubuntu Unity? | 0 | 0 | 0 | 332 |
29,048,568 | 2015-03-14T11:49:00.000 | 1 | 1 | 1 | 0 | java,python,c++,c,arrays | 29,048,713 | 2 | false | 0 | 0 | To extend the answer a little:
Python manages object lifetime by reference counting. When an object has no references left, it will be finalized and released.
In Java, I think essentially the same thing happens, except that reclamation is handled by the garbage collector. | 1 | 1 | 0 | I am getting a little bit confused about returning arrays and objects created locally in a function call. So I believe:
C - no objects; only arrays and structures can be created on the stack, so they will be deleted when the function returns. So it's not wise to send them as a return value to the calling module.
C++ - objects & structures reside in the heap, so objects can be returned but nothing else, i.e. arrays will still be destroyed when returning.
Java - I can return arrays as well as objects; I guess arrays moved to the heap here?
Python - same as Java: objects and arrays created locally can be returned to the calling module as a reference.
Please correct me if I am wrong somewhere. Now why would Java/Python put arrays in the heap? Is it because they are interpreted languages? So will no compiled language let me return locally created arrays back to the calling module?
Thanks in advance. | returning arrays and objects created in functions in various languages | 0.099668 | 0 | 0 | 87 |
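On the Python side, a quick illustration of why returning a locally created list is safe: the caller's reference keeps the object alive after the function returns.
def make_list():
    local = [1, 2, 3]  # created inside the function
    return local       # returning hands the reference to the caller

result = make_list()   # the reference count is still > 0, so nothing is freed
print(result)          # [1, 2, 3]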
29,049,985 | 2015-03-14T14:23:00.000 | 1 | 0 | 0 | 0 | python,pandas,time-series | 29,050,296 | 3 | false | 0 | 0 | Use this function to create the new column...
DataFrame.shift(periods=1, freq=None, axis=0, **kwds)
Shift index by desired number of periods with an optional time freq | 1 | 1 | 1 | I realize this is a fairly basic question, but I couldn't find what I'm looking for through searching (partly because I'm not sure how to summarize what I want). In any case:
I have a dataframe that has the following columns:
* ID (each one represents a specific college course)
* Year
* Term (0 = fall semester, 1 = spring semester)
* Rating (from 0 to 5)
My goal is to create another column for Previous Rating. This column would be equal to the course's rating the last time the course was held, and would be NaN for the first offering of the course. The goal is to use the course's rating from the last time the course was offered in order to predict the current semester's enrollment. I am struggling to figure out how to find the last offering of each course for a given row.
I'd appreciate any help in performing this operation! I am working in Pandas but could move my data to R if that'd make it easier. Please let me know if I need to clarify my question. | Pandas Time-Series: Find previous value for each ID based on year and semester | 0.066568 | 0 | 0 | 1,352 |
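Combined with groupby, the shift call from the answer yields the previous rating per course. A hedged sketch with toy data (column names taken from the question):
import pandas as pd

df = pd.DataFrame({'ID': [101, 101, 101, 202],
                   'Year': [2013, 2013, 2014, 2014],
                   'Term': [0, 1, 0, 0],
                   'Rating': [3.5, 4.0, 4.2, 2.9]})

# Order offerings chronologically within each course, then shift by one.
df = df.sort_values(['ID', 'Year', 'Term'])  # older pandas spells this df.sort(...)
df['Previous Rating'] = df.groupby('ID')['Rating'].shift(1)
print(df)  # the first offering of each ID gets NaN, as required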
29,055,393 | 2015-03-14T23:17:00.000 | 0 | 0 | 1 | 0 | python,pip,anaconda | 61,361,885 | 7 | false | 0 | 0 | Add the Scripts folder location to PATH; for an individual-user installation it will be "C:\Users\<user>\Anaconda3\Scripts", and for an all-users installation it can be found in Program Files: "C:\Program Files\Anaconda3\Scripts". | 3 | 11 | 0 | I'm using Anaconda on Windows x64. I'm trying to install some library using pip. However, the command line isn't recognizing pip or any other scripts. The folder that they are all in is in both the user and system PATH variables. pip is there and works if I use the entire file path. Is there a way to fix this? | pip is not recognized in Anaconda Prompt | 0 | 0 | 0 | 29,644 |
29,055,393 | 2015-03-14T23:17:00.000 | 2 | 0 | 1 | 0 | python,pip,anaconda | 38,292,562 | 7 | false | 0 | 0 | It worked for me if I start cmd and do cd C:\Users\ComputerName\Python27\Scripts
Then I typed in 'pip install "library"' and it worked!
If you don't know how to access cmd just press Win+R and type in cmd!
Hope it helped! | 3 | 11 | 0 | I'm using Anaconda on Windows x64. I'm trying to install some library using pip. However, the command line isn't recognizing pip or any other scripts. The folder that they are all in is in both the user and system PATH variables. pip is there and works if I use the entire file path. Is there a way to fix this? | pip is not recognized in Anaconda Prompt | 0.057081 | 0 | 0 | 29,644 |
29,055,393 | 2015-03-14T23:17:00.000 | 0 | 0 | 1 | 0 | python,pip,anaconda | 56,791,254 | 7 | false | 0 | 0 | I had this myself because my username contains the character "ï", which causes an encoding problem in the PATH variable. As a result, scripts cannot be found by Anaconda.
I had to install Anaconda for everyone (all users) and not just the current user to solve this problem. | 3 | 11 | 0 | I'm using Anaconda on Windows x64. I'm trying to install some library using pip. However, the command line isn't recognizing pip or any other scripts. The folder that they are all in is in both the user and system PATH variables. pip is there and works if I use the entire file path. Is there a way to fix this? | pip is not recognized in Anaconda Prompt | 0 | 0 | 0 | 29,644 |
29,055,698 | 2015-03-14T23:57:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,function,class,order-of-execution | 29,055,726 | 3 | true | 0 | 0 | The order of class attributes does not matter except in specific cases (e.g. properties when using decorator notation for the accessors). The class object itself will be instantiated once the class block has exited. | 2 | 6 | 0 | When defining variables and functions within a class in Python 3.x, does it matter in which order you define the variables and functions?
Is the class code pre-compiled before you call the class in main? | Does the order of variables and functions matter in Class (Python) | 1.2 | 0 | 0 | 4,541 |
29,055,698 | 2015-03-14T23:57:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,function,class,order-of-execution | 29,055,778 | 3 | false | 0 | 0 | By default, all names defined in the block of code right within the class statement become keys in a dict (that's passed to the metaclass to actually instantiate the class when said block is all done). In Python 3 you can change that (the metaclass can tell Python to use another mapping, such as an OrderedDict, if it needs to make definition order significant), but that's not the default. | 2 | 6 | 0 | When defining variables and functions within a class in Python 3.x, does it matter in which order you define the variables and functions?
Is the class code pre-compiled before you call the class in main? | Does the order of variables and functions matter in Class (Python) | 0.197375 | 0 | 0 | 4,541 |
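A quick illustration of both answers: names used inside a method body are looked up when the method is called, while statements in the class block itself run top to bottom.
class Example:
    def early(self):
        return self.late()  # OK: self.late is resolved at call time

    def late(self):
        return self.value

    value = 42              # class-body statements execute in order,
    doubled = value * 2     # so a direct reference needs the name already bound

print(Example().early())    # 42
print(Example.doubled)      # 84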
29,056,302 | 2015-03-15T01:35:00.000 | 0 | 0 | 0 | 0 | python,numpy,casting | 29,058,672 | 1 | true | 0 | 0 | You can't change the type of parts of an ordinary ndarray. An ndarray requires all elements in the array to have the same numpy type (the dtype), so that mathematical operations can be done efficiently. The only way to do this is to change the dtype to object, which allows you to store arbitrary types in each element. However, this will drastically reduce the speed of most operations, and make some operations impossible or unreliable (such as adding two arrays). | 1 | 1 | 1 | I have an arff file as input. I read the arff file and put the element values in a numpy ndarray. Now my arff file contains some '?' as some of the elements. Basically these are property values of matrices calculated by anamod; whichever values anamod cannot calculate, it plugs in a '?' character for those. I want to do a Naive Bayes, Random Forest, etc. prediction for my data. So to handle the '?' I want to use an imputer, which looks like:
Imputer(missing_values='NaN', strategy='mean', axis=0)
The missing_values above is of type string, of course. My question is how to change the type of a few numpy ndarray elements from float to string. I used my_numpy_ndarray.astype('str') == 'NaN' to check for NaN values and I could do it successfully, but I am not sure how to change the type of a numpy ndarray float element to string. | change the type of numpyndarray float element to string | 1.2 | 0 | 0 | 117 |
29,058,419 | 2015-03-15T07:48:00.000 | 2 | 0 | 0 | 0 | android,python,kivy | 29,059,931 | 2 | true | 0 | 1 | Simply put, you can use Activities (starting them using pyjnius), but not really define them; at least, that's not usually the way one works with Kivy.
Kivy doesn't adjust its way of working for the targeted platform; it uses its own systems and makes them work there. As far as I know, the advantage of separating Activities on Android is just a way to make your code more neatly organized, and it doesn't imply performance changes. It can allow you to start your app in various ways (from a share, for example), but you can do that with p4a/buildozer too, by dispatching messages about the intent, if you need to. With Kivy, you'll organise your code like you would for any Python project, using modules. | 2 | 5 | 0 | As we all know, when developing an Android app in native Java, we use activities. I was wondering, when developing Android apps in Python (with Kivy), does Kivy implement activities for the apps itself or not? I don't see any activity implementation in the sample code.
If it doesn't implement activities, do we lose performance or any functionality in the application compared to coding in native Java? | Python - Does Kivy implement activities in Android apps? | 1.2 | 0 | 0 | 1,092 |
29,058,419 | 2015-03-15T07:48:00.000 | 1 | 0 | 0 | 0 | android,python,kivy | 29,058,632 | 2 | false | 0 | 1 | Kivy is a great tool for developing Android apps. The best advantage of using Kivy is that it is cross-platform, and the same project can be used to publish apps on multiple platforms.
However, it has some performance-related disadvantages (as do most cross-platform tools like Unity, Cocos, etc.). If you're developing only for Android, I'd suggest taking a look at development tools which use Java. This will help create a smaller APK file, which in turn helps with better user retention.
I guess you are a real loyal fan of Python, but I have to tell you about its advantages and disadvantages.
Advantages
Pure Python and its almightiness are in your hands.
Relatively simple to deploy with buildozer, without any need to dive too deep into the details of a particular platform.
You can run your app on the desktop as well, so there is no need to install extra emulators/VMs to get it to work.
Disadvantages
Not that much information on the Internet, even on Stack Overflow.
Pretty messy documentation.
No obvious way to test the application.
Non-obvious mechanisms for placing widgets, especially in the built-in layouts, which causes situations like: you want to place a widget in the center of its parent, but Kivy places it anywhere but where you want it to be.
Official examples are quite ugly, so you may get a false vision of how your application could look. | 2 | 5 | 0 | As we all know, when developing an Android app in native Java, we use activities. I was wondering, when developing Android apps in Python (with Kivy), does Kivy implement activities for the apps itself or not? I don't see any activity implementation in the sample code.
If it doesn't implement activities, do we lose performance or any functionality in the application compared to coding in native Java? | Python - Does Kivy implement activities in Android apps? | 0.099668 | 0 | 0 | 1,092 |
29,060,962 | 2015-03-15T13:11:00.000 | 8 | 0 | 0 | 0 | python,numpy,scikit-learn | 29,061,597 | 1 | false | 0 | 0 | OK, I got it. After I used Imputer(missing_values='NaN', strategy='median', axis=1) and imp.fit(X2), I also had to write:
X2 = imp.fit_transform(X2). The reason is that sklearn.preprocessing.Imputer.fit_transform returns a new array; it doesn't alter the argument array. | 1 | 9 | 1 | I'm using numpy for reading an arff file and I'm getting the following error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
I used np.isnan(X2.any()) and np.isfinite(X2.all()) to check if it's a NaN or infinite case, but it's none of these. This means it's the third case, which is infinity or a value too large for dtype('float64').
I would appreciate it if someone could tell me how to take care of this error.
Thanks. | a value too large for dtype('float64') | 1 | 0 | 0 | 26,758 |
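Two details worth spelling out: the NaN check in the question has its parentheses in the wrong place, and fit_transform must be assigned back. A hedged sketch using the sklearn API of that era (X2 is the array loaded from the arff file):
import numpy as np
from sklearn.preprocessing import Imputer  # old sklearn API, removed in 0.22+

print(np.isnan(X2).any())  # correct elementwise check for any NaN
# np.isnan(X2.any()) only tests a single boolean, which is why it misled.

imp = Imputer(missing_values='NaN', strategy='median', axis=1)
X2 = imp.fit_transform(X2)  # returns a new array; reassign it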
29,061,361 | 2015-03-15T13:51:00.000 | 4 | 0 | 1 | 0 | python,git,svn,ipython-notebook | 29,061,641 | 1 | true | 0 | 0 | No, you should not. Checkpoints are temporary snapshots of your notebooks, in case anything goes wrong (e.g. a power outage). Think of a checkpoint as the result of saving your notebook. Do you commit each time you make a change and save that change to disk? | 1 | 4 | 0 | Should we be committing the .ipynb_checkpoints directory into Git or SVN version control for an IPython Notebook? | Version control for IPython notebook | 1.2 | 0 | 0 | 501 |
29,062,125 | 2015-03-15T15:04:00.000 | 0 | 1 | 0 | 0 | python,fabric | 29,063,107 | 1 | false | 1 | 0 | We have a local credentials YAML file that contains all of these; fab reads the credentials from it and uses them only during the deployment. | 1 | 0 | 0 | I am trying to learn how to quickly spin up a DigitalOcean / EC2 server to temporarily run a python worker script (for parallel performance gains). I can conceptually grasp how to do everything except how/where to store certain auth credentials. These would be things like:
git username / pass to access private repos
AWS auth credentials to access an SQS queue
database credentials
etc.
Where do I store this stuff when I deploy via a fabric script? A link to a good tutorial would be very helpful. | Best way to store auth credentials on fabric deploys? | 0 | 0 | 0 | 75 |
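A minimal sketch of the pattern from the answer (the file name and keys are placeholders; keep the YAML file out of version control):
import yaml  # PyYAML

with open('credentials.yml') as f:
    creds = yaml.safe_load(f)

git_user = creds['git']['username']
aws_key = creds['aws']['access_key_id']
db_password = creds['db']['password']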
29,063,439 | 2015-03-15T16:55:00.000 | 0 | 0 | 1 | 0 | python,elasticsearch,elasticsearch-plugin | 29,067,741 | 1 | true | 0 | 0 | There is no existing functionality to do this. If you no longer want the source document in the source index, you'll need to delete the document.
The source isn't stored separately in some kind of blob which indices "point" at. The source is an integral part of the index, so you can't move it... just copy it.
Sorry :( | 1 | 0 | 0 | I'm trying to move docs from one index to a different index, without maintaining the source docs.
I'm using scroll to do this now, but, so far, I'm reindexing and it's keeping the source documents on the source index. I don't want this to happen, I want a move not a copy...
Thanks a lot. | Move docs from one index to another | 1.2 | 0 | 0 | 138 |
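A hedged sketch of the copy-then-delete workaround with the elasticsearch-py helpers (index names are placeholders; written against the 1.x-era API, where documents still carry a _type):
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()

def move_docs(src, dst):
    actions = []
    for hit in helpers.scan(es, index=src):
        actions.append({'_op_type': 'index', '_index': dst,
                        '_type': hit['_type'], '_id': hit['_id'],
                        '_source': hit['_source']})
        # Queue the deletion of the original right after the copy.
        actions.append({'_op_type': 'delete', '_index': src,
                        '_type': hit['_type'], '_id': hit['_id']})
    helpers.bulk(es, actions)

move_docs('old_index', 'new_index')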
29,064,875 | 2015-03-15T19:02:00.000 | 0 | 0 | 0 | 0 | python,cgi | 29,064,930 | 1 | true | 0 | 0 | It's really not possible with CGI; the original Common Gateway Interface dictates that the program be run from scratch for each request.
You'd want to use WSGI instead (a Python standard), which allows your application to be long-lived. WSGI in turn is easiest if you use a web framework such as Pyramid, Flask or Django; their integrations with databases like MySQL support connection pooling out of the box. | 1 | 0 | 0 | I'm new to Python and the mysql-python module. Is there any way to reuse a DB connection so that we don't have to connect() and close() every time a request comes in?
More generally, how can I keep 'state' on the server side? Can somebody give me a tutorial to follow or guide me somehow? Lots of thanks! | How to Reuse Database Connection under Python CGI? | 1.2 | 1 | 0 | 101 |
29,068,146 | 2015-03-16T01:00:00.000 | 0 | 0 | 0 | 1 | python,enthought | 31,955,940 | 1 | false | 0 | 0 | I'm not sure this directly answers your question, but you may get what you're after by using the @cached_property decorator to reduce the number of times the property is computed. I think there may be elements of "push" and "pull" style computations with properties. | 1 | 2 | 0 | In the Enthought Traits/UI system, is there another way, other than being included in another Property's depends_on list, that a Property can become a dependency of another Property?
I have a HasTraits subclass which has a property, chnl_h, that doesn't appear in any other Property's depends_on list but is behaving as if it were a dependency of another Property. That is, it is recalculating its value whenever one of its dependencies changes value, as opposed to only when its value is actually requested.
Thanks!
-db | Is there another way, other than the "depends_on" list, for a Property to become a dependency of another Property? | 0 | 0 | 0 | 154 |
29,068,229 | 2015-03-16T01:12:00.000 | 0 | 0 | 0 | 0 | python,opencl,pyopencl | 42,962,569 | 5 | false | 0 | 0 | CodeXL from AMD works very well. | 1 | 5 | 1 | I am trying to optimize a pyOpenCL program. For this reason I was wondering if there is a way to profile the program and see where most of the time is spent.
Do you have any idea how to approach this problem?
Thanks in advance
Andi
EDIT: For example, NVIDIA's nvprof for CUDA would do the trick for PyCUDA, but not for pyOpenCL. | Is there a way to profile an OpenCL or a pyOpenCL program? | 0 | 0 | 0 | 2,168 |
29,068,483 | 2015-03-16T01:50:00.000 | 1 | 0 | 1 | 0 | python | 29,068,533 | 6 | false | 0 | 0 | I would think of this as a set of two distinct elements, a float 1.00 and an int 1.
That is your problem. Python thinks of them as one element, the number one. Whether this number is represented as a float or int doesn't change the fact that it is the same number. | 3 | 1 | 0 | I'm having some trouble creating sets of float and int data types, such as
my_set = {1.00, 1}
I would think of this as a set of two distinct elements, a float 1.00 and an int 1. However, it defaults to:
>>> my_set
set([1])
Here, the float disappears and only the int remains. Things then get confusing since 1.00 in my_set returns True.
I am wondering if anyone can explain what is actually going on? Is there a way around this? | Python automatically simplifies floats to ints during set construction | 0.033321 | 0 | 0 | 50 |
29,068,483 | 2015-03-14T01:50:00.000 | 1 | 0 | 1 | 0 | python | 29,068,571 | 6 | false | 0 | 0 | I am wondering if anyone can explain what is actually going on? Is there a way around this?
The explanations given should be sufficient. The simplest way around it that I can see is, if you're getting this input from the user, to not convert it to numeric form before storing it in the set.
A more complex way would be to store each entry as a (value, type) tuple: {(1, int), (1.0, float)}, but this seems completely insane to me.
I would suggest you think long and hard before making a distinction between two numbers which are equal, based on the form of the representation that the user provides. | 3 | 1 | 0 | I'm having some trouble creating sets of float and int data types, such as
my_set = {1.00, 1}
I would think of this as a set of two distinct elements, a float 1.00 and an int 1. However, it defaults to:
>>> my_set
set([1])
Here, the float disappears and only the int remains. Things then get confusing since 1.00 in my_set returns True.
I am wondering if anyone can explain what is actually going on? Is there a way around this? | Python automatically simplifies floats to ints during set construction | 0.033321 | 0 | 0 | 50 |
29,068,483 | 2015-03-14T01:50:00.000 | 0 | 0 | 1 | 0 | python | 29,068,974 | 6 | false | 0 | 0 | Since (int) 1 and (float) 1.0 have the same hash, you can't get a set which has both 1 and 1.0.
You can implement your own set instead; use type() to distinguish between int and float. | 3 | 1 | 0 | I'm having some trouble creating sets of float and int data types, such as
my_set = {1.00, 1}
I would think of this as a set of two distinct elements, a float 1.00 and an int 1. However, it defaults to:
>>> my_set
set([1])
Here, the float disappears and only the int remains. Things then get confusing since 1.00 in my_set returns True.
I am wondering if anyone can explain what is actually going on? Is there a way around this? | Python automatically simplifies floats to ints during set construction | 0 | 0 | 0 | 50 |
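The behaviour all three answers describe comes down to equality plus hashing, and the (value, type) workaround keeps the two entries apart:
>>> hash(1) == hash(1.0)
True
>>> 1 == 1.0
True
>>> len({1.00, 1})                   # equal and same hash: the set keeps one element
1
>>> len({(1, int), (1.0, float)})    # (value, type) tuples stay distinct
2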
29,069,364 | 2015-03-16T03:47:00.000 | 2 | 0 | 1 | 1 | python,homebrew | 29,091,698 | 2 | false | 0 | 1 | brew reinstall pyqt --with-python3 will get you sorted! | 1 | 0 | 0 | I recently used homebrew to install pyqt (along with qt & sip), but get an import error whenever I try to import PyQt4 in Python 3 (which was also installed using homebrew). To confuse matters more, I am able to import PyQt4 on Python 2 via the terminal.
I'm totally new to working with Python packages and, with that, totally confused. Any thoughts on how I might be able to undo what I did and reinstall so that I can access PyQt via the usr/local/python3 installation?
Thanks in advance! | Python 3 can't find homebrew pyqt installation | 0.197375 | 0 | 0 | 562 |
29,073,061 | 2015-03-16T09:11:00.000 | 0 | 0 | 0 | 0 | module,biopython | 29,073,112 | 1 | false | 0 | 0 | import sys
print(sys.path)
This shows whether the directory of that module is on sys.path; if not, you may need to update sys.path to include it. | 1 | 0 | 0 | I am trying to run a script that uses the module Bio.SeqUtils.ProtParam from Biopython. I am on a Mac and I do have Biopython installed.
Thank you in advance for the help. | ImportError: No module named Bio.SeqUtils.ProtParam | 0 | 0 | 0 | 535 |
29,073,110 | 2015-03-16T09:13:00.000 | 0 | 0 | 1 | 0 | python,function,math,recursion,expression | 70,390,084 | 3 | false | 0 | 0 | Yes, it's possible; in Python this is called recursion,
and the best description is: "A physical-world example would be to place two parallel mirrors facing each other. Any object in between them would be reflected recursively." | 1 | 6 | 0 | The code I already have is for a bot that receives a mathematical expression and calculates it. Right now I have it doing multiply, divide, subtract and add. The problem though is I want to build support for parentheses and parentheses inside parentheses. For that to happen, I need to run the code I wrote for the expressions without parentheses on the expression inside the parentheses first. I was going to check for "(" and append the expression inside it to a list until it reaches a ")", unless it reaches another "(" first, in which case I would create a list inside a list. I would subtract, multiply and divide, and then the numbers that are left I just add together.
So is it possible to call a definition/function from within itself? | python - calling a function from within itself | 0 | 0 | 0 | 29,439 |
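A stripped-down sketch of the recursive idea for the bot: evaluate() calls itself whenever it meets an opening parenthesis (only addition is handled here, to keep it short; the other operators would slot into the same loop):
def evaluate(tokens):
    """Consume tokens from the front of the list, recursing on '('."""
    total = 0.0
    while tokens:
        tok = tokens.pop(0)
        if tok == '(':
            total += evaluate(tokens)  # the function calls itself
        elif tok == ')':
            return total
        elif tok != '+':
            total += float(tok)
    return total

print(evaluate(['1', '+', '(', '2', '+', '3', ')']))  # 6.0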
29,073,907 | 2015-03-16T09:57:00.000 | 0 | 1 | 0 | 0 | python,email,smtp | 29,073,968 | 1 | false | 0 | 0 | Short of sending an email and having someone respond to it, it is impossible to verify that an email address exists.
You can verify the SMTP server has a whois address, but that's it. | 1 | 0 | 0 | I want to check whether the given email id really exists or not on the SMTP server. Is it possible to check or not? If it is possible, please give me a suggestion on how we can do it. | How to check if an email exists or not in python | 0 | 0 | 1 | 1,440 |
29,076,384 | 2015-03-16T12:05:00.000 | 0 | 0 | 0 | 0 | python,web2py | 29,313,802 | 1 | false | 1 | 0 | Your comment hints at the answer: when you log into the admin session and then refresh your website, it is now accessed through the admin session, which has no client user logged in.
One solution is to use different browsers: one browser for admin and a different browser for the client. | 1 | 0 | 0 | I recently deployed a web2py app, and am going through the debugging phase. Part of the app includes an auth.wiki, which mostly works great. Last night I added several pages to the wiki with no problems.
However, today, whenever I navigate to the wiki or try to edit a page, I'm immediately logged out.
Any suggestions? I can't interact with the wiki if I'm not logged in...
EDIT: It's not just the wiki, I keep getting logged out of the whole site. Other users do not have this problem. It continues even when I select "remember me for 30 days" on login. | Why does my web2py app keep logging me out | 0 | 0 | 0 | 120 |
29,076,975 | 2015-03-16T12:36:00.000 | 5 | 0 | 1 | 0 | python,logging | 29,078,096 | 1 | true | 0 | 0 | Very simple (and undocumented) - modify the 'level' attribute:
myLogger.level = logbook.INFO | 1 | 3 | 0 | I have recently switched from using 'logging' to 'logbook'. So far, so good, but I am missing one critical piece of functionality: the ability to change the minimum level during runtime. In 'logging', I can call myLogger.setLevel(logging.INFO), but there is no equivalent method in logbook.
Anyone? | Python logbook - how to dynamically control logging level | 1.2 | 0 | 0 | 1,250 |