Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
23,703,538 | 2014-05-16T19:53:00.000 | 0 | 0 | 0 | 0 | python,neo4j,flask | 23,739,995 | 1 | false | 1 | 0 | Well, if this is taking too much time, you might want to implement your own REST client that uses a faster parser, or speed up neo4j-rest-client and submit a patch? | 1 | 0 | 0 | I am developing a web application using Flask and Neo4j. I use neo4j-rest-client for the Python side. When I query Neo4j using the Python shell, it takes 78 ms. But when I make a request within a Flask view it takes 0.8 seconds. I have profiled it and I see that neo4j-rest-client/request.py is responsible, because it takes 0.5 seconds. What do you think? | Python neo4j-rest-client takes too long within a flask view | 0 | 0 | 0 | 107 |
23,704,737 | 2014-05-16T21:21:00.000 | 0 | 0 | 1 | 1 | python,executable,py2exe | 23,704,780 | 2 | false | 0 | 0 | On most *nix systems it is sufficient to put #!/usr/bin/python as the first line of the main script and then chmod +x /path/to/script.py. | 1 | 3 | 0 | I've googled and googled, and everything I've seen has directed me to py2exe. I've looked at it and downloaded the latest version of it, but it says I have to have Python 2.6 to use it! Does this mean I have to use Python 2.6 rather than 3.3.3, or is there an alternative to py2exe?
Edit: Thanks! I can now use cx_Freeze, but is there a way I can compile it further so I don't have to run it from a different folder? Or should I create a batch file calling the .exe from the command line and convert the batch file to an executable? | Making Python 3.3.3 scripts executable? | 0 | 0 | 0 | 293 |
23,707,917 | 2014-05-17T05:26:00.000 | 1 | 0 | 1 | 0 | python-2.7,imdb,imdbpy | 23,720,109 | 1 | true | 0 | 0 | The IMDbPY objects emulate the behavior of dictionaries, so you can get the list of its keys with: item.keys() and introspect it to get its attributes: dir(item) | 1 | 0 | 0 | Is there a list of keywords that can be used?
I.e.: in the examples in the docs, it has print item['long imdb canonical title'], item.movieID
Where is a list of the keywords for indexes in the data (like "['long imdb canonical title']") and a list of attributes (like ".movieID")? | List of keywords for title/person/character objects? | 1.2 | 0 | 0 | 264 |
23,708,895 | 2014-05-17T07:45:00.000 | 0 | 0 | 1 | 0 | python,django | 69,937,877 | 18 | false | 1 | 0 | pip3 install django -U
This will uninstall Django and then install the latest version of Django.
Use pip3 if you use Python 3.
-U is a shortcut for --upgrade | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | 0 | 0 | 0 | 118,680 |
23,708,895 | 2014-05-17T07:45:00.000 | 0 | 0 | 1 | 0 | python,django | 64,373,619 | 18 | false | 1 | 0 | From the Django Docs: if you are using a Virtual Environment and it is a major upgrade, you might want to set up a new environment with the dependencies first.
Or, if you have installed Django using the PIP, then the below is for you:
python3.8 -m pip install -U Django | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | 0 | 0 | 0 | 118,680 |
23,708,895 | 2014-05-17T07:45:00.000 | 3 | 0 | 1 | 0 | python,django | 60,089,696 | 18 | false | 1 | 0 | To upgrade the Django version, run this command in CMD:
python -m pip install -U Django | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | 0.033321 | 0 | 0 | 118,680 |
23,708,895 | 2014-05-17T07:45:00.000 | 1 | 0 | 1 | 0 | python,django | 59,508,823 | 18 | false | 1 | 0 | I think after updating your project, you have to restart the server. | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | 0.011111 | 0 | 0 | 118,680 |
23,708,895 | 2014-05-17T07:45:00.000 | 1 | 0 | 1 | 0 | python,django | 23,711,285 | 18 | false | 1 | 0 | You can use the upgraded version after upgrading.
You should check that all your tests pass before deploying :-) | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | 0.011111 | 0 | 0 | 118,680 |
23,708,895 | 2014-05-17T07:45:00.000 | 0 | 0 | 1 | 0 | python,django | 62,312,728 | 18 | false | 1 | 0 | you must do the following:
1- Update pip
python -m pip install --upgrade pip
2- If you already install Django update by using the following command
pip install --upgrade Django
or you can uninstall it using the following command
pip uninstall Django
3- If you don't install it yet use the following command
python -m pip install Django
4- Type your code
Enjoy | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | 0 | 0 | 0 | 118,680 |
23,708,895 | 2014-05-17T07:45:00.000 | -2 | 0 | 1 | 0 | python,django | 71,192,619 | 18 | false | 1 | 0 | To install the new version of Django:
pip install Django==4.0.2 | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | -0.022219 | 0 | 0 | 118,680 |
23,708,895 | 2014-05-17T07:45:00.000 | 13 | 0 | 1 | 0 | python,django | 30,253,720 | 18 | false | 1 | 0 | Use this command to get all available Django versions: yolk -V django
Type pip install -U Django for latest version, or if you want to specify version then use pip install --upgrade django==1.6.5
NOTE: Make sure you test locally with the updated version of Django before updating production. | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | 1 | 0 | 0 | 118,680 |
23,708,895 | 2014-05-17T07:45:00.000 | 2 | 0 | 1 | 0 | python,django | 41,422,829 | 18 | false | 1 | 0 | sudo pip install --upgrade django
also upgrade the DjangoRestFramework:
sudo pip install --upgrade djangorestframework | 9 | 64 | 0 | My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened. | How to upgrade django? | 0.022219 | 0 | 0 | 118,680 |
23,713,527 | 2014-05-17T16:14:00.000 | 4 | 1 | 1 | 0 | python,file-io,raspberry-pi | 23,713,635 | 1 | false | 0 | 0 | Yes and no. Data is buffered at different places in the process of writing: the file object of python, the underlying C-functions, the operating system, the disk controller. Even closing the file, does not guarantee, that all these buffers are written physically. Only the first two levels are forced to write their buffers to the next level. The same can be done by flushing the filehandle without closing it.
As long as the power-off can occur anytime, you have to deal with the fact, that some data is lost or partially written.
Closing a file is important to free up the operating system's limited resources, but this is of no concern in your setup. | 1 | 6 | 0 | I'm about to write a program for a racecar that creates a txt file and continuously adds new lines to it. Unfortunately I can't close the file, because when the car shuts off, the Raspberry (which the program is running on) also gets shut down. So I have no chance of closing the txt.
Is this a problem? | What happens if I don't close a txt file | 0.664037 | 0 | 0 | 479 |
23,715,822 | 2014-05-17T20:32:00.000 | 0 | 0 | 0 | 0 | python,compilation,pygame,exe,cx-freeze | 28,030,394 | 1 | false | 0 | 1 | From what I've done, no, you cannot. The best ways imo are as follows:
Making folders (files>images) to make it less desirable to look for them.
(Not sure how well this would work with pygame) but hidden folders might work. I used hidden folders for file writing in python.
Otherwise I think that it wouldn't be so bad if someone were to just play a sound file or look at your artwork; is there a specific problem with it? | 1 | 1 | 0 | Can I compile files like:
- Images,
- sounds,
- fonts,
- etc..
For example, if I have "click.wav", I don't want to let users play it themselves. | Compiled my Game made in pygame to .exe. Is there any way to compile all files? | 0 | 0 | 0 | 416 |
23,716,064 | 2014-05-17T21:04:00.000 | 0 | 1 | 0 | 0 | java,android,python,c++ | 31,828,819 | 1 | false | 1 | 1 | You have to point JAVA_HOME to this path:
C:\android\Java\jdk1.8.0_05 | 1 | 1 | 0 | So I am trying to build a project for cocos2d-x. I'm currently in cmd, and when I type python android-build.py -p 19 cpp-tests it starts making the project, but then I get an error that the build failed. The problem is that it can't find the javac compiler.
"Perhaps JAVA_HOME does not point to the JDk. It is currently set to
"c:/Program Files/Java/jre7"
The problem is that in system variables I made a new variable called JAVA_HOME and pointed it to C:\android\Java\jdk1.8.0_05\bin, but I am still getting that error. What should I do? | JAVA_HOME "bug" | 0 | 0 | 0 | 90 |
23,716,724 | 2014-05-17T22:31:00.000 | 1 | 0 | 0 | 0 | android,python,django,localhost,django-rest-framework | 47,569,447 | 3 | false | 1 | 0 | I tried the above, but it failed to work in my case. Then, when running
python manage.py runserver 0.0.0.0:8000 I also had to add my IP to ALLOWED_HOSTS in settings.py, which solved this issue.
e.g.
Add your IP to ALLOWED_HOSTS in settings.py:
ALLOWED_HOSTS = ['192.168.XXX.XXX']
Then run the server
python manage.py runserver 0.0.0.0:8000 | 1 | 3 | 0 | So I did a research before posting this and the solutions I found didn't work, more precisely:
-Connecting to my laptop's IPv4 192.168.XXX.XXX - didn't work
-Connecting to 10.0.2.2 (plus the port) - didn't work
I need to test an API I built using Django REST Framework so I can get the JSON it returns, but I can't access it through an Android app I'm building (I'm testing with a real device, not an emulator). Internet permissions are set in the Manifest and I can access remote websites normally. I just can't reach my laptop's localhost (they are in the same network).
I'm pretty new to Android, and to Python and Django as well (I used Django REST Framework to build the API).
EDIT: I use localhost:8000/snippets.json or something like this to connect on my laptop.
PS: I read something about the XAMPP server... do I need it in this case?
Thanks in advance | Can't access my laptop's localhost through an Android app | 0.066568 | 0 | 1 | 2,908 |
23,716,904 | 2014-05-17T22:58:00.000 | 1 | 0 | 0 | 0 | python,numpy,scipy,signal-processing,fft | 23,717,381 | 1 | true | 0 | 0 | If your IFFT's length is different from that of the FFT, and the length of the IFFT isn't composed of only very small prime factors (2,3,etc.), then the efficiency can drop off significantly.
Thus, this method of resampling is only efficient if the two sample rates are different by ratios with small prime factors, such as 2, 3 and 7 (hint). | 1 | 0 | 1 | I'm trying to resample a 1-D signal using an FFT method (basically, the one from scipy.signal). However, the code is taking forever to run, even though my input signal is a power of two in length. After looking at profiling, I found the root of the problem.
Basically, this method takes an FFT, then removes part of the fourier spectrum, then takes an IFFT to bring it back to the time domain at a lower sampling rate.
The problem is that that the IFFT is taking far longer to run than the FFT:
ncalls tottime percall cumtime percall filename:lineno(function)
1 6263.996 6263.996 6263.996 6263.996 basic.py:272(ifft)
1 1.076 1.076 1.076 1.076 basic.py:169(fft)
I assume that this has something to do with the number of Fourier points remaining after the cutoff. That said, this is an incredible slowdown, so I want to make sure that:
A. This behavior is semi-reasonable and isn't necessarily a bug.
B. What can I do to avoid this problem and still downsample effectively?
Right now I can pad my input signal to a power of two in order to make the FFT run really quickly, but not sure how to do the same kind of thing for the reverse operation. I didn't even realize that this was an issue for IFFTs :P | IFFT taking orders of magnitude more than FFT | 1.2 | 0 | 0 | 285 |
23,717,539 | 2014-05-18T00:48:00.000 | 0 | 0 | 0 | 0 | python,selenium | 23,717,600 | 2 | false | 0 | 0 | Sure it is possible, but you have to instruct selenium to enter these links one by one as you are working within one browser.
In case the pages do not have their links rendered by JavaScript in the browser, it would be much more efficient to fetch these pages by direct HTTP request and process them this way. In this case I would recommend using requests. However, even with requests it is up to your code to locate all URLs in the page and follow up with fetching those pages.
There might also be other Python packages which are specialized in this kind of task, but here I cannot serve with real experience. | 1 | 1 | 0 | I'd like to know if it is possible to browse all links in a site (including the parent links and sublinks) using Python Selenium (example: yahoo.com),
fetch all links in the homepage,
open each one of them
open all the links in the sublinks, down to three or four levels.
I'm using selenium on python.
Thanks
Ala'a | Browse links recursively using selenium | 0 | 0 | 1 | 446 |
23,718,920 | 2014-05-18T05:42:00.000 | 1 | 0 | 0 | 0 | python,sockets,interactive,keystroke | 23,719,308 | 1 | true | 0 | 0 | What you want is not possible. As you say, you want to write a socket based cmd like server. The server will open a socket and listen for data from the client. Now it is possible to read socket input character by character (which is not the same as non-blocking BTW), but that will not help you.
It is up to the client program to decide how and when to send the data. So if the client side program decides to "eat" tab and control characters, then you will simply not see them. So if you want to process keystrokes one by one, you will also need a client application. | 1 | 3 | 0 | I am writing a socket based "python cmd like" server module which can support cli interactive functions such as autocompletion or command history, by doing that a simple "telnet" or "nc" client side may able to connect to server to read/set something on server side.
After searching, I found a lot of modules that can do the "cmd" part, such as the Python standard module "cmd", "ipython", or even a vty simulator; however, I cannot find a module that can actually bind to a socket directly to detect keystrokes such as the "tab" key or "control+c" on the client side. Most of them are only able to process line(s) read, which is not suitable for autocompletion with a tab press or command history with an up/down press.
I think this question can be simplify to:
Is it possible to read socket keystroke input non-blocking, then process this key input value somehow on the server side - for example, ASCII code + 1 - then echo it back to the socket to show on the client side?
Thank you for your help. | How to handle keystroke in python socket? | 1.2 | 0 | 1 | 1,014 |
23,720,875 | 2014-05-18T10:20:00.000 | 14 | 0 | 1 | 0 | python,opencv,computer-vision,draw | 56,874,993 | 4 | false | 0 | 0 | As the other answers said, the function you need is cv2.rectangle(), but keep in mind that the coordinates for the bounding box vertices need to be integers if they are in a tuple, and they need to be in the order of (left, top) and (right, bottom). Or, equivalently, (xmin, ymin) and (xmax, ymax). | 1 | 106 | 0 | I'm having trouble with import cv in my python code.
My issue is I need to draw a rectangle around regions of interest in an image.
How can this be done in python? I'm doing object detection and would like to draw a rectangle around the objects I believe I've found in the image. | How to draw a rectangle around a region of interest in python | 1 | 0 | 0 | 229,215 |
23,721,230 | 2014-05-18T11:07:00.000 | 0 | 0 | 1 | 0 | python,dictionary,floating-point,key | 62,230,686 | 6 | false | 0 | 0 | Another way would be enter the keys as strings with the point rather than a p and then recast them as floats for plotting.
Personally, if you don't insist on the dict format, I would store the data as a pandas dataframe with the pH as a column as these are easier to pass to plotting libraries | 1 | 28 | 0 | I am developing a class for the analysis of microtiter plates. The samples are described in a separate file and the entries are used for an ordered dictionary. One of the keys is pH, which is usually given as float. e.g 6.8
I could import it as a decimal with Decimal('6.8') in order to avoid a float as a dict key. Another solution would be to replace the dot with, e.g., p, like 6p8, or to write 6p8 in my sample description, thereby eliminating the problem at the beginning. But this would cause trouble later on, since I cannot plot a pH of 6p8 in my figures.
How would you solve this issue? | Float values as dictionary key | 0 | 0 | 0 | 37,680 |
23,721,725 | 2014-05-18T12:09:00.000 | 3 | 0 | 1 | 0 | python,matlab,numpy,scipy | 23,722,134 | 4 | false | 0 | 0 | From my experience, using Python is more rewarding, especially for a beginner in enginnering. In comparison to Matlab, Python is a general purpose language, and knowing it makes many more tasks than, say, signal analysis easy to accomplish. In my opinion it's easier to interface with external hardware or to do other tasks where you need a "glue" language.
And with respect to signal processing, numpy, scipy, and matplotlib are a very good choice! I never felt I would miss out on anything! It was rather the other way around that with Matlab I was missing all the general purpose stuff and the "batteries included" nature of Python. The number of freely available libraries for Python is just overwhelming.
On top, basing your work on an open source project pays back. As a student, you can simply install Python on all the machines that matter to you (no additional costs), you can benefit from reading the source of others (great learning experience), and once you are doing some "production" stuff later on, you have the power to fix stuff yourself. With Matlab and other closed-source packages, you always depend on somebody else.
Good luck! | 3 | 3 | 1 | I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is starting to move over to Python, through it's numpy and scipy packages, at an alarming rate.
I feel that I am at the beginning of my 'programming journey' and have been told a few times that I should choose a language and stick to it.
I have basically made up my mind that I want to move over to Python, but would like some informed opinions on my decision?
I am not looking to start a MATLAB vs Python style thread as this will only lead to people giving their opinions as facts and I know this is not the style of this forum. I am simply looking for validation that a move from MATLAB to Python is a good idea for a person in my position.
P.S. I know Python is free and MATLAB is expensive, but that is simply not a good enough reason for me to make this decision. | Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB? | 0.148885 | 0 | 0 | 375 |
23,721,725 | 2014-05-18T12:09:00.000 | 6 | 0 | 1 | 0 | python,matlab,numpy,scipy | 23,722,826 | 4 | false | 0 | 0 | You should consider what particular capabilities you need, and see if Numpy and Scipy can meet them. Matlab's real value isn't in the base package, which is more-or-less matched by a combination of numpy, scipy and matplotlib, but in the various toolboxes one can purchase. For instance, I'm not aware of a Robust Control toolbox equivalent for Python.
Another feature of Matlab that doesn't have an easy-to-use Python equivalent is Simulink, especially the mature real-time hardware-in-the-loop simulation and embedded code-generation. There are open-source projects with similar goals: JModelica is worth looking at, as is Scilab's Scicos.
A final consideration is what is used in the industry you plan to work in.
Having said all that, if you can use Python, you should; it's more fun, and it's (probably) a fundamentally better language. If you do become proficient in Python, switching to Matlab if you have to won't be very difficult.
My experience is that using Python made me a better Matlab programmer; Python's basic facilities (list comprehensions, dictionaries, modules, etc.) made me look for similar capabilities in Matlab, and made me organize my Matlab code better. | 3 | 3 | 1 | I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is starting to move over to Python, through it's numpy and scipy packages, at an alarming rate.
I feel that I am at the beginning of my 'programming journey' and have been told a few times that I should choose a language and stick to it.
I have basically made up my mind that I want to move over to Python, but would like some informed opinions on my decision?
I am not looking to start a MATLAB vs Python style thread as this will only lead to people giving their opinions as facts and I know this is not the style of this forum. I am simply looking for validation that a move from MATLAB to Python is a good idea for a person in my position.
P.S. I know Python is free and MATLAB is expensive, but that is simply not a good enough reason for me to make this decision. | Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB? | 1 | 0 | 0 | 375 |
23,721,725 | 2014-05-18T12:09:00.000 | 3 | 0 | 1 | 0 | python,matlab,numpy,scipy | 23,722,879 | 4 | false | 0 | 0 | I personally feel that working with Python is a lot better, As @bdoering mentioned working on Opensource projects is far better than working on closed source.
Matlab is quite industry specific, and is still not wide spread in the industry. If you work with these softwares, sooner or later you will be stuck between different kinds of them too (ex, Matlab vs Mathematica). However, Syntax will be easy to write and modules will run quickly and simulate. But in the end there will always be a limitation with Matlab. My observation says that using using a software like Matlab may provide you quick simulations of graphs and models, but will limit your learning curve.
Go for Python! | 3 | 3 | 1 | I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is starting to move over to Python, through it's numpy and scipy packages, at an alarming rate.
I feel that I am at the beginning of my 'programming journey' and have been told a few times that I should choose a language and stick to it.
I have basically made up my mind that I want to move over to Python, but would like some informed opinions on my decision?
I am not looking to start a MATLAB vs Python style thread as this will only lead to people giving their opinions as facts and I know this is not the style of this forum. I am simply looking for validation that a move from MATLAB to Python is a good idea for a person in my position.
P.S. I know Python is free and MATLAB is expensive, but that is simply not a good enough reason for me to make this decision. | Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB? | 0.148885 | 0 | 0 | 375 |
23,725,667 | 2014-05-18T18:52:00.000 | 1 | 1 | 1 | 0 | python,cloud,virtual-machine,cloud-storage | 23,761,086 | 1 | false | 0 | 0 | Assuming you want to maintain performance, you probably still want to keep the tools on the machines that actually have to use them. In other words, whatever you are doing will probably run slower if you have to access some 'off-machine' location to get any tool required.
If what you are looking for is a way to more easily manage and distribute your tool updates to multiple machines, you could store all your tools in a repository (like SVN or GIT etc or even a home made one) and have a script on your machines which runs every day (or hour or whatever you require) to update the tools to the latest release.
Ideally you want your update to only include changes since the last update, but most distributed repositories will support this automatically. | 1 | 0 | 0 | I have access to a set of cloud machines. Each of these machines is responsible for specific tasks and has a set of tools for those tasks.
Now these tools are updated weekly, adding new functions. All the tools are implemented in the Python language.
The problem is that I need to upload my code every time to all of these machines. I want to have a common place for the tools for all the VMs. How can I do that?
My initial idea is to just mount a service like Dropbox on every VM. However, I don't know if this is the correct approach for the problem.
Could you please give some suggestions? | One Location for all tools in Cloud Vms | 0.197375 | 0 | 0 | 18 |
23,729,690 | 2014-05-19T04:23:00.000 | 1 | 0 | 1 | 0 | python,data-structures,queue,priority-queue | 23,729,910 | 2 | false | 0 | 0 | No, there isn't a good way to delete an arbitrary value in a priority queue. You can only extract the top (min/max) element from a priority queue.
A set data structure (balanced binary search tree) would be better, because you can find and delete a node in O(log n). | 1 | 1 | 0 | Is there a good way to delete values from the class queue.PriorityQueue() without ruining the priority queue? I guess theoretically I could make a loop that will get all the values until I reach the one I need, and insert all of the other ones back in without including the deleted node. This seems like overkill though. Is there a better way?
Edit: I'm trying to make a priority queue of nodes with the key being the cost to get to said node. If I find a cheaper way of getting to the node I would like to replace it on the priority queue with the cheaper cost. | Finding value in a priority queue in python | 0.099668 | 0 | 0 | 1,728 |
23,729,919 | 2014-05-19T04:53:00.000 | 2 | 0 | 0 | 0 | python,theano,summarization,deep-learning | 23,765,727 | 2 | false | 0 | 0 | I think you need to be a little more specific. When you say "I am unable to figure out how exactly the summary is generated for each document", do you mean that you don't know how to interpret the learned features, or that you don't understand the algorithm? Also, "deep learning techniques" covers a very broad range of models - which one are you actually trying to use?
In the general case, deep learning models do not learn features that are humanly interpretable (albeit, you can of course try to look for correlations between the given inputs and the corresponding activations in the model). So, if that's what you're asking, there really is no good answer. If you're having difficulties understanding the model you're using, I can probably help you :-) Let me know. | 1 | 3 | 1 | I am trying to summarize text documents that belong to the legal domain.
I am referring to the site deeplearning.net on how to implement the deep learning architectures. I have read quite a few research papers on document summarization (both single-document and multi-document) but I am unable to figure out how exactly the summary is generated for each document.
Once the training is done, the network stabilizes during testing phase. So even if I know the set of features (which I have figured out) that are learnt during the training phase, it would be difficult to find out the importance of each feature (because the weight vector of the network is stabilized) during the testing phase where I will be trying to generate summary for each document.
I tried to figure this out for a long time but it's in vain.
If anybody has worked on it or have any idea regarding the same, please give me some pointers. I really appreciate your help. Thank you. | Text summarization using deep learning techniques | 0.197375 | 0 | 0 | 2,167 |
23,730,297 | 2014-05-19T05:32:00.000 | 1 | 1 | 0 | 0 | python,websocket,rabbitmq,tornado,pika | 24,450,966 | 2 | true | 0 | 0 | Since the consumers and producers were queuing - dequeuing from a particular queue, at a point PIKA Client just choked out due to the multiple asynchronous threading systems over the shared queue.
Thus, in case anybody else faces the same issue, run through the following checks in your code:
How many connections are you having? How many channels? how many queues? How many producers- consumers? (These could be determined by the sudo rabbitmqctl list_queues etc )
Once you understand the structure you are using, track the running transactions. For several requests by several users.
Thus, on each transaction, print the thread action so that you understand the Pika activity. Since these threads run asynchronously, overwhelming them in the wrong way causes the Pika client to crash. Thus, create a thread manager to control the threads.
Solution was advised by Gavin Roy & Michael Klishin, from Pika & RabbitMQ respectively. | 1 | 1 | 0 | I have used Pika to integrate the Websocket in Tornado and RabbitMQ. It sucessfully runs on various queues till some time. Then raises the following error:
CRITICAL:pika.connection:Attempted to send frame when closed
I have taken the code reference from https://github.com/haridas/RabbitChat/blob/master/tornado_webapp/rabbit_chat.py
I have gone through my code thoroughly, however fail to understand why it raises such an error.
Can someone help troubleshoot!
Thanks!
Also note changing the backpressure multiplier does not solve the problem. So looking for a real solution for this one. | CRITICAL:pika.connection:Attempted to send frame when closed | 1.2 | 0 | 0 | 1,276 |
23,732,045 | 2014-05-19T07:29:00.000 | 0 | 0 | 1 | 0 | python | 23,732,162 | 3 | false | 0 | 0 | Suppose your string is pathName, then you can use fileName = pathName.split('\\')[-1]. | 1 | 0 | 0 | I need to split the string using delimiter "\"
The string can be in any of the following format:
file://C:\Users\xyz\filename.txt
C:\Users\xyz\filename.txt
I need my script to give the output as "filename.txt"
I tried to use split('\\\\'). It does not work out. Which is the better function to use? | Split string using delimiter "\" in python | 0 | 0 | 0 | 1,866 |
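A minimal sketch of the split approach from the answer above, using the sample paths from the question. Note that in Python source a single literal backslash is written as "\\", or the whole string can be a raw literal; the question's split('\\\\') likely failed because that source literal denotes two consecutive backslashes, which never occur in these paths.

```python
# Split on the backslash separator to get the last path component.
# In Python source, one literal backslash is written as "\\" (or use a raw string).
paths = [r"file://C:\Users\xyz\filename.txt", r"C:\Users\xyz\filename.txt"]
for path in paths:
    filename = path.split("\\")[-1]
    print(filename)  # filename.txt in both cases
```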
23,732,972 | 2014-05-19T08:24:00.000 | 1 | 0 | 1 | 0 | python,c,embed,compatibility | 23,733,016 | 2 | true | 0 | 0 | C extension modules are version specific. Each different version of Python requires a different version of the extension module. You need to compile the extension module from source linking against the headers and libraries for the target Python version. | 2 | 1 | 0 | My program needs to import xxx.so, and this xxx.so file is compiled under Python2.4.
I want to run my program under Python2.7 & Python2.4, but there is an error when import xxx.so under Python2.7, I know that is due to mismatching with the Python version.
My question: should I compile xxx.so file to match each Python version? | How to keep backward compatibility of the dynamic library embed C(.so file) between different Python versions | 1.2 | 0 | 0 | 245 |
23,732,972 | 2014-05-19T08:24:00.000 | 1 | 0 | 1 | 0 | python,c,embed,compatibility | 23,748,901 | 2 | false | 0 | 0 | Yes, you should compile it with the matching Python version using the same compiler to assure ABI compatibility.
It's not a problem on *nix platforms, where the compiler is bundled with the operating system, but it may give you headaches on Windows, where many different compilers are used (MinGW, Visual Studio, etc.).
Python C API documentation describes compilers used by the official builds. | 2 | 1 | 0 | My program needs to import xxx.so, and this xxx.so file is compiled under Python2.4.
I want to run my program under Python2.7 & Python2.4, but there is an error when import xxx.so under Python2.7, I know that is due to mismatching with the Python version.
My question: should I compile xxx.so file to match each Python version? | How to keep backward compatibility of the dynamic library embed C(.so file) between different Python versions | 0.099668 | 0 | 0 | 245 |
23,739,630 | 2014-05-19T13:52:00.000 | 1 | 0 | 0 | 0 | python,google-app-engine,webapp2 | 23,742,459 | 2 | false | 1 | 0 | Preventing it on the server side is not trivial - a second call may hit a different instance. So you need to deal with sessions. The code will get complex quickly.
I would recommend disabling the button before a call and reenabling it upon a response. | 1 | 0 | 0 | I am using GAE for an app that has various submit href buttons, and use javascript to submit.
I am having a real tough time trying to figure out how to prevent multiple submits or double-clicking. I have tried various methods to disable or remove the href with javascript.
But I am thinking if there is maybe a method to prevent this in the backend.
What methods would you recommend I use? | Prevent double submits | 0.099668 | 0 | 0 | 71 |
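The answer above recommends handling this client-side. For the backend idea it calls non-trivial, here is a hedged, single-process sketch of one-time form tokens; all names are illustrative, and as the answer warns, on App Engine the token set would have to live in shared storage (memcache/datastore) to survive requests hitting different instances.

```python
import uuid

# One-time submit tokens held in process-local memory.
# NOTE: with multiple server instances this set must live in shared
# storage, which is exactly the complication the answer warns about.
_issued_tokens = set()

def new_token():
    """Embed this token in the form when rendering the page."""
    token = uuid.uuid4().hex
    _issued_tokens.add(token)
    return token

def accept_submit(token):
    """Return True only for the first submit carrying a given token."""
    if token in _issued_tokens:
        _issued_tokens.discard(token)
        return True
    return False
```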
23,741,133 | 2014-05-19T15:03:00.000 | 3 | 1 | 1 | 0 | python,fixtures,python-unittest | 23,741,307 | 2 | true | 0 | 0 | You can call if cond: self.skipTest('reason') in setUp(). | 1 | 2 | 0 | The unittest library in Python provides the setUp and tearDown functions for setting up variables and other things before and after each test.
How can I run or skip a test based on a condition in setUp? | if condition in setUp() ignore test | 1.2 | 0 | 0 | 1,507 |
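A runnable sketch of the accepted suggestion; the run_it condition flag is illustrative and would normally come from configuration or the environment.

```python
import unittest

class ConditionalTest(unittest.TestCase):
    run_it = False  # illustrative condition flag

    def setUp(self):
        # Skipping here marks every test of the case as skipped, not failed.
        if not self.run_it:
            self.skipTest("condition not met")

    def test_something(self):
        self.assertEqual(1 + 1, 2)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ConditionalTest))
```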
23,741,509 | 2014-05-19T15:21:00.000 | 0 | 1 | 0 | 0 | python,django-testing,django-1.4,gitlab-ci | 25,917,374 | 2 | false | 1 | 0 | Do you have Django installed on the testrunner?
If not, try to configure a virtualenv for your testsuite. Best might be (if you have changing requirements) to make the setup and installation of this virtualenv part of your testsuite. | 1 | 3 | 0 | I have a project in Django 1.4 and I need to run Django tests in a continuous integration system (GitLab 6.8.1 with Gitlab CI 4.3).
Gitlab Runner is installed on the server with the project.
When I run:
cd project/app/ && ./runtest.sh test some_app
I get:
Traceback (most recent call last):
File "manage.py", line 2, in <module>
from django.core.management import execute_manager
ImportError: No module named django.core.management
How may I run the tests? | Running django test on the gitlab ci | 0 | 0 | 0 | 3,987 |
23,743,546 | 2014-05-19T17:15:00.000 | 0 | 0 | 0 | 0 | java,android,python,database,jsp | 23,743,682 | 2 | false | 1 | 0 | We are planning an android app in this summer and I'm considering
developing it with Python
Native Android apps are developed using Java.
However, the service provided by the app is supposed to be added to
the website made with JSP later. I'm afraid the difference of the
language would cause any obstacle.
You will need to create an API that communicates between Android and your database. | 1 | 1 | 0 | I'm a student who works part-time at a start-up, which runs a website made with JSP.
We are planning an android app in this summer and I'm considering developing it with Python, which I'm interested in.
However, the service provided by the app is supposed to be added to the website made with JSP later. I'm afraid the difference of the language would cause any obstacle.
Since they will use a common database, I think using different languages to access it won't have any problem. I want to make sure that my guess is correct.
Pardon my poor English. I'd appreciate your answers. | Can JSP and Python used together for same database? | 0 | 0 | 0 | 109 |
23,744,128 | 2014-05-19T17:52:00.000 | -1 | 0 | 0 | 0 | python,sqlite,unit-testing | 23,744,831 | 1 | false | 0 | 0 | I don't understand your problem. Why do you care that it's serverless?
My standard technique for this is:
use SQLAlchemy
in tests, configure it with sqlite:/// or sqlite:///:memory: | 1 | 0 | 0 | Hi I am trying to write python functional tests for our application. It involves several external components and we are mocking them all out.. We have got a better framework for mocking a service, but not for mocking a database yet.
SQLite is very lightweight and I thought of using it, but it is serverless. Is there a way I can write some Python wrapper to make it act as a server, or should I look at other options like HSQLDB? | how to do database mocking or make sqlite run on localhost? | -0.197375 | 1 | 0 | 535 |
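The answer suggests SQLAlchemy configured with sqlite:///:memory:. The same serverless in-memory idea, sketched with only the stdlib sqlite3 module (the schema is invented for illustration):

```python
import sqlite3

# ":memory:" gives a throwaway per-connection database:
# ideal for tests, no server process needed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
```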
23,748,829 | 2014-05-19T23:38:00.000 | 1 | 0 | 1 | 0 | python,constructor,typechecking,ooad,duck-typing | 23,748,899 | 1 | false | 0 | 0 | That statement about returning an object in a valid state is broadly true, I suppose.
Basically, the constructor should setup the members of your class the way you wish based on the arguments that were passed in, similar to other languages.
It is possible to do type checking in Python via isinstance, although often times it's really not needed nor desired.
Really, with duck typing, the general procedure would be to write the code, including the constructor, as if the object is a duck. That is, that is has the methods and behaviors that you are expecting. If an object is passed in that doesn't have a particular method, say, that will raise a runtime exception.
Also, keep in mind the __enter__ and __exit__ methods, which are used with the with statement. They can help clean up resources and are an alternative to otherwise-necessary try/except/finally blocks.
"The role of a constructor is to assure that new class instances are returned (by the constructor) in a valid state"
Since Python uses duck typing, I wonder, for example, how I can prevent my object from receiving wrong or invalid arguments without explicit type checking, and also whether that would leave the risk of an invalid instance travelling further inside the program, possibly causing errors later than would be desired. | What are the responsibilities of a class constructor in python? | 0.197375 | 0 | 0 | 122 |
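A small sketch of the two styles the answer contrasts: an explicit isinstance check in the constructor versus relying on duck typing. The Account classes are invented for illustration.

```python
class Account:
    """Constructor guarantees the instance starts in a valid state."""

    def __init__(self, balance):
        # Explicit check: fail fast instead of carrying a bad value around.
        if not isinstance(balance, (int, float)):
            raise TypeError("balance must be a number")
        if balance < 0:
            raise ValueError("balance cannot be negative")
        self.balance = balance


class DuckAccount:
    """Duck-typed: any value is accepted; a bad argument only fails
    later, at the point of use (a runtime exception, as the answer notes)."""

    def __init__(self, balance):
        self.balance = balance
```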
23,750,850 | 2014-05-20T04:07:00.000 | 1 | 1 | 0 | 0 | python,encryption,obfuscation,dropbox-api | 23,751,446 | 1 | true | 0 | 0 | To prevent casual misuse of your app secret (like someone who copy/pastes code not realizing they're supposed to create their own app key/secret pair), it's probably worth doing a little obfuscation, but as you point out, that won't prevent a determined individual from obtaining the app secret.
In a client-side app (like a mobile or desktop app), there's really nothing you can do to keep your OAuth app secret truly secret. That said, the consensus seems to be that this doesn't really matter. In fact, in OAuth 2, the recommended flow for client-side apps is the "token" or "implicit" flow, which doesn't use the app secret at all. | 1 | 1 | 0 | I've developed a little Python Dropbox app but I have no I idea how to hide the app key and app secret. Until I solve this problem I'm not sure how I can ship my app as this seems to be a significant security threat.
I know it is hard to obfuscate code, most especially Python so I'm not really sure that that is an option.. but what else could I do? I thought about using some form of encryption and/or storing them on a server to be retrieved when the app starts. Is it possible to write the part that deals with the keys in another language that's more easily to obfuscate like C? As I don't know much about encryption, I'm not sure if any of these options are feasible or not. | Python Dropbox app, what should I do about app key and app secret? | 1.2 | 0 | 0 | 341 |
23,754,108 | 2014-05-20T07:59:00.000 | 0 | 0 | 0 | 0 | python,database,sockets,pyside,qtsql | 23,754,331 | 1 | false | 0 | 1 | I'm not familiar with PySide .. but the idea is
you need to build a function that, when an internet connection is available, synchronizes your local database with the online database. On the server side you need to build a script that can handle requests (POST/GET) to receive the scores and store them in the database; I suggest MySQL.
Hope that helps | 1 | 0 | 0 | I am working on my Python project using PySide as my Ui language. My projet is a game which require an internet connection to update the users'score and store in the database.
My problem is how I can store my database on the internet. I mean that all users can access this information when they are connected to the internet (when they are playing my game), and the information/database must be updated all the time.
I am not sure which database is the most appropriate, how to store this information/database in the internet, how to access this information.
I am using Python and PySide.
For the database, I currently use PySide.QtSql .
Thank you for answer(s) or suggestion(s). | Using Database with Pyside and Socket | 0 | 1 | 0 | 193 |
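As a hedged sketch of the client side of the answer's suggestion, here is how the game could hand a score to a server-side script over HTTP using only the stdlib. The endpoint URL and JSON shape are invented for illustration.

```python
import json
import urllib.request

API_URL = "http://example.com/api/scores"  # hypothetical endpoint

def build_score_request(player, score, url=API_URL):
    """Build a POST request carrying the score as JSON."""
    payload = json.dumps({"player": player, "score": score}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})

# When online, the game would send it with:
#     urllib.request.urlopen(build_score_request("alice", 42))
```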
23,756,453 | 2014-05-20T09:51:00.000 | 0 | 1 | 0 | 1 | python,sockets,bluetooth,connection,bluez | 24,003,966 | 1 | false | 0 | 0 | Setting the class of device in my program at startup did not work, as it got reset. To make the HID server work on BlueZ I had to set the class of device right before waiting for connections. I cannot say why it gets reset, but I know it does. Maybe somebody else can tell why.
When I try to connect my IPhone to my computer, I receive an incoming connection (as can be seen in hcidump), but somehow the connection is terminated by remote host. My sockets never get to accept a client connection. Can you help me please?
hcidumps:
After starting my programm:
HCI Event: Command Complete (0x0e) plen 4
Write Extended Inquiry Response (0x03|0x0052) ncmd 1
status 0x00
When trying to connect IPhone:
HCI Event: Connect Request (0x04) plen 10
bdaddr 60:D9:C7:23:96:FF class 0x7a020c type ACL
HCI Event: Command Status (0x0f) plen 4
Accept Connection Request (0x01|0x0009) status 0x00 ncmd 1
HCI Event: Connect Complete (0x03) plen 11
status 0x00 handle 11 bdaddr 60:D9:C7:23:96:FF type ACL encrypt 0x00
HCI Event: Command Status (0x0f) plen 4
Read Remote Supported Features (0x01|0x001b) status 0x00 ncmd 1
HCI Event: Read Remote Supported Features (0x0b) plen 11
status 0x00 handle 11
Features: 0xbf 0xfe 0xcf 0xfe 0xdb 0xff 0x7b 0x87
HCI Event: Command Status (0x0f) plen 4
Read Remote Extended Features (0x01|0x001c) status 0x00 ncmd 1
HCI Event: Read Remote Extended Features (0x23) plen 13
status 0x00 handle 11 page 1 max 2
Features: 0x07 0x00 0x00 0x00 0x00 0x00 0x00 0x00
HCI Event: Command Status (0x0f) plen 4
Remote Name Request (0x01|0x0019) status 0x00 ncmd 1
HCI Event: Remote Name Req Complete (0x07) plen 255
status 0x00 bdaddr 60:D9:C7:23:96:FF name 'iPhone'
HCI Event: Command Complete (0x0e) plen 10
Link Key Request Reply (0x01|0x000b) ncmd 1
status 0x00 bdaddr 60:D9:C7:23:96:FF
HCI Event: Encrypt Change (0x08) plen 4
status 0x00 handle 11 encrypt 0x01
HCI Event: Disconn Complete (0x05) plen 4
status 0x00 handle 11 reason 0x13
Reason: Remote User Terminated Connection | Bluetooth Socket no incoming connection | 0 | 0 | 0 | 689 |
23,759,202 | 2014-05-20T11:57:00.000 | -2 | 0 | 1 | 0 | python | 48,170,905 | 4 | false | 0 | 0 | Python does not treat -2 as a single number literal; this is by design, as mentioned in the docs.
-2 is interpreted as -(2) (unary minus applied to the positive number 2).
That usually doesn't cause a problem, but ** has higher precedence than unary -, so with - interpreted as an operator instead of part of the literal, -2 ** 2 is parsed as -(2 ** 2) and evaluates to -4 instead of 4.
Is this Python's thing or my math is that terrible? | Calculation error with pow operator | -0.099668 | 0 | 0 | 1,395 |
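The precedence rule behind the observed result can be checked directly: ** binds tighter than unary minus.

```python
# ** binds tighter than unary minus, so -2 ** 2 is -(2 ** 2).
print(-2 ** 2)     # -4
print(-(2 ** 2))   # -4, the same parse made explicit
print((-2) ** 2)   # 4, parenthesize to negate first
```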
23,763,365 | 2014-05-20T14:59:00.000 | 2 | 0 | 0 | 0 | python,django | 23,763,477 | 2 | false | 1 | 0 | This has nothing to do with the Django template, but how you define the variable in the first place.
Backslashes are only "interpreted" when you specify them as escape sequences in string literals in your Python code. So given your Python code above, you can either use the double backslash, or use a raw string.
If you were loading the string "fred\xbf" from your database and outputting it in a template, it would not be "escaped". | 1 | 2 | 0 | I'm using Python 2.7 and Django 1.4
If I have a string variable result = "fred\xbf", how do I tell the Django template to display "fred\xbf" rather than process the backslash and display some strange character?
I know I can escape the backslash: "fred\\xbf" , but can I get the Django template to understand I want the backslash not to be processed? | How do I tell Python not to interpret backslashes in strings? | 0.197375 | 0 | 0 | 2,703 |
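A short sketch of the two literal forms mentioned in the answer (doubled backslash versus raw string): both produce the same eight-character value, so either one renders as fred\xbf in a template.

```python
escaped = "fred\\xbf"   # escaped backslash
raw = r"fred\xbf"       # raw string literal, identical value
print(escaped == raw)   # True
print(len(raw))         # 8: f, r, e, d, \, x, b, f
```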
23,764,710 | 2014-05-20T15:56:00.000 | 2 | 1 | 1 | 0 | python,open-source | 23,764,809 | 1 | true | 0 | 0 | Not sure if this is an appropriate question for SO - you might get voted down. But ...
Whenever I have seen this question, the answer is almost always:
find a project you like / you're interested in
find something in that project that you feel you can fix / enhance (have a look through their bug tracker)
fork the project (github makes this easy)
make the change, find out what is appropriate for that project (documentation, unit tests, ...)
submit the change back to the project (github has "request pull")
Good luck! | 1 | 0 | 0 | I know Python and want to contribute to open-source projects that feature Python. Can anyone help me with where to contribute and how?
I have already googled and found GitHub and code.google.com as good places to contribute, but I don't know how to start.
Suggest how to get started. | how to contribute on open source project featuring python | 1.2 | 0 | 0 | 435 |
23,766,658 | 2014-05-20T17:50:00.000 | 21 | 1 | 0 | 1 | python,rabbitmq,celery,task-queue,pika | 27,367,747 | 2 | false | 0 | 0 | I’m going to add an answer here because this is the second time today someone has recommended celery when not needed based on this answer I suspect. So the difference between a distributed task queue and a broker is that a broker just passes messages. Nothing more, nothing less. Celery recommends using RabbitMQ as the default broker for IPC and places on top of that adapters to manage task/queues with daemon processes. While this is useful especially for distributed tasks where you need something generic very quickly. It’s just construct for the publisher/consumer process. Actual tasks where you have defined workflow that you need to step through and ensure message durability based on your specific needs, you’d be better off writing your own publisher/consumer than relying on celery. Obviously you still have to do all of the durability checking etc. With most web related services one doesn’t control the actual “work” units but rather, passes them off to a service. Thus it makes little sense for a distributed tasks queue unless you’re hitting some arbitrary API call limit based on ip/geographical region or account number... Or something along those lines. So using celery doesn’t stop you from having to write or deal with state code or management of workflow etc and it exposes the AMQP in a way that makes it easy for you to avoid writing the constructs of publisher/consumer code.
So, in short: if you need a simple task queue to chew through work and you aren't really concerned about the nuances of performance, the intricacies of durability through your workflow, or the actual publish/consume processes, Celery works. If you are just passing messages to an API or service you don't actually control, sure, you could use Celery, but you could just as easily whip up your own publisher/consumer with Pika in a couple of minutes. If you need something robust or that adheres to your own durability scenarios, write your own publisher/consumer code like everyone else.
I spent some time trying to get Celery to do what I wanted and couldn't make it work.
Then I tried using Pika and things just worked, flawlessly, and within minutes.
Is there anything I'm missing out on by using Pika instead of Celery? | RabbitMQ: What Does Celery Offer That Pika Doesn't? | 1 | 0 | 0 | 12,748 |
23,769,964 | 2014-05-20T21:06:00.000 | 3 | 0 | 0 | 0 | python,django,django-staticfiles,static-files,collectstatic | 23,770,780 | 1 | true | 1 | 0 | Because of the pluggable app philosophy of django made apparent by their whole encapsulated app structure (urls, views, models, templates, etc., are app specific).
You can see this philosophy pressed further in the latest django project structure where project names are not to be included in the imports / apps are imported globally: from myapp import models and not from project.myapp import models
If you install an app from a third party, you don't need to painstakingly figure out where the application lives; Django can simply move its static files to your environment-specific static file serving location.
Simply add to INSTALLED_APPS and you can gain almost all of the functionality of a third party app whose files live who knows where, from templates to models to static files.
PS: I personally don't use the app-directory static file system unless I am making an app pluggable. It's harder to find and maintain IMO when files live absolutely everywhere. | 1 | 1 | 0 | Thinking about a project that has an app called 'website', and has a 'static' folder inside, that contains all the project's static files, why do I have to collect all static files and put at another folder, instead of just map the static folder (website/static) on my webserver? What's the real need to Django collect static files? Just because there are a lot of apps, and you could put your static file in different folders? Or, are there more than that involved? | Why django has collectstatic? | 1.2 | 0 | 0 | 238 |
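As a minimal settings sketch of the mechanism described above, collectstatic gathers every installed app's static files into one directory the web server can map (the paths are illustrative):

```python
# settings.py fragment (illustrative paths)
STATIC_URL = "/static/"
# `manage.py collectstatic` copies every app's static/ files here,
# so one directory can be served regardless of where the apps live.
STATIC_ROOT = "/var/www/mysite/static"
```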
23,771,759 | 2014-05-20T23:45:00.000 | 1 | 0 | 1 | 1 | python,py2exe | 23,772,838 | 1 | true | 0 | 0 | An exe compiled by py2exe isn't compiled in the same sense as a c/c++ application is. When you run py2exe's setup command, it collects your dependencies and packages them together. Depending on the options supplied, it can create an archive file that contains the .py[odc] files that comprise your app, but they are still on the user system. They can be accessed, decompiled, inspected, or modified. What a user does with your code once they have it is out of your hands. You should not deploy sensitive information, passwords, private keys, or anything else that might cause damage in the "wrong" hands. | 1 | 0 | 0 | In my setup, the py2exe bundles all the dependency modules into a zip and I can see them on the deployed machine. (*.pyo)
My script windows_app.py is specified in the setup.py as setup(windows = ["windows_app.py"]).
However I do not see windows_app.pyo on the deployed box anywhere (is this correct?).
I do see "windows_app.exe" though which is expected.
My question here is, can I keep my private password in the windows_app.py (which goes into windows_app.exe) and assume it is a better place as the .pyo are easily decompilable. | py2exe executable containing private password | 1.2 | 0 | 0 | 213 |
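Since the .pyo files bundled by py2exe are easy to decompile, a common alternative, sketched here with the stdlib only, is to read the secret at runtime rather than baking it into the source. The environment-variable name is invented for illustration.

```python
import getpass
import os

def get_password(env_var="APP_PASSWORD"):
    """Prefer an environment variable; fall back to prompting the user."""
    secret = os.environ.get(env_var)
    if secret is None:
        secret = getpass.getpass("Password: ")
    return secret
```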
23,775,755 | 2014-05-21T06:40:00.000 | 1 | 0 | 1 | 0 | python,pip,gevent | 23,775,864 | 3 | false | 0 | 0 | What you're seeing is because gevent does not support Python 3 yet. This is why you're getting the error right now. | 1 | 1 | 0 | I'm new to Python. I installed Python 3.4 and tried to run the command
"pip install gevent", but it shows the error "TypeError: unorderable types: NoneType() >= str()". How do I resolve this?
thanks in advance | installing gevent in windows | 0.066568 | 0 | 0 | 1,291 |
23,781,823 | 2014-05-21T11:24:00.000 | 1 | 0 | 1 | 0 | python,file,binary | 23,781,948 | 2 | true | 0 | 0 | Under Python 2, the only difference binary mode makes is that newlines are not translated when writing; in text mode, \n would be translated to the platform-dependent line separator.
In other words, just write your ASCII byte strings directly to your binary file; there is no difference between your ASCII data and the binary data as far as Python is concerned.
I know that open("myfile", "rb") does open myfile in the binary mode, except that I can't find a solution how to go about this! | How to write binary and asci data into a file in python? | 1.2 | 0 | 0 | 782 |
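A small sketch of mixing ASCII text and raw binary in one file opened with "wb", as the answer describes; the record layout here is invented.

```python
import os
import struct
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mixed.bin")
with open(path, "wb") as f:
    f.write(b"HEADER v1\n")          # ASCII bytes
    f.write(struct.pack("<I", 42))   # 4 raw little-endian binary bytes
    f.write(b"trailing ascii")       # more ASCII

with open(path, "rb") as f:
    data = f.read()
print(data)
```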
23,783,185 | 2014-05-21T12:28:00.000 | 0 | 1 | 0 | 1 | python,path,operating-system,listdir | 23,785,492 | 2 | false | 0 | 0 | I think you are asking about how to get the relative path instead of absolute one.
Absolute path is the one like: "/home/workspace"
Relative looks like the following "./../workspace"
You should construct the path from the dir where your script is (/home/workspace/tests) to the dir that you want to access (/home/workspace); that means, in this case, going one step up in the directory tree.
You can get this by executing:
os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))
Note that os.path.join("..", os.path.abspath(__file__)) would not work: when a later component passed to os.path.join is an absolute path, everything before it is discarded, so that expression just reproduces the script's own directory.
In this manner you can access any directory without knowing its absolute path, only knowing where it resides relative to your executed file.
I am using os.listdir('/home/workspace/tests') in abc.py to list all the files (test1.py, test2.py...)
I want to generate the path '/home/workspace/tests' or even '/home/workspace' instead of hardcoding it.
I tried os.getcwd() and os.path.dirname(os.path.abspath(__file__)) but this instead generates the path where the test script is being run.
How to go about it? | How to generate path of a directory in python | 0 | 0 | 0 | 1,068 |
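A corrected sketch of the parent-directory computation, wrapped in a function so the intent is clear; in abc.py you would pass __file__.

```python
import os

def workspace_dir(script_path):
    """Return the parent of the directory containing script_path."""
    script_dir = os.path.dirname(os.path.abspath(script_path))
    return os.path.abspath(os.path.join(script_dir, os.pardir))

# In abc.py:
#     tests_dir = os.path.join(workspace_dir(__file__), "tests")
#     print(os.listdir(tests_dir))
print(workspace_dir("/home/workspace/tests/abc.py"))
```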
23,784,161 | 2014-05-21T13:09:00.000 | 3 | 0 | 1 | 0 | python,comments,canopy | 23,788,272 | 1 | true | 0 | 0 | See "Toggle Block Comment" in the Edit Menu (with platform-specific shortcut). | 1 | 2 | 0 | In emacs you can highlight a section and select a menu function to prepend a "#" to every line in the selection. I am trying to give up my emacs addiction and use the IDE in Canopy but can't find this functionality. Is it missing or am I just having trouble finding it? | Comment out code block in Enthought Canopy | 1.2 | 0 | 0 | 2,979 |
23,784,578 | 2014-05-21T13:25:00.000 | 0 | 0 | 0 | 0 | python,pandas | 23,784,889 | 1 | false | 0 | 0 | Seems to me that it may depend on what your subsequent use case is. But IMHO I would make each column unique type otherwise functions such as group by with totals and other common Pandas functions simply won't work. | 1 | 0 | 1 | I have huge pandas DataFrames I work with. 20mm rows, 30 columns. The rows have a lot of data, and each row has a "type" that uses certain columns. Because of this, I've currently designed the DataFrame to have some columns that are mixed dtypes for whichever 'type' the row is.
My question is, performance wise, should I split out mixed dtype columns into two separate columns or keep them as one? I'm running into problems getting some of these DataFrames to even save(to_pickle) and trying to be as efficient as possible.
The columns could be mixes of float/str, float/int, float/int/str as currently constructed. | Pandas performance: Multiple dtypes in one column or split into different dtypes? | 0 | 0 | 0 | 581 |
23,785,374 | 2014-05-21T13:56:00.000 | 0 | 0 | 0 | 0 | python,django,django-cms | 23,785,693 | 2 | false | 1 | 0 | No, this is not supported at the moment. | 1 | 2 | 0 | Is it possible to restrict a placeholder type without defining it in settings.py?
Something like: {% placeholder "home_banner_title" image %} | Specify Django CMS placeholder type (text,picture,link) in template | 0 | 0 | 0 | 363 |
23,786,694 | 2014-05-21T14:48:00.000 | 1 | 0 | 0 | 0 | python,pandas | 38,926,745 | 2 | false | 0 | 0 | Yes, you will, Pandas sources those dependencies. | 2 | 0 | 1 | Pandas has a number of dependencies, e.g matplotlib, statsmodels, numexpr etc.
Say I have Pandas installed, and I update many of its dependencies, if I don't update Pandas, could I run into any problems? | Updating Pandas dependencies after installing pandas | 0.099668 | 0 | 0 | 184 |
23,786,694 | 2014-05-21T14:48:00.000 | 3 | 0 | 0 | 0 | python,pandas | 23,787,041 | 2 | true | 0 | 0 | If your version of pandas is old (i.e., not 0.13.1), you should definitely update it to take advantage of any new features/optimizations of the dependencies, and any new features/bug fixes of pandas itself. It is a very actively-maintained project, and there are issues with older versions being fixed all the time.
Of course, if you have legacy code that depends on an older version, you should test it in a virtualenv with the newer versions of pandas and the dependencies before updating your production libraries, but at least in my experience the newer versions are pretty backwards-compatible, as long as you're not relying on buggy behavior. | 2 | 0 | 1 | Pandas has a number of dependencies, e.g matplotlib, statsmodels, numexpr etc.
Say I have Pandas installed, and I update many of its dependencies, if I don't update Pandas, could I run into any problems? | Updating Pandas dependencies after installing pandas | 1.2 | 0 | 0 | 184 |
23,790,052 | 2014-05-21T17:25:00.000 | 0 | 0 | 0 | 0 | python,html | 23,790,111 | 1 | true | 1 | 0 | What you are talking about seems to be much more of the job of a browser extension. Javascript will be much more appropriate, as @brbcoding said. Beautiful Soup is for scraping web pages, not for modifying them on the client side in a browser. To be honest, I don't think you can use Python for that. | 1 | 1 | 0 | This is my first StackOverflow post so please bear with me.
What I'm trying to accomplish is a simple program written in python which will change all of a certain html tag's content (ex. all <h1> or all <p> tags) to something else. This should be done on an existing web page which is currently open in a web browser.
In other words, I want to be able to automate the inspect element function in a browser which will then let me change elements however I wish. I know these changes will just be on my side, but that will serve my larger purpose.
I looked at Beautiful Soup and couldn't find anything in the documentation which will let me change the website as seen in a browser. If someone could point me in the right direction, I would be greatly appreciative! | Change website text with python | 1.2 | 0 | 1 | 2,023 |
23,793,628 | 2014-05-21T20:56:00.000 | 1 | 0 | 1 | 0 | python,nlp,nltk | 23,816,393 | 3 | false | 0 | 0 | I do not think your "algo" is even doing entity recognition... however, stretching the problem you presented quite a bit, what you want to do looks like coreference resolution in coordinated structures containing ellipsis. Not easy at all: start by googling for some relevant literature in linguistics and computational linguistics. I use the standard terminology from the field below.
In practical terms, you could start by assigning the nearest antecedent (the most frequently used approach in English). Using your examples:
first extract all the "entities" in a sentence
from the entity list, identify antecedent candidates ("litigation", etc.). This is a very difficult task, involving many different problems... you might avoid it if you know in advance the "entities" that will be interesting for you.
finally, you assign (resolve) each anaphora/cataphora to the nearest antecedent. | 1 | 0 | 1 | First: Any recs on how to modify the title?
I am using my own named entity recognition algorithm to parse data from plain text. Specifically, I am trying to extract lawyer practice areas. A common sentence structure that I see is:
1) Neil focuses his practice on employment, tax, and copyright litigation.
or
2) Neil focuses his practice on general corporate matters including securities, business organizations, contract preparation, and intellectual property protection.
My entity extraction is doing a good job of finding the key words, for example, my output from sentence one might look like this:
Neil focuses his practice on (employment), (tax), and (copyright litigation).
However, that doesn't really help me. What would be more helpful is if i got an output that looked more like this:
Neil focuses his practice on (employment - litigation), (tax - litigation), and (copyright litigation).
Is there a way to accomplish this goal using an existing python framework such as nltk (after my algo extracts the practice areas) can I use ntlk to extract the other words that my "practice areas" modify in order to get a more complete picture? | How to extract meaning from sentences after running named entity recognition? | 0.066568 | 0 | 0 | 1,842 |
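A toy sketch of the answer's "nearest antecedent" idea, specialized to the asker's coordinated-list examples. The `heads` list and the endswith-based sharing are assumptions for illustration only; real coreference resolution in coordinated structures is far harder, as the answer notes:

```python
# Toy sketch: in "employment, tax, and copyright litigation" only the last
# conjunct carries the head noun; copy ("distribute") it onto the bare ones.
def distribute_head(conjuncts, heads=('litigation', 'matters')):
    last_word = conjuncts[-1].split()[-1]
    head = last_word if last_word in heads else None
    out = []
    for c in conjuncts:
        if head and not c.endswith(head):
            out.append('{} - {}'.format(c, head))
        else:
            out.append(c)
    return out

print(distribute_head(['employment', 'tax', 'copyright litigation']))
# ['employment - litigation', 'tax - litigation', 'copyright litigation']
```

This reproduces the asker's desired output for sentence 1, but only because the candidate heads are known in advance — exactly the shortcut the answer suggests.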
23,797,195 | 2014-05-22T03:07:00.000 | 2 | 0 | 1 | 0 | python,multithreading | 23,798,148 | 4 | false | 0 | 0 | I don't like the coupling of the sleep(60) and the timedelta(minutes=1). I think you could get timing inaccuracies which over time could lead to data points being skipped in the output.
I would instead take advantage of two facts:
It's ok to wait until the minute is up before writing the file
Time moves only forwards, so once the minute is done you know the data is effectively immutable
Bearing this in mind, you know the time after which input for that minute is complete, and you can then write the file with the last 60 minutes of data. Your second thread just sleeps until that condition is true. Then it wakes up, changes the condition to the next minute, processes the data, and goes back to waiting for the condition to trigger again. In essence, you've just written a simple queue, synchronised on minute boundaries. | 1 | 4 | 0 | I have an infinite number of entries being fed through a web interface. On a per-minute basis, I'd like to dump elements that were received in the last hour into a file named appropriately (datetime.now().strftime('%Y_%m_%d_%H_%M')).
Here's my design so far:
Thread-1
Keeps receiving input and adding to a data_dict of structure: {datetime.now().strftime('%Y_%m_%d_%H_%M'): []}
Thread-2
Sleeps for a minute and writes contents of data_dict[(datetime.now() - timedelta(minutes=1)).strftime('%Y_%m_%d_%H_%M')]
Question
Is using dict in this manner thread-safe?
Is this a good design? :) | Logging infinite data on periodic intervals | 0.099668 | 0 | 0 | 704 |
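A sketch of the answer's "queue synchronised on minute boundaries" idea, using the key scheme from the question. The lock is my addition, so the writer's pop and the receiver's append can't interleave; the fixed clock in the demo is just to keep it deterministic:

```python
import threading
from datetime import datetime, timedelta

lock = threading.Lock()

def flush_completed_minute(data_dict, now):
    """Pop and return the entries for the minute that has just finished.

    Call this any time after `now` rolls over a minute boundary; since time
    only moves forward, that minute's list is effectively immutable.
    """
    key = (now - timedelta(minutes=1)).strftime('%Y_%m_%d_%H_%M')
    with lock:
        return key, data_dict.pop(key, [])

# demo with a fixed clock so the result is deterministic
now = datetime(2014, 5, 23, 12, 1)
data = {'2014_05_23_12_00': ['a', 'b'], '2014_05_23_12_01': ['c']}
print(flush_completed_minute(data, now))  # ('2014_05_23_12_00', ['a', 'b'])
```

The second thread would sleep until the next minute boundary, call this, and write the returned list to the appropriately named file.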
23,801,986 | 2014-05-22T08:49:00.000 | 0 | 0 | 1 | 0 | python,win32com | 23,802,237 | 2 | false | 0 | 0 | You can have multi dimensional arrays or objects, your choice :)
arr = []; arr.append([1,2]); print arr;
would output
[[1,2]] | 1 | 1 | 0 | I need to read values from an Excel worksheet into a 2D array. Can anyone tell me how to do this using Python win32com? | Two dimensional array in python | 0 | 0 | 0 | 213
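Expanding the answer a little — a 2D array in plain Python is just a list of row lists, indexed as grid[row][col]:

```python
# Build a 3x4 grid row by row, then index it as grid[row][col].
grid = []
for r in range(3):
    grid.append([r * 10 + c for c in range(4)])

print(grid[1][2])               # 12 -> row 1, column 2
print(len(grid), len(grid[0]))  # 3 rows, 4 columns
```

With win32com specifically, reading a block of cells in one call (e.g. the worksheet's UsedRange.Value, if I remember the Excel COM interface correctly) returns a tuple of row tuples, and `[list(row) for row in ...]` turns that into exactly this kind of structure.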
23,807,459 | 2014-05-22T12:55:00.000 | 1 | 0 | 1 | 1 | python,vm-implementation | 30,846,061 | 2 | false | 0 | 0 | It may depend on Python implementation such as Pypy, Jython. In CPython, you have to use a separate process if you want an independent interpreter otherwise at the very least GIL is shared.
multiprocessing, concurrent.futures modules allow you to run arbitrary Python code in separate processes and to communicate with the parent easily. | 1 | 3 | 0 | Does anyone know how to launch a new python virtual machine from inside a python script, and then interact with it to execute code in a completely separate object space? In addition to code execution, I'd like to be able to access the objects and namespace on this virtual machine, look at exception information, etc.
I'm looking for something similar to python's InteractiveInterpreter (in the code module), but as far as I've been able to see, even if you provide a separate namespace for the interpreter to run in (through the locals parameter), it still shares the same object space with the script that launched it. For instance, if I change an attribute of the sys module from inside InteractiveInterpreter, the change takes affect in the script as well. I want to completely isolate the two, just like if I was running two different instances of the python interpreter to run two different scripts on the same machine.
I know I can use subprocess to actually launch python in a separate process, but I haven't found any good way to interact with it the way I want. I imagine I could probably invoke it with '-i' and push code to it through it's stdin stream, but I don't think I can get access to its objects at all. | Programmatically launch and interact with python virtual machine | 0.099668 | 0 | 0 | 923 |
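A minimal sketch of the separate-process route — subprocess here for brevity; the multiprocessing/concurrent.futures modules the answer mentions give richer two-way object passing. Everything below the `-c` boundary runs in a genuinely independent interpreter with its own object space:

```python
import json
import subprocess
import sys

CHILD = """
import json
ns = {}
exec("x = 2 + 2", ns)               # arbitrary code, fresh namespace
print(json.dumps({"x": ns["x"]}))   # report results back over stdout
"""

proc = subprocess.run([sys.executable, "-c", CHILD],
                      capture_output=True, text=True, check=True)
result = json.loads(proc.stdout)
print(result)  # {'x': 4} -- computed in a completely separate interpreter
```

Exception details can be shipped back the same way (serialize the traceback in the child); full object access, though, really does require one of the richer IPC mechanisms.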
23,808,446 | 2014-05-22T13:34:00.000 | 1 | 0 | 0 | 0 | python,numpy,statistics,scipy,probability | 23,809,181 | 1 | false | 0 | 0 | If I understand what you're asking, check out Gaussian Mixture Models and Expectation Maximization. I don't know of any pre-implemented versions of these in Python, although I haven't looked too hard. | 1 | 1 | 1 | I have a 2D data and it contains five peaks. Could I fit five 2D Gaussians function to obtain the peaks? In my problem, the peaks do not refer to the clustering problem. Which I think EM would be an appropriate answer for it.
In my case I measure a variable in x-y space and it shows a maximum in more than one position. Is fitting a Fourier series or using the Expectation-Maximization method still an applicable solution to my problem?
In order to make my likelihood, do I need to just add up the five 2D Gaussians distributions with x and y and the height of each peak as variables? | Define a 2D Gaussian probability with five peaks | 0.197375 | 0 | 0 | 190 |
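The model the question describes — a plain sum of five 2D Gaussians, each with its own height and centre — can be written down directly; a fitter (scipy.optimize, say) would then adjust those parameters. A sketch with made-up peak parameters:

```python
import math

def gauss2d(x, y, amp, x0, y0, sx, sy):
    return amp * math.exp(-((x - x0) ** 2 / (2.0 * sx ** 2) +
                            (y - y0) ** 2 / (2.0 * sy ** 2)))

def five_peak_model(x, y, params):
    # params: five (amp, x0, y0, sigma_x, sigma_y) tuples -> sum of Gaussians
    return sum(gauss2d(x, y, *p) for p in params)

peaks = [(1.0, 0, 0, 1, 1), (0.5, 3, 3, 1, 1), (0.8, -2, 1, 1, 1),
         (0.3, 1, -2, 1, 1), (0.9, -3, -3, 1, 1)]
print(five_peak_model(0, 0, peaks))  # dominated by the peak centred at (0, 0)
```

So yes: summing the five Gaussians with each peak's height and position as free parameters is exactly how the model (and hence the likelihood) is built.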
23,810,221 | 2014-05-22T14:49:00.000 | 2 | 0 | 1 | 0 | python,spyder | 23,849,480 | 1 | true | 0 | 0 | There are many options for running a script in spyder. Try pressing F6 on your script to have a look at them.
Specifically, I get the same behavior if I run a blocking script "in the current interpreter" instead of "in a new interpreter". A single plt.show() can block a script from returning, for example.
If you can, I think that the best way to run a script in spyder is to run in a new interpreter, because this way you're sure that you don't use any leftover variable from a previous run. And if the last run didn't terminate and you try to rerun it, spyder will ask you if you want to kill the last one before running it again. | 1 | 2 | 0 | I have installed WinPython with Python 3.3.3 and the Spyder IDE.
I have a problem with running files twice. The first time I run a file (using F5), there is no problem. The second time, Python or Spyder gets stuck. I can only stop it using Ctrl+C.
Each time I want to run my file, I have to kill the current process in Spyder (using the exclamation mark in the orange triangle in the lower right corner), and afterwards restart the session using the green triangle, a button that appears next to the 'kill' button after clicking that 'kill' button.
Has anyone had the same problem, and how to solve this? | Cannot run Python script twice without restarting Spyder | 1.2 | 0 | 0 | 2,374
23,816,672 | 2014-05-22T20:42:00.000 | 0 | 0 | 1 | 0 | python,python-idle,canopy | 23,818,121 | 1 | true | 0 | 0 | Sorry for the trouble. Canopy's IPython is IPython's QtConsole; the same behavior appears in a plain (non-Canopy) QtConsole. We'll see if we can find a fix.
Meanwhile, a workaround is to run plain (non-QtConsole) IPython: from Canopy's Tools menu, open a Canopy Command Prompt, then type ipython --pylab qt (or just ipython if you don't want pylab behavior), and run your script from there. | 1 | 1 | 0 | I have just started using Enthought Canopy (v1.4.9 64 bit for Windows) instead of IDLE. I am a complete beginner and teaching myself python from various online courses.
When I run scripts in IDLE the output scrolls to the bottom of the IDLE screen, so if I am asking for raw_input multiple times the user can see what input is being asked for each time and just enter it without having to manually scroll down to the bottom of the output before entering their input. However, in Canopy the output does not scroll all the way to the bottom of the 'Python' window.
Is there any command I can put in a script to tell it to automatically scroll to the bottom?
I've tried to search for how to do this online but could only find tutorials on setting up scroll bars. | Enthought Canopy (python) does not scroll to bottom of output automatically | 1.2 | 0 | 0 | 377 |
23,817,491 | 2014-05-22T21:38:00.000 | 16 | 0 | 1 | 0 | python,regex | 23,817,519 | 1 | true | 0 | 0 | Yes, like that: [^\W_]
Where \W is the opposite of \w | 1 | 11 | 0 | I've been trying to match with regex all alphanumeric characters, except for the underscore.
I'm currently using r"^[a-zA-Z0-9]*", but I wondered if it was possible to use \w and exclude _.
Thanks! | How to match all alphanumeric except underscore on Python | 1.2 | 0 | 0 | 4,838 |
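A quick demonstration of the accepted [^\W_] trick — it behaves like \w with the underscore carved out:

```python
import re

print(re.findall(r'[^\W_]+', 'foo_bar42 baz-qux'))
# ['foo', 'bar42', 'baz', 'qux'] -- underscores and hyphens both split matches
```

Unlike the asker's r"^[a-zA-Z0-9]*", this also matches non-ASCII letters and digits, since it inherits \w's (Unicode-aware) definition.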
23,819,504 | 2014-05-23T01:04:00.000 | 1 | 0 | 0 | 0 | python,c,opengl,interop,python-cffi | 23,861,390 | 1 | false | 0 | 1 | It would help to know what the turnaround time for the simulation runs is and how fast you want to display and update graphs. More or less realtime, tens of milliseconds for each? Seconds? Minutes?
If you want to draw graphs, I'd recommend Matplotlib rather than OpenGL. Even hacking the Matplotlib code yourself to make it do exactly what you want will probably still be easier than doing stuff in OpenGL. And Matplotlib also has "XKCD" style graphs :-)
PyOpenGL works fine with wxPython. Most of the grunt work in modern 3D is done by the GPU so it probably won't be worth doing 3D graphics in C rather than Python if you decide to go that route.
Hope this helps. | 1 | 1 | 1 | Current conditions:
C code being rewritten to do almost the same type of simulation every time (learning behavior in mice)
Matlab code being written for every simulation to plot results (2D, potentially 3D graphs)
Here are my goals:
Design GUI (wxPython) that allows me to build a dynamic simulator
GUI also displays results of simulation via OpenGL (or perhaps Matplotlib)
Use a C wrapper (CFFI) to run the simulation and send the results (averages) to OpenGL or Matplotlib
Question:
In order to have this software run as efficiently as possible, it makes sense to me that CFFI should be used to run the simulation...what I'm not sure about is if it would be better to have that FFI instance (or a separate one?) use an OpenGL C binding to do all the graphics stuff and pass the resulting graphs up to the Python layer to display in the GUI, or have CFFI send the averages of the simulations (the data that gets plotted) to variables in the Python level and use PyOpenGL or Matplotlib to plot graphs. | Interoperability advice - Python, C, Matplotlib/OpenGL run-time efficency | 0.197375 | 0 | 0 | 461 |
23,823,519 | 2014-05-23T07:35:00.000 | 2 | 0 | 1 | 0 | python,setuptools,distribute | 24,061,938 | 1 | true | 0 | 0 | The situation is legitimately confusing as there are too many installers available for Python and the landscape has changed recently.
Distribute was a fork of setuptools which itself is an extension to distutils. They merged back with setuptools in 2013. Your book is most likely out of date. The documentation of setuptools and distribute has been a confusing mess since it assumes you already have intimate knowledge of distutils. Distutils2 was an abandoned effort to get a more capable distutils into the Py3.3 standard lib.
Since distutils still lacks key features like generating executable wrapper scripts you would be best off working with a recent version of setuptools. Read through the distutils documentation first as setuptools is a superset of its functionality.
You can't depend on your users having setuptools installed so it is helpful to include the ez_setup.py bootstrapping script with your code. This will let your setup.py install setuptools if needed. | 1 | 0 | 0 | I am following a Python book which says to install Distribute. However, I am confused whether I should install Distribute or Setuptools, as the two have now merged. Is there still a difference between them? Since I have installed pip, and that automatically installs setuptools, I also want to know how I can check whether Distribute or Setuptools is installed. | What should I install Distribute or Setuptools | 1.2 | 0 | 0 | 55
23,831,048 | 2014-05-23T13:50:00.000 | 1 | 0 | 1 | 0 | jquery,python,ajax | 23,831,288 | 1 | false | 1 | 0 | You can use "Network" panel of chrome's devTool to figure out what is the path that the ajax requests to.
Then use a Python script to fetch the content from that path. | 1 | 0 | 0 | I want to parse a website which contains a list of people and their information. The problem is that the website uses ajax to load more and more information as I scroll down the page.
I need information of ALL the people.
urllib.open(..).read() does not take care of the scroll down. Can you please suggest me a way to parse all the data. | python program to read a ajax website | 0.197375 | 0 | 1 | 62 |
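To make the answer concrete: suppose the Network panel shows the page pulling people in pages from an endpoint like /api/people?offset=0&limit=50 — the URL and parameter names below are made up for illustration. The script then walks that endpoint instead of scrolling:

```python
from urllib.parse import urlencode  # Python 2: from urllib import urlencode

BASE = 'http://example.com/api/people'  # hypothetical endpoint found in the Network panel

def page_urls(total, page_size=50):
    return [BASE + '?' + urlencode([('offset', off), ('limit', page_size)])
            for off in range(0, total, page_size)]

for url in page_urls(150):
    print(url)  # fetch each one (urllib/requests) and parse the JSON or HTML
```

Whatever parameter scheme the real site uses (offset, page number, continuation token), the same pattern applies: reproduce the XHR requests the browser makes, one per "scroll".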
23,831,422 | 2014-05-23T14:08:00.000 | 1 | 0 | 0 | 0 | python,binary,endianness | 23,831,750 | 3 | false | 0 | 0 | Note: I assume Python 3.
Endianness is not a concern when writing ASCII or byte strings. The order of the bytes is already set by the order in which those bytes occur in the ASCII/byte string. Endianness is a property of encodings that maps some value (e.g. a 16 bit integer or a Unicode code point) to several bytes. By the time you have a byte string, the endianness has already been decided and applied (by the source of the byte string).
If you were to write unicode strings to a file not opened with b mode, the question depends on how those strings are encoded (they are necessarily encoded, because the file system only accepts bytes). The encoding in turn depends on the file, and possibly on the locale or environment variables (e.g. for the default sys.stdout). When this causes problems, the problems extend beyond just endianness. However, your file is binary, so you can't write unicode directly anyway, you have to explicitly encode and decode. Do this with any fixed encoding and there won't be endianness issues, as an encoding's endianness is fixed and part of the definition of the encoding. | 1 | 4 | 0 | When using file.write() with the 'wb' flag, does Python use big or little endian, or the sys.byteorder value? How can I be sure that the endianness is not random? I am asking because I am mixing ASCII and binary data in the same file; for the binary data I use struct.pack() and force it to little endian, but I am not sure what happens to the ASCII data!
Edit 1: since the downvote, I'll explain my question in more detail!
I am writing a file with ASCII and binary data on an x86 PC. The file will be sent over the network to another computer which is not x86 but a PowerPC, which is big-endian. How can I be sure that the data will be the same when parsed on the PowerPC?
Edit 2: still using Python 2.7 | What endianness does Python use to write into files? | 0.066568 | 0 | 0 | 8,145 |
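To make the answer's distinction concrete (Python 3 syntax): struct applies a byte order only where you ask for one, while ASCII/byte strings are written verbatim on every platform:

```python
import struct

little = struct.pack('<I', 0x01020304)  # forced little-endian
big    = struct.pack('>I', 0x01020304)  # forced big-endian
print(little == b'\x04\x03\x02\x01', big == b'\x01\x02\x03\x04')  # True True

record = b'HEADER' + little  # the ASCII part has no byte order to decide;
print(record)                # only the packed integer needed the '<' choice
```

So the asker's struct.pack('<...') calls fully pin down the binary fields, and the ASCII portions will read back identically on the big-endian PowerPC.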
23,833,693 | 2014-05-23T15:57:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine,python-2.7,cron | 23,835,748 | 1 | true | 1 | 0 | As far as I know, that isn't possible.
The cron.yaml file is only meant for defining the jobs, not for code.
I'd recommend putting your logic inside of the job that you're calling, as you mentioned.
Hope this helps. | 1 | 0 | 0 | Is it possible to have conditions (if ... else ...) in GAE cron.yaml?
For ex., to have something like
if app_identity.get_application_id() == 'my-appid' then run the job.
Understand, that probably the same result I can have by implementing it in the job handler. Just interesting if it could be done within cron.yaml. | Does cron.yaml support conditions? | 1.2 | 0 | 0 | 134 |
23,837,487 | 2014-05-23T19:58:00.000 | 1 | 0 | 1 | 1 | python,continuous-integration,teamcity,virtualenv | 23,838,949 | 3 | true | 0 | 0 | ok, solution seem like that
implement a custom bash script, e.g. tests.sh
create a build step which executes this file, e.g. bash tests.sh | 1 | 1 | 0 | I want to use teamcity as CI for my python project. My project use virtualenv to store project related dependencies. So I create venv folder under project root and put env. related stuff there.
But when I try to create a build step with source venv/bin/activate as a custom script, it fails with source: not found. If I instead create this step as a command line, with an executable file, putting source as the file and venv/bin/activate as the parameter, then it fails with Cannot run process source venv/bin/activate : file not found
How to solve this? | runnig python tests via teamcity: Error: Source not found | 1.2 | 0 | 0 | 1,992 |
23,837,487 | 2014-05-23T19:58:00.000 | 0 | 0 | 1 | 1 | python,continuous-integration,teamcity,virtualenv | 62,431,055 | 3 | false | 0 | 0 | Actually I solved it by adding #!/bin/sh at the beginning. :)
Thank you for your answers as well. | 2 | 1 | 0 | I want to use teamcity as CI for my python project. My project use virtualenv to store project related dependencies. So I create venv folder under project root and put env. related stuff there.
But when I try to create a build step with source venv/bin/activate as a custom script, it fails with source: not found. If I instead create this step as a command line, with an executable file, putting source as the file and venv/bin/activate as the parameter, then it fails with Cannot run process source venv/bin/activate : file not found
How to solve this? | runnig python tests via teamcity: Error: Source not found | 0 | 0 | 0 | 1,992 |
23,838,453 | 2014-05-23T21:17:00.000 | 3 | 0 | 0 | 0 | python,numpy,machine-learning,scipy,scikit-learn | 23,841,290 | 1 | false | 0 | 0 | No, the ordering of the patterns in the training set do not matter. While the ordering of samples can affect stochastic gradient descent learning algorithms (like for example the one for the NN) they are in most cases coded in a way that ensures internal randomness. SVM on the other hand is globally convergant and it will result in the exact same solution regardless of the ordering. | 1 | 0 | 1 | I'm doing some classification with Python and scikit-learn. I have a question which doesn't seem to be covered in the documentation: if I'm doing, for example, classification with SVM, does the order of the input examples matter? If I have binary labels, will the results be less accurate if I put all the examples with label 0 next to each other and all the examples with label 1 next to each to other, or would it be better to mix them up? What about the other algorithms scikit provides? | Example order in machine learning algorithms (Scikit Learn) | 0.53705 | 0 | 0 | 138 |
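The answer's point can be illustrated without any SVM machinery: a batch learner whose objective is a sum over samples (closed-form least squares here, as a stand-in for the convex SVM objective) sees only those sums, so shuffling the training set cannot change the solution:

```python
import random

def fit_line(points):
    """Closed-form least squares for y = a*x + b; uses only sums over samples."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / float(n * sxx - sx * sx)
    b = (sy - a * sx) / float(n)
    return a, b

pts = [(x, 2 * x + 1) for x in range(10)]
shuffled = list(pts)
random.shuffle(shuffled)
print(fit_line(pts) == fit_line(shuffled))  # True: sample order is irrelevant
```

The sums are identical under any permutation, so the fitted coefficients are bit-for-bit equal — the same invariance the answer claims for the SVM's global optimum.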
23,838,512 | 2014-05-23T21:23:00.000 | 0 | 0 | 1 | 0 | python,boolean | 23,838,545 | 1 | false | 0 | 0 | x is an empty array. If you convert an empty array to a boolean, you get False; but that doesn't mean that x is the same thing as False. | 1 | 0 | 0 | Hey all so I was reading through the documentation for Python and it states that any empty string, list, dictionaries, I think they are called objects I think?(I don't really understand the concept of objects :\ if someone would be able to explain it to me I would be happy). Anyways my question is x = []; bool(x) # False so therefore: shouldn't bool(x==False) be true? But it returns false and that's the part that I am confused about. | Python: Empty list boolean comparison | 0 | 0 | 0 | 365 |
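The distinction the answer draws, in three lines:

```python
x = []
print(bool(x))     # False: an empty list is "falsy" when converted to bool
print(x == False)  # False: == compares values, and [] does not equal False
print(not x)       # True: truthiness, not equality, is the way to test emptiness
```

So bool(x) being False does not make x == False true — conversion to bool and equality comparison are different operations.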
23,845,235 | 2014-05-24T12:49:00.000 | 0 | 0 | 0 | 0 | python,numpy,scipy | 23,876,265 | 2 | false | 0 | 0 | The returned value of scipy.optimize.minimize is of type Result:
Result contains, among other things, the inputs (x) which minimize f. | 1 | 2 | 1 | I need, for a simulation, to find the argument (parameters) that maximizes a multivariable function with constraints.
I've seen that scipy.optimize.minimize gives the minimum of a given function (and, via the negated function, the maximum), and that I can use constraints and bounds. But, reading the docs, I've found that it returns the minimum value but not the parameter that minimizes it (am I right?)
scipy.optimize.fmin does give the parameter that minimizes the function, but it doesn't accept bounds or constraints.
Looking in numpy, there is a function called argmin, but it takes a vector as argument and returns the "parameter" (index) that minimizes it.
Is there such a function that, like minimize, accepts constraints and, like fmin, returns the parameter that minimizes the function?
Thanks in advance. | Constrained optimization in SciPy | 0 | 0 | 0 | 4,417 |
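For the record, minimize does return the minimizing parameters — they are the .x attribute of the result object (the type the answer calls Result; in current SciPy it is named OptimizeResult). A bounded sketch:

```python
from scipy.optimize import minimize

# minimize (x - 2)^2 subject to 0 <= x <= 1: the constrained optimum is x = 1
res = minimize(lambda v: (v[0] - 2.0) ** 2, x0=[0.5], bounds=[(0.0, 1.0)])

print(res.x)    # the argument that minimizes f (here, close to [1.0])
print(res.fun)  # the minimum value itself
```

The same res.x is available when you pass constraints= instead of (or in addition to) bounds=, so one function covers both of the asker's needs.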
23,845,961 | 2014-05-24T14:10:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 47,682,445 | 5 | false | 0 | 0 | You can run one python file in the run window, and debug another python file but without any breakpoints. This allows you to see the console outputs of two concurrently running python scripts in Pycharm. Not ideal, but it's the best work-around I've found to this annoying problem. | 1 | 7 | 0 | I'm running Python, zmq code for a server and a client. I would like to see the runs(console messages) in a side-by-side mode (split mode) for better analysis of the interaction between the two.
It has the "split" mode between "Run" and "Terminal" and others, but I could not find a split mode within the Runs category.
Were you able to see multiple runs in a side-by-side mode?
Is there any plugin or other way make it work? | View runs in split mode in PyCharm | 0.039979 | 0 | 0 | 8,110 |
23,846,012 | 2014-05-24T14:14:00.000 | 0 | 1 | 0 | 0 | java,php,python,automation,sap | 24,307,979 | 4 | false | 1 | 0 | You can implement Scheduled Jobs using JAVA if i am understanding you correctly. | 3 | 1 | 0 | I am given a task to automate few boring tasks that people in office do everyday using SAP Logon 640.
There are like 30-40 transactions that are required to be automated.
I searched a lot on SAP Automation and found SAP GUI Scripting but failed to find any starting point for python, php or java.
How should I start to automate SAP transactions using Python, PHP or Java? I am not even sure what I need from my IT department to get started. | How to Automate repeated tasks in SAP Logon | 0 | 0 | 0 | 4,691
23,846,012 | 2014-05-24T14:14:00.000 | 0 | 1 | 0 | 0 | java,php,python,automation,sap | 42,849,870 | 4 | false | 0 | 0 | SapGui has a built-in record and playback tool which gives you out-of-the-box vbs files that you can use for automation; if the values do not change, then you can use the same scripts every time.
You can find it in the main menu of the sap gui window customise local layout(Alt+F12)->Script Recording and playback. | 3 | 1 | 0 | I am given a task to automate few boring tasks that people in office do everyday using SAP Logon 640.
There are like 30-40 transactions that are required to be automated.
I searched a lot on SAP Automation and found SAP GUI Scripting but failed to find any starting point for python, php or java.
How should I start to automate SAP transactions using Python, PHP or Java? I am not even sure what I need from my IT department to get started. | How to Automate repeated tasks in SAP Logon | 0 | 0 | 0 | 4,691
23,846,012 | 2014-05-24T14:14:00.000 | 1 | 1 | 0 | 0 | java,php,python,automation,sap | 23,878,952 | 4 | false | 1 | 0 | We use either VBScript or C# to automate tasks. Using VBSCript is the easiest. Have the SAP GUI record a task then it will produce a vbscript that can serve as a starting point for your coding. When you have this vbscript file then you can translate it into other languages. | 3 | 1 | 0 | I am given a task to automate few boring tasks that people in office do everyday using SAP Logon 640.
There are like 30-40 transactions that are required to be automated.
I searched a lot on SAP Automation and found SAP GUI Scripting but failed to find any starting point for python, php or java.
How should I start to automate SAP transactions using Python, PHP or Java? I am not even sure what I need from my IT department to get started. | How to Automate repeated tasks in SAP Logon | 0.049958 | 0 | 0 | 4,691
23,846,617 | 2014-05-24T15:16:00.000 | 3 | 0 | 0 | 0 | python,mysql,python-2.7 | 23,846,676 | 4 | false | 0 | 0 | Run a simple algorithm against the primary key. For instance, if you have an integer for user id, separate by even and odd numbers.
Use a mod function if you need more than 2 groups. | 2 | 2 | 0 | I need to develop an A/B testing method for my users. Basically I need to split my users into a number of groups - for example 40% and 60%.
I have around 1,000,00 users and I need to know what would be my best approach. Random numbers are not an option because the users will get different results each time. My second option is to alter my database so each user will have a predefined number (randomly generated). The negative side is that if I get 50 for example, I will always have that number unless I create a new user. I don't mind but I'm not sure that altering the database is a good idea for that purpose.
Are there any other solutions so I can avoid that? | Algorithm for A/B testing | 0.148885 | 1 | 0 | 1,690 |
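One common refinement of the "mod the primary key" idea above: hash the id first, so the split doesn't correlate with signup order (as even/odd would), and the assignment stays deterministic without storing anything in the database:

```python
import hashlib

def ab_group(user_id, pct_a=40):
    """Deterministic 40/60 split: the same user always lands in the same group."""
    bucket = int(hashlib.md5(str(user_id).encode('utf-8')).hexdigest(), 16) % 100
    return 'A' if bucket < pct_a else 'B'

print(ab_group(12345), ab_group(12345))  # identical on every call
```

Because the group is recomputed from the id, repeat visits get the same variant, and changing pct_a later re-splits the population without any schema change.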
23,846,617 | 2014-05-24T15:16:00.000 | 0 | 0 | 0 | 0 | python,mysql,python-2.7 | 23,846,772 | 4 | false | 0 | 0 | I would add an auxiliary table with just userId and A/B. You do not change existent table and it is easy to change the percentage per class if you ever need to. It is very little invasive. | 2 | 2 | 0 | I need to develop an A/B testing method for my users. Basically I need to split my users into a number of groups - for example 40% and 60%.
I have around 1,000,00 users and I need to know what would be my best approach. Random numbers are not an option because the users will get different results each time. My second option is to alter my database so each user will have a predefined number (randomly generated). The negative side is that if I get 50 for example, I will always have that number unless I create a new user. I don't mind but I'm not sure that altering the database is a good idea for that purpose.
Are there any other solutions so I can avoid that? | Algorithm for A/B testing | 0 | 1 | 0 | 1,690 |
23,846,899 | 2014-05-24T15:45:00.000 | 3 | 0 | 1 | 0 | python,windows-7,installation,lighttable | 29,813,451 | 1 | false | 0 | 1 | It's pretty easy to do.
Unzip to wherever you like.
Move the LightTable directory (from inside LightTableWin) to your Program Files (x86) directory.
2.1 If you are using Windows Explorer, you'll need to start Windows Explorer as an administrator (found by right-clicking the program icon)
Open your Program Files (x86)\LightTable directory, right click and drag the LightTable.exe file into the same directory and select Create Shortcut
Left click your shortcut and drag it to your Start menu. When it asks if you want to pin it to your Start menu, select yes.
Click on Start, and then click on the Light Table shortcut.
Use Light Table | 1 | 2 | 0 | I am trying to download/install Light Table. I want it to show up in the start menu.
When downloading light table, it shows up as a Zip folder in the TEMP file. I've extracted the files and am unable to get it to show up in the start menu.
Normally the programs I download have an installer that does this automatically. Light Table doesn't seem to have this.
I'm sure I can use it from the TEMP folder, but would really like it in the start menu, program files folder or C drive.
I've only done basic use of PCs (gaming, web browsing, MS Office). | Install Light Table editor on Windows 7 | 0.53705 | 0 | 0 | 2,589 |
23,849,163 | 2014-05-24T20:07:00.000 | 5 | 1 | 0 | 0 | python,unit-testing,flask,integration-testing | 23,849,290 | 1 | true | 1 | 0 | Most of this is personal opinion and will vary from developer to developer.
There are a ton of python libraries for unit testing - that's a decision best left to you as the developer of the project to find one that fits best with your tool set / build process.
This isn't exactly 'unit testing' per se, I'd consider it more like integration testing. That's not to say this isn't valuable, it's just a different task and will often use different tools. For something like this, testing will pay off in the long run because you'll have peace of mind that your bug fixes and feature additions aren't impacting your end-to-end code. If you're already doing it, I would continue. These sorts of tests are highly valuable when refactoring down the road to ensure consistent functionality.
I would not waste time testing 3rd party APIs. It's their job to make sure their product behaves reliably. You'll be there all day if you start testing 3rd party features. A big reason to use 3rd party APIs is so you don't have to test them. If you ever discover that your app is breaking because of a 3rd party API it's probably time to pick a different API. If your project scales to a size where you're losing thousands of dollars every time that API fails you have a whole new ball of issues to deal with (and hopefully the resources to address them) at that time.
In general, I don't test static content or html. There are tools out there (web scraping tools) that will let you troll your own website for consistent functionality. I would personally leave this as a last priority for the final stages of refinement if you have time. The look and feel of most websites change so often that writing tests isn't worth it. Look and feel is also really easy to test manually because it's so visual. | 1 | 3 | 0 | I'm teaching myself backend and frontend web development (I'm using Flask if it matters) and I need a few pointers when it comes to unit testing my app.
I am mostly concerned with these different cases:
The internal consistency of the data: that's the easy one - I'm aiming for 100% coverage when it comes to issues like the login procedure and, most generally, checking that everything that happens between the python code and the database after every request remain consistent.
The JSON responses: What I'm doing atm is performing a test-request for every get/post call on my app and then asserting that the json response must be this-and-that, but honestly I don't quite appreciate the value in doing this - maybe because my app is still at an early stage?
Should I keep testing every json response for every request?
If yes, what are the long-term benefits?
External APIs: I read conflicting opinions here. Say I'm using an external API to translate some text:
Should I test only the very high level API, i.e. see if I get the access token and that's it?
Should I test that the returned json is what I expect?
Should I test nothing to speed up my test suite and don't make it dependent from a third-party API?
The outputted HTML: I'm lost on this one as well. Say I'm testing the function add_post():
Should I test that on the page that follows the request the desired post is actually there?
I started checking for the presence of strings/html tags in the raw response.data, but then I kind of gave up because 1) it takes a lot of time and 2) I would have to constantly rewrite the tests since I'm changing the app so often.
What is the recommended approach in this case?
Thank you and sorry for the verbosity. I hope I made myself clear! | How to properly unit test a web app? | 1.2 | 0 | 0 | 1,256 |
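On point 3 ("don't test third-party APIs"), a common compromise is to test your own wrapper while replacing the third-party client with a stdlib mock, so the suite stays fast and offline. The translate interface below is hypothetical — only the pattern matters:

```python
from unittest import mock

def translate(text, client):
    """App-side wrapper around some third-party translation client (made-up API)."""
    resp = client.translate(text, target='en')
    return resp['translatedText']

# In tests, swap the real client for a Mock: no network, no flaky third party.
fake = mock.Mock()
fake.translate.return_value = {'translatedText': 'hello'}
assert translate('hola', fake) == 'hello'
fake.translate.assert_called_once_with('hola', target='en')
print('wrapper verified without touching the real API')
```

This tests everything on your side of the boundary (argument passing, response handling) while leaving the third party's correctness to the third party, as the answer advises.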
23,850,291 | 2014-05-24T22:34:00.000 | 2 | 0 | 1 | 0 | python,macos,python-2.7,pip,macports | 23,850,568 | 1 | false | 0 | 0 | The solution is: don't use MacPorts for installing Python's packages.
Macports is a general package manager and it registers installed packages in its database.
Pip is a package manager for Python, so if you want to install a Python package, use the appropriate package management tool. Pip doesn't have its own database to keep track of installed stuff - it just checks Python's path to see if the package is there (and that's what you want).
Sooner or later you'll use Virtualenv anyway and you'll need pip to install packages in there too so it's better to use it everywhere. | 1 | 3 | 0 | Until today, I have been using the macports version of python27 and installing python packages through macports. Today, I needed some packages which were not available through macports; I learned about pip and found them there. After installing these packages through pip, however, I realized that neither pip nor macports could see what had been installed by the other. So, for consistency, I decided to uninstall all macports packages, install python27 and py27-pip through macports and then proceed to install all of my python packages through pip.
This worked fine, but since macports does not know about my pip-installed python packages, I ran into trouble when installing something else which depends on python (e.g., inkscape): macports tried to install its own version of, e.g. py27-numpy (already installed by pip) and then failed installation because it "already exists and does not belong to a registered port."
Is there a consistent way to use pip and to get macports to recognize that the python packages it might need for something else are already installed? | Macports does not recognize pip-installed packages | 0.379949 | 0 | 0 | 780 |
23,851,357 | 2014-05-25T02:06:00.000 | 0 | 1 | 1 | 0 | python,twitter | 24,349,793 | 1 | false | 0 | 0 | First, please specify what "Twitter API" you are using with your terminal.
There is the Twitter API provided by Twitter itself, and various wrapper libraries for Python.
For your question:
A) Technically No. But if you can tell me your OS and your terminal
Python version, maybe I can help you more :)
B) Depends on "which Twitter API" question at first. | 1 | 0 | 0 | I am really new to Python and I want to use the Twitter API on PyCharm but it kept on telling me that it isn't recognized.
I ran the Twitter API using just the terminal and it works. But, with the terminal it has limited functionality, hence I want to use the IDE instead.
So;
A) what is the difference between Python on the terminal and the IDE?
B) How would I install and run Twitter API on the IDE? | Using Twitter in Python IDE | 0 | 0 | 0 | 58 |
23,859,042 | 2014-05-25T19:18:00.000 | 1 | 0 | 0 | 0 | python,eclipse,pydev | 24,248,091 | 2 | false | 1 | 0 | In the PyDev editor you can use Ctrl+Shift+F9 to terminate/relaunch by default.
But as you're dealing with flask, you should be able to use it to reload automatically on code-changes without doing anything by setting use_reloader=True.
I.e.: I haven't actually tested, but its documentation says that you can set the reload flag for that run(use_reloader=True). | 1 | 0 | 0 | I've just started using PyDev to work on a Flask app. The thing is, every time I make a change, I have to click on the "stop process" button in the console window, then click "Run" again.
This is necessary, because Flask runs a web server on a specific port, and running more than one instance of the application results in errors connecting to the port.
Is there a way I can automate this process? (configuration, some sort of event handler, or any other way) | How to terminate previous build's process when running the project? | 0.099668 | 0 | 0 | 38
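A minimal sketch of the answer's use_reloader suggestion (assuming Flask is installed; the route and run options below are placeholders). With the reloader on, Werkzeug watches your source files and restarts the dev server on changes, so no manual stop/run cycle is needed:

```python
# Options passed to app.run(); use_reloader=True is the key one here.
RUN_OPTIONS = {"debug": True, "use_reloader": True, "port": 5000}

try:
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello"
except ImportError:  # Flask not installed in this environment
    app = None

# Starting the server is a blocking call, so it is shown commented out:
# app.run(**RUN_OPTIONS)
```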
23,859,110 | 2014-05-25T19:23:00.000 | 1 | 0 | 0 | 0 | python,django,django-cms | 23,859,178 | 1 | true | 1 | 0 | I don't know about Django CMS, but if all you want to do is let them edit a plain text chunk on a web page, plain old Django can do that without breaking a sweat. Django admin can be used to handle editing at the very least, and you just need a model with a TextField to store the text, and a template to render it into an HTML page. You could probably figure out how to do it after working through the Django tutorial. | 1 | 0 | 0 | I am currently working on a website using Django. Here's my issue. I want some pages of my website to be partially editable (just text edition) by registered users. And I want it to be user friendly enough.
I first thought of using regular html forms to make the content editable. And then I discovered Django CMS. As far as I understand I can pretty much do what I want with Django CMS. But I am wondering if it's not too heavy in this situation, and I want to have a lot of control on what I make editable or not by the users.
Therefore my questions are :
Should I use Django CMS or not ?
If yes, would it be possible to restrict the standard usage of Django CMS depending on the logged user ? (For example, I mean by that, just allowing the user to edit a paragraph, and not to modifying the whole layout of the page)
Thanks ! | Using Django cms for editable webpage? | 1.2 | 0 | 0 | 206 |
23,859,613 | 2014-05-25T20:22:00.000 | 4 | 0 | 0 | 0 | python,pyqt,pyqt4 | 41,981,238 | 6 | false | 0 | 1 | You may simply try this:
os.startfile(whatever_valid_filename)
This starts the default OS application for whatever_valid_filename, meaning Explorer for a folder name, default notepad for a .txt file, etc. | 1 | 4 | 0 | I have searched a lot and I know how to open a directory dialog window.
But what I am looking for is the method to open a directory folder under Windows OS, just like when you right-click one of your local folders and select open.
Any suggestions? | PyQt - How to open a directory folder? | 0.132549 | 0 | 0 | 21,145 |
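The answer's os.startfile only exists on Windows. A cross-platform sketch (the helper name is made up, and the explorer/open/xdg-open commands are the usual assumptions per platform) that picks the right opener without executing it:

```python
import subprocess
import sys

def opener_command(path, platform=sys.platform):
    """Return the command that would open *path* in the OS file browser."""
    if platform.startswith("win"):
        # On Windows you could equally call os.startfile(path) directly.
        return ["explorer", path]
    if platform == "darwin":
        return ["open", path]
    return ["xdg-open", path]

print(opener_command("/tmp", platform="linux"))  # -> ['xdg-open', '/tmp']

# To actually open the folder (assuming it exists):
# subprocess.Popen(opener_command("/path/to/folder"))
```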
23,866,874 | 2014-05-26T09:24:00.000 | 0 | 0 | 0 | 0 | mysql,django,python-2.7 | 23,866,951 | 1 | false | 0 | 0 | Check if there is any install path dependencies on your installer to figure out if your python is in the right place.
But I recommend that you install the connector you want manually.
P.S. Are you sure you need the connector? Or you just saw the error and assumed you need it? | 1 | 0 | 0 | I'm installing MySQL 5.6 Community Edition using MySQL installer and everything was installed properly except for "Connector/Python 2.7 1.1.6".
Upon mousing over, I get the error message "The product requires Python 2.7 but it was not detected on this machine. Python 2.7 requires manual installation and must be installed prior to installing this product"
The problem is, I have Python 2.7 installed in C: already and I can't seem to direct this detection towards where I have Python 2.7.
(I am using Windows 8) | Connector installation error (MySQL installer) | 0 | 1 | 0 | 50 |
23,869,132 | 2014-05-26T11:31:00.000 | 1 | 0 | 1 | 0 | python,excel,vba | 23,869,285 | 1 | false | 0 | 0 | It is xlCalculationAutomatic or you could use the number -4105 in Python. | 1 | 1 | 0 | how to make excel workbook calculation to automatic in both vba and python script?
I tried this Application.Calculation = xlCalculateAutomatic but it is not working.
It throws me below error.
global name 'xlCalculateAutomatic' is not defined. | workbook calculation to automatic in both vba and python | 0.197375 | 0 | 0 | 93 |
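A sketch of the answer applied from Python via pywin32 (Windows-only). The value -4105 is Excel's xlCalculationAutomatic constant; the COM calls are shown commented because they need Excel installed:

```python
XL_CALCULATION_AUTOMATIC = -4105  # Excel's xlCalculationAutomatic

def set_automatic_calculation(excel_app):
    """Switch an Excel Application COM object to automatic calculation."""
    excel_app.Calculation = XL_CALCULATION_AUTOMATIC

# Typical usage (requires pywin32 and Excel, Windows only):
# import win32com.client
# excel = win32com.client.Dispatch("Excel.Application")
# set_automatic_calculation(excel)
```

In VBA the equivalent line is `Application.Calculation = xlCalculationAutomatic` (note the spelling: xlCalculation, not xlCalculate).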
23,870,365 | 2014-05-26T12:38:00.000 | 7 | 0 | 1 | 0 | python,django,pycharm | 32,492,931 | 6 | false | 1 | 0 | I have met the problems today. At last, I finished it by:
Create project in command line
Create app in command line
Just open the existing files and code in pycharm
This approach has the benefits that:
you don't need the professional edition of PyCharm
you can still code the Django project in PyCharm | 1 | 51 | 0 | I'm new in this area so I have a question. Recently, I started working with Python and Django. I installed PyCharm Community edition as my IDE, but I'm unable to create a Django project.
I looked for some tutorials, and there is an option to select "project type", but in the latest version this option is missing. Can someone tell me how to do this? | How to set up a Django project in PyCharm | 1 | 0 | 0 | 89,573 |
23,873,744 | 2014-05-26T15:49:00.000 | 0 | 0 | 0 | 1 | python,winapi,impersonation,python-2.6 | 23,874,294 | 1 | false | 0 | 0 | Your script must be running as a service. I believe since Windows Vista you must have a separate app in the user session for the GUI. | 1 | 0 | 0 | I have to write a Python script on Windows (using win32 api), executed with system privileges. I need to impersonate the currently logged user to show him a popup (because system user can't).
So, I'm searching a way to do this operation. I find this method:
win32security.ImpersonateLoggedOnUser
but requires a handle that can be obtained with this method:
win32security.LogonUser
The last method, however, requires the user's password, which I don't have. Is there a way to get this handle (or another way to impersonate the currently logged-in user, or another way to show a popup from the system user) without the user's password? I'm the system user, so I have full privileges on the machine...
Thanks a lot and sorry for my bad english!
Federico | Python - Impersonate currently logged user (from system user) | 0 | 0 | 0 | 881 |
23,873,888 | 2014-05-26T15:57:00.000 | 0 | 0 | 0 | 0 | python,apache,ubuntu,request,wsgi | 23,883,386 | 3 | false | 1 | 0 | I access this variable with web.ctx.env
web.ctx.env.get('HTTP_X_SOURCE')
This code works well on another server with apache 2 and wsgi.
On my new server (ubuntu 13)
test with pure web.py (no apache no wsgi), the variable passes
test with apache2-wsgi+web.py, the variable doesn't pass
On my old server (ubuntu 12)
test with pure web.py (no apache no wsgi), the variable passes
test with apache2-wsgi+web.py, the variable passes too | 1 | 0 | 0 | I have a problem with apache2 and wsgi
I send my server a request with a custom field in the headers (HTTP_X_SOURCE) and apache2 (or wsgi) blocks this field.
request => apache2 => web.py
Does anyone know why apache2 or wsgi blocks this field? | Wsgi custom field in request header | 0 | 0 | 0 | 1,218
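For context on the naming above: WSGI/CGI turns a request header into an environ key by upper-casing it, replacing hyphens with underscores, and prefixing HTTP_, which is why the header X-Source is read as HTTP_X_SOURCE. A small sketch of that mapping (and, as a hedged hint, Apache is known to drop header names that themselves contain underscores, so the client should send X-Source, not X_SOURCE):

```python
def header_to_environ_key(header_name):
    """Map an HTTP request header name to its WSGI/CGI environ key."""
    return "HTTP_" + header_name.upper().replace("-", "_")

print(header_to_environ_key("X-Source"))  # -> HTTP_X_SOURCE

# In web.py you would then read it with:
#   web.ctx.env.get(header_to_environ_key("X-Source"))
```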
23,878,794 | 2014-05-26T23:06:00.000 | 0 | 0 | 1 | 1 | python | 23,879,072 | 1 | false | 0 | 0 | Portable Python and Python (x,y) both work on RT | 1 | 0 | 0 | I would like to install and run python on windows RT. Is it possible?
I have tried with python.org but it doesn't seem to have a specific version for it. I wonder whether there is anything I could use instead? | is it possible to install python on windows RT? | 0 | 0 | 0 | 896 |
23,878,984 | 2014-05-26T23:32:00.000 | 0 | 0 | 0 | 0 | python,xml,pybrain | 23,879,122 | 1 | false | 0 | 0 | PyBrain declares in its setup.py only one required package: scipy.
Scipy is a fairly complex package and it is best installed from binaries (at least on Windows). So if you manage to install scipy, you should get PyBrain running. | 1 | 0 | 0 | I'm using PyBrain in a project over Windows 7 and I've had no problem with this library until I had to write the trained network to an XML file.
I tried this "from pybrain.tools.xml.networkwriter import NetworkWriter" but I got an importation error.
Can anyone tell me if there's a requirement to get this job done?
I tried installing the library called "lxml", because I have it installed on my linux pc, but it doesn't seem to work along side with pybrain. | Which XML library is used for PyBrain? | 0 | 0 | 1 | 120 |
23,883,154 | 2014-05-27T07:10:00.000 | 0 | 0 | 0 | 0 | datasift-python | 27,569,384 | 2 | false | 0 | 0 | If you are looking to uncover the identity of Facebook users from DataSift's Facebook Public Source, this is not possible; this data source has been anonymised for privacy reasons. | 1 | 0 | 0 | DataSift makes users anonymous by creating a hash out of their user id. This makes it imposible to retrieve the user and be ale to target him.
If for example you are running a query meant at discovering negative tweets about your brand, is there any way you can target those users with specific advertising? Or is there any way you can connect to the authors of those posts? | DataSift anonymity - How can I target the authors of specific posts | 0 | 0 | 0 | 62 |
23,884,156 | 2014-05-27T08:08:00.000 | 0 | 0 | 0 | 0 | python,gstreamer | 23,921,536 | 1 | false | 0 | 1 | I'd suggest filing a bug and ideally making your test files available.
If you want to track this down yourself take a look at the GST_DEBUG="*:3" ./your-app output to see which element is emitting the warning. | 1 | 0 | 0 | I'm writing a mediaplayer-gui fitting some needs of a medialibrary containing classical music only.
Language is python3/tkinter.
One backend is gstreamer1.0, playbin (seems to be the only one, playing gapless).
When playbin gets the uri of a file with 5.0 channels
(FRONT_LEFT,FRONT_RIGHT,FRONT_CENTER,REAR_LEFT,REAR_RIGHT)
it gives following warning:
** (python3:13745): WARNING **: Unpositioned audio channel position flag set but channel positions present
and plays the file downmixed to stereo.
5.0 is most common in classical-music media (LFE is mostly unwanted).
Which gstreamer object is the one I can tell about the channel layout, and what signal do I have to connect to, to get that object?
Additional info:
5.1 gives the same warning, but plays without downmixing;
5.0 using gstplay-1.0 from commandline gives warning & downmixing;
using gst123 based on gstreamer0.1 plays everything right | how to make playbin of gstreamer1.0 playing multichannel-audio 5.0 playing without downmixing to stereo | 0.197375 | 0 | 0 | 193 |
23,891,195 | 2014-05-27T13:46:00.000 | -1 | 0 | 1 | 0 | python | 23,891,448 | 3 | true | 0 | 0 | I'll answer my own question since I got an idea while writing it, and maybe someone will need this.
I added a link from that folder to my site-packages folder like that:
ln -s /home/me/python/pyutils /path/to/site-packages/pyutils
Then, since the PYTHONPATH contains the /path/to/site-packages folder, and I have a pyutils folder in it, with __init__.py, I can just import like:
from pyutils import mymodule
And the rest of the /home/me/python is not in the PYTHONPATH | 1 | 1 | 0 | I have a case for needing to add a path to a python package to sys.path (instead of its parent directory), but then refer to the package normally by name.
Maybe that's weird, but let me exemplify what I need and maybe you guys know how to achieve that.
I have all kind of experimental folders, modules, etc inside a path like /home/me/python.
Now I don't want to add that folder to my sys.path (PYTHONPATH) since there are experimental modules which names could clash with something useful.
But, inside /home/me/python I want to have a folder like pyutils. So I want to add /home/me/python/pyutils to PYTHONPATH, but, be able to refer to the package by its name pyutils...like I would have added /home/me/python to the path. | Add path to python package to sys.path | 1.2 | 0 | 0 | 2,392 |
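The same effect as the symlink trick can be demonstrated with a throwaway directory: only the parent of the package goes on sys.path, so siblings of pyutils never become importable. The temp directory and module contents below are illustrative:

```python
# Sketch: make a single package importable without exposing its siblings,
# by putting only the package's parent (link target) directory on sys.path.
import importlib
import os
import sys
import tempfile

parent = tempfile.mkdtemp()            # stands in for the site-packages link target
pkg = os.path.join(parent, "pyutils")  # the one package we want importable
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(pkg, "mymodule.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, parent)             # only 'parent' is searched, not /home/me/python
mymodule = importlib.import_module("pyutils.mymodule")
print(mymodule.VALUE)                  # -> 42
```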
23,894,089 | 2014-05-27T16:06:00.000 | 0 | 0 | 0 | 0 | python,oauth-2.0,dropbox-api | 23,894,424 | 1 | true | 0 | 0 | No, the Dropbox API currently doesn't expose any way to send shared folder invites. But a shared folder API is being worked on. | 1 | 0 | 0 | I am developing an app to retrieve and store files to dropbox using oauth2 protocol. Is there any way to use dropbox core api to send invites to share a folder in Python ? | Is there any way to use dropbox core api to send invites to share a folder in Python? | 1.2 | 0 | 1 | 139 |
23,895,408 | 2014-05-27T17:22:00.000 | 1 | 0 | 0 | 0 | python,pandas | 23,895,466 | 1 | true | 0 | 0 | i would recommend just using pandas.io.sql to download your database data. it returns your data in a DataFrame.
but if, for some reason, you want to access the columns, you already have your answer:
assignment: df['column%d' % count] = data
retrieval: df['column%d' % count] | 1 | 1 | 1 | I am trying to initialize an empty dataframe with 5 column values. Say column1, column2, column3, column4, column5.
Now I want to read data from a database and insert specific column values from the database into this dataframe. Since there are 5 columns it's easier to do it individually. But I have to extend the number of columns of the dataframe to 70. For that I am using a for loop.
To update the column value I was using
dataframe['column "+count+"'] = .... where count is an incremental variable ranging up to 70.
But the above code adds a new column to the dataframe. How can I use the count variable to access these column names? | Change the column name of dataframe at runtime | 1.2 | 0 | 0 | 193 |
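A minimal sketch of the answer's generated-name pattern, assuming pandas is available (the single-row data is just for illustration):

```python
import pandas as pd

df = pd.DataFrame()
for count in range(1, 71):
    # assignment by a generated column name
    df["column%d" % count] = [count]

# retrieval by a generated column name
print(df["column7"].iloc[0])  # -> 7
print(len(df.columns))        # -> 70
```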
23,897,254 | 2014-05-27T19:16:00.000 | 3 | 0 | 1 | 0 | python,multiprocessing,lsf | 23,901,931 | 1 | true | 0 | 0 | One (very simplified) way to think of LSF is as a system that launches a process and lets the process know how many cores (potentially on different hosts) have been allocated to it. LSF can't prevent your program from doing something stupid (like for example, if multiple instances of it run at the same time, and one instance overwrites the other's output).
Some common ways of using LSF.
Run 6 sequential jobs that process one file each. These 6 can run in parallel. Have a dependant seventh job that runs after the previous 6 finish, which will combine the output of the previous 6 into a single output.
Run a parallel job that is assigned 6 cores on a single host. Seems that the python multiprocessing module would fit in well here. The env variable $LSB_MCPU_HOSTS will tell you how many cores are assigned to the job, so you know how big to make the pool.
Run a parallel job that is assigned 6 cores, and could run on multiple hosts. Again, your process must be able to start itself on these other hosts. (or use blaunch to help out)
I'm not sure which of these 3 ways best fits your needs. But I hope that the explanation helps you decide. | 1 | 3 | 0 | I have a single task to complete X number of times in Python and I will be using LSF to speed that up. Is it better to submit a job containing several Python scripts which can be run separately in parallel or one Python script that utilizes the multiprocessor module?
My issue is I don't trust LSF to know how to split up the Python code into several processes (I'm not sure how LSF does this). However, I also don't want several Python scripts floating around as that seems inefficient and disorganized.
The task at hand involves parsing six very large ASCII files and saving the output in a Python dict for later use. I want to parse the six files in parallel (they take about 3 minutes each). Does LSF allow Python to tell it something like "Hey, here's one script, but you're going to split it into these six processes"? Does LSF need Python to tell it that or does it already know how to do that?
Let me know if you need more info. I have trouble balancing between "just enough" and "too much" background. | LSF: Submit one Python script that uses multiprocessor module *or* submit several scripts at once that are "pre-split"? | 1.2 | 0 | 0 | 1,380 |
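For way 2 in the answer above, here is a sketch of sizing a multiprocessing pool from LSF's environment. It assumes $LSB_MCPU_HOSTS holds space-separated host/core-count pairs (e.g. "hostA 4 hostB 2"); the parser and the fallback value are illustrative:

```python
import os

def cores_from_lsf(value):
    """Sum the core counts out of an LSB_MCPU_HOSTS-style string."""
    fields = value.split()
    # every second field is a core count: "hostA 4 hostB 2" -> 4 + 2
    return sum(int(n) for n in fields[1::2])

pool_size = cores_from_lsf(os.environ.get("LSB_MCPU_HOSTS", "localhost 6"))
print(pool_size)  # 6 when run outside LSF (the fallback above)

# Then parse the six files in parallel, e.g.:
# from multiprocessing import Pool
# with Pool(pool_size) as pool:
#     results = pool.map(parse_one_file, filenames)
```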
23,898,363 | 2014-05-27T20:30:00.000 | 1 | 0 | 1 | 0 | python-asyncio | 47,181,879 | 2 | false | 1 | 0 | About case #2: Blocking code should be at least wrapped with .run_in_executor. | 1 | 7 | 0 | I'm trying to use a coroutine function outside of the event loop. (In this case, I want to call a function in Django that could also be used inside the event loop too)
There doesn't seem to be a way to do this without making the calling function a coroutine.
I realize that Django is built to be blocking and a therefore incompatible with asyncio. Though I think that this question might help people who are making the transition or using legacy code.
For that matter, it might help to understand async programming and why it doesn't work with blocking code. | How to interface blocking and non-blocking code with asyncio | 0.099668 | 0 | 0 | 1,630 |
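A minimal sketch of both directions of the interface (the function names and the 0.01 s sleep are placeholders): a coroutine hands blocking work to a thread via run_in_executor, and plain blocking code enters async land with asyncio.run():

```python
import asyncio
import time

def blocking_call():
    time.sleep(0.01)  # stands in for a blocking Django/ORM call
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking function in the default thread pool so the
    # event loop is not blocked while it sleeps.
    result = await loop.run_in_executor(None, blocking_call)
    return result

# Blocking entry point into the coroutine (Python 3.7+).
print(asyncio.run(main()))  # -> done
```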