Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
23,023,710 | 2014-04-11T22:36:00.000 | 1 | 0 | 1 | 0 | python,validation,anti-patterns | 23,024,788 | 2 | false | 0 | 0 | While Tom Dalton's answer is probably correct as far as the best design goes, it may be worth noting that import cycles often work just fine in Python.
The limitation though is that you need to use import my_module syntax and avoid top-level (global) code that uses the imported modules. Declaring functions (or classes with methods) that use the imported module is fine.
You usually run into trouble if you're using from my_module import obj or something similar, since this will only work if obj has already been defined in the other module. If that other module is in the process of importing your module, the class definition or global variable assignment may not have happened yet.
So for your specific case, an alternative solution may be to have your validate module use import my_class, then is_MyClass can do isinstance(input, my_class.MyClass). | 2 | 0 | 0 | I recently refactored my code to put input validation methods that are shared among several classes in their own module, validate.py. Some of these validation methods check if their input is an instance of a class, e.g. MyClass. Therefore validate.py must import MyClass so it's method is_MyClass can check if isinstance(input, MyClass). But, I want to use some validation methods from validate.py in MyClass to sanitize input to MyClass.my_method, so MyClass must import validate.py.
Something tells me I just casually refactored my way into an anti-pattern. If what I'm trying to do implies circular dependencies, then I must be Doing It Wrong™.
But, code reuse is a good idea. So what's the best practice for sharing validation methods in this way? | How to avoid circular dependencies in validation module | 0.099668 | 0 | 0 | 289 |
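A runnable sketch of the import-cycle-safe pattern this answer describes — the module names my_class and validate mirror the question and are written to a temp directory purely for illustration:

```python
import os
import sys
import tempfile
import textwrap

workdir = tempfile.mkdtemp()

# my_class.py imports validate, and validate.py imports my_class back.
# The cycle is harmless because both use "import module" syntax and only
# dereference the other module inside function bodies, at call time.
with open(os.path.join(workdir, "my_class.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import validate  # circular, but nothing from validate is used at import time

        class MyClass:
            def my_method(self, value):
                # validate is fully initialized by the time this runs
                return validate.is_MyClass(value)
    """))

with open(os.path.join(workdir, "validate.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import my_class  # note: NOT "from my_class import MyClass"

        def is_MyClass(obj):
            # my_class.MyClass is looked up lazily, at call time
            return isinstance(obj, my_class.MyClass)
    """))

sys.path.insert(0, workdir)
import my_class

obj = my_class.MyClass()
print(obj.my_method(obj))   # True
print(obj.my_method("no"))  # False
```

The key point is that neither module touches the other's names at import time, so the partially initialized module object in sys.modules is enough for the cycle to resolve.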
23,023,851 | 2014-04-11T22:51:00.000 | 3 | 0 | 0 | 0 | python,scipy,mathematical-optimization | 23,089,696 | 1 | true | 0 | 0 | I recently ran into the same problem with fmin_bfgs.
As far as I could see, the answer is negative. I didn't see a way to limit the stepsize.
My workaround was to first run Nelder-Mead fmin for some iterations, and then switch to fmin_bfgs. Once I was close enough to the optimum, the curvature of my function was much nicer and fmin_bfgs didn't have problems anymore.
In my case the problem was that the gradient of my function was very large at points further away from the optimum.
fmin_l_bfgs_b also works without constraints, and several users have reported reliable performance.
aside: If you are able to convert your case to a relatively simple test case, then you could post it to the scipy issue tracker so that a developer or contributor can look into it. | 1 | 0 | 1 | I'm trying to estimate a statistical model using MLE in python using the fmin_BFGS function in Scipy.Optimize and a numerically computed Hessian.
It is currently giving me the following warning: Desired error not necessarily achieved due to precision loss.
When I print the results of each evaluation, I see that the starting guess yields a reasonable log-likelihood. However, after a few guesses, the cost function jumps from ~230,000 to 9.5e+179.
Then it gives a runtime warning: RuntimeWarning: overflow encountered in double_scalars when trying to compute radical = B * B - 3 * A * C in the linesearch part of the routine.
I suspect that the algo is trying to estimate the cost function at a point that approaches an overflow. Is there a way to reduce the rate at which the algorithm changes parameter values to keep the function in a well-behaved region? (I would use the constrained BFGS routine but I don't have good priors over what the parameter values should be) | Restricting magnitude of change in guess in fmin_bfgs | 1.2 | 0 | 0 | 236 |
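scipy's fmin_bfgs itself exposes no step-size cap (as the answer above notes), but the idea of keeping the iterate in a well-behaved region can be sketched with a plain gradient-descent loop whose update norm is clamped. This is illustrative only, not a scipy API; the test function is made up:

```python
import math

def clamped_gradient_descent(grad, x0, lr=0.1, max_step=0.5, iters=200):
    """Gradient descent with a cap on the per-iteration step length.

    Not a scipy routine -- just a sketch of limiting how far a single
    update may move, so the cost function is never evaluated at a point
    that would overflow.
    """
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        step = [lr * gi for gi in g]
        norm = math.sqrt(sum(s * s for s in step))
        if norm > max_step:  # rescale an overly aggressive step
            step = [s * max_step / norm for s in step]
        x = [xi - si for xi, si in zip(x, step)]
    return x

# f(x, y) = x**4 + y**4: its gradient explodes far from the optimum at (0, 0),
# which is exactly the regime where an unclamped line search overflows.
grad = lambda p: [4 * p[0] ** 3, 4 * p[1] ** 3]
opt = clamped_gradient_descent(grad, [10.0, -10.0])
print(opt)
```

Starting at (10, -10) the raw step would be huge; the clamp walks the iterate back toward the optimum in bounded moves.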
23,027,988 | 2014-04-12T08:19:00.000 | 1 | 0 | 0 | 0 | python,firefox,selenium | 23,028,084 | 2 | false | 0 | 0 | Selenium cannot connect to an existing browser. It can only launch new instances. | 1 | 0 | 0 | I am new to selenium.
I found selenium would not use my local firefox browser. It seems to create a fresh one with no plugins.
But I want to do something with plugins on, such as modifying request headers or AutoProxy. I only found a header-setting example in Java. Though a proxy can be set by using webdriver.FirefoxProfile().set_preference('network.proxy.http', ...), it does not quite fit my aim.
So I think it would be very nice to make selenium use my firefox. But I can not figure it out. | How can I make Selenium use my firefox (not create a fresh one) | 0.099668 | 0 | 1 | 97 |
23,028,941 | 2014-04-12T10:07:00.000 | 3 | 0 | 0 | 1 | python,sockets,web,tornado | 23,031,157 | 1 | true | 0 | 0 | You can start multiple servers that share an IOLoop within the same process. Your HTTPServer could listen on one port, and the TCPServer could listen on another. | 1 | 2 | 0 | I know the httpserver module in tornado is implemented based on the tcpserver module, so I can write a socket server based on tornado. But how can I write a server that is both a socket server and a web server?
For example, if I want to implement a chat app. A user can either login through a browser or a client program. The browser user can send msg to the client user through the back-end server. So the back-end server is a web and socket server. | How to use tornado as both a socket server and web server? | 1.2 | 0 | 1 | 1,209 |
23,030,284 | 2014-04-12T12:19:00.000 | 1 | 0 | 0 | 0 | python,numpy,machine-learning,scikit-learn,recommendation-engine | 27,126,021 | 1 | false | 0 | 0 | I believe You can use centered cosine similarity /pearson corelation to make this work and make use of collaborative filtering technique to achieve this
Before you use pearson co -relation you need to fill the Null ( the fields which dont have any entries) with zero ,now pearson co relation centers the similarity matrix around zero ,which gives optimum recommendation . | 1 | 3 | 1 | I am trying to build a content-based recommender system in python/pandas/numpy/sklearn.
Here are the matrices involved and their sizes:
X: n_customers * n_features (contains the features of each customer)
Y: n_customers * n_products (contains the scores given by each customer to each product)
Theta: n_features * n_products
The aim is to learn Theta in order to be able to predict the score given by a customer to all products (X*Theta). Indeed, Y is a sparse matrix; a customer scores only a very small % of the whole range of products. This is why Y contains a lot of NaN values.
Here is my problem:
This is a regression problem with many targets (here target = product). But I want to do the regression only on non-null values. Because the number of NaNs differs from one product to another, how can I vectorize that?
Assume there are 1000 products and 100 000 customers, each one having 20 features.
For each product I need to do the regression on the non-null values. So without vectorization, I would need 1000 different regressors, each learning a Theta vector of length 20.
If possible I would like to solve this problem with sklearn. The ridge regression for example takes into account multiple targets (Y as a matrix)
I hope it's clear enough.
Thank you for your help. | Content based recommender system with sklearn or numpy | 0.197375 | 0 | 0 | 3,553 |
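A minimal stdlib sketch of the zero-fill-then-Pearson idea from the answer above; the ratings and the None-for-missing convention are made up for illustration:

```python
from math import sqrt

def pearson(u, v):
    """Pearson correlation of two equal-length rating vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [a - mu for a in u]
    dv = [b - mv for b in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = sqrt(sum(a * a for a in du)) * sqrt(sum(b * b for b in dv))
    return num / den if den else 0.0

def fill_missing(row):
    # None marks "this customer never scored this product"
    return [0.0 if r is None else float(r) for r in row]

alice = fill_missing([5, 3, None, 1])
bob = fill_missing([4, None, None, 1])
print(round(pearson(alice, bob), 3))
print(round(pearson(alice, alice), 3))  # self-similarity is 1.0
```

In a real pipeline the same computation would be vectorized with numpy/pandas over the whole Y matrix; this sketch only shows the per-pair arithmetic.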
23,031,149 | 2014-04-12T14:45:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,raspbian,pycrypto | 23,031,224 | 1 | false | 0 | 0 | Having looked into it, there does not seem to be a pycrypto version for python3 at the moment. I think your options are to look for an alternative package or to convert your code to python 2. There are tools available which can do this automatically, for example 3to2 is available in pip. | 1 | 0 | 0 | I'm trying to install pycrypto for python 3.x.x on raspberry pi
but when I run python setup.py install
from the command line, it is by default installed to Python 2.7.x.
I have installed python-dev, still with no luck. I have read that using pip might help, but unfortunately I don't know how to use it. All my code is written for Python 3.3.x and it would take me a very long time to rewrite it all for 2.7.
So how can I fix it without rewriting my code? | how to install python package in Raspbian? | 0 | 0 | 0 | 419
23,031,858 | 2014-04-12T14:45:00.000 | 1 | 0 | 1 | 0 | python,arrays,python-2.7 | 23,033,736 | 2 | true | 0 | 0 | Python itself can't do that out of the box, as in you can't say something like text.readlines() and have it work on each separate element. What you can do is read the text and use the split method of strings. For example, if the data is separated by commas, text.split(",") will return you a list that contains all the elements.
If words are separated by spaces only, you can just call the split() method without any arguments. | 1 | 0 | 0 | I have a txt file with about 5000 names("MARY","PATRICIA","LINDA",...). I want to put that names into a list in python. I searched for a solution, but I could only find some methodes to read txt files line by line or as one element, but I need to read it element by element. Is there a nice way to do that? | Read a list out of a txt element by element | 1.2 | 0 | 0 | 310 |
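For the name file from the question (reportedly `"MARY","PATRICIA","LINDA",...`), the stdlib csv module handles both the commas and the surrounding quotes; an in-memory file stands in for the real names.txt here:

```python
import csv
import io

# An in-memory file stands in for names.txt
fake_file = io.StringIO('"MARY","PATRICIA","LINDA","BARBARA"')
names = next(csv.reader(fake_file))
print(names)  # ['MARY', 'PATRICIA', 'LINDA', 'BARBARA']

# Plain split() also works if you strip the quotes yourself:
raw = '"MARY","PATRICIA","LINDA","BARBARA"'
names2 = [part.strip('"') for part in raw.split(",")]
print(names2 == names)  # True
```

With a real file you would use `with open("names.txt") as f: names = next(csv.reader(f))`.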
23,037,748 | 2014-04-13T00:25:00.000 | 0 | 0 | 1 | 0 | python,youtube-api,gdata | 23,088,868 | 1 | false | 0 | 0 | There used to be a large selection of things in the past, but they are all gone now. So I do not think it is possible any more. | 1 | 1 | 0 | Is there a way to pull traffic information from a particular youtube video (demographics for example, age of users, country, gender, etc.), say using the python gdata module or the youtube API? I have been looking around the module's documentation, but so far nothing. | YouTube API retrieve demographic information about a video | 0 | 0 | 1 | 182 |
23,038,209 | 2014-04-13T01:48:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,enthought,leap-motion | 24,131,195 | 2 | false | 0 | 0 | Try this:
Put the four files into one folder.
Right click on the Sample.py until it says "Open with" and gives some choices.
Select Python Launcher.app (2.7.6). # This version of Python Launcher must match the Mac's built-in Python version.
If your version of LeapPython.so is constructed correctly, it should run. | 1 | 0 | 0 | I am trying to install the leap motion sdk into Enthought Canopy. The page called Hello World on leap motion mentions I need to put these four files:
Sample.py, Leap.py, LeapPython.so and libLeap.dylib
into my "current directory". I don't know how to find my current directory. I have tried several things including typing into terminal "python Sample.py" which tells me:
/Users/myname/Library/Enthought/Canopy_64bit/User/Resources/Python.app/Contents/MacOS/Python: can't open file 'Sample.py': [Errno 2] No such file or directory
I've tried to put the 4 files in the MacOS file, but it still gives me this error. Any suggestions would be greatly appreciated. | Installing Leap Motion sdk into Enthought SDK | 0 | 0 | 0 | 671 |
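On the "current directory" point: a short stdlib sketch of how to find it, and how to switch to the directory containing the running script (where the four files would live):

```python
import os

# The "current directory" is wherever the interpreter was launched from:
cwd = os.getcwd()
print(cwd)

# The directory containing the running script (where Sample.py, Leap.py,
# LeapPython.so and libLeap.dylib would sit) is usually what you want:
script_dir = (os.path.dirname(os.path.abspath(__file__))
              if "__file__" in globals() else cwd)

os.chdir(script_dir)  # make it the current directory before loading files
```

After the chdir, `python Sample.py` style relative references resolve next to the script instead of wherever the terminal happened to be.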
23,038,760 | 2014-04-13T03:25:00.000 | 3 | 0 | 1 | 0 | python,sorting | 23,038,786 | 1 | true | 0 | 0 | Is creating a sorting algorithm a common task for a professional developer?
No. It's good to be able to do it, but most of the time, you'll just use sorts other people already wrote.
On what tasks do developers need to create a sorting algorithm?
If you're providing a sorting routine for other people to use, you may need to implement it yourself. For example, Python's list.sort. Alternatively, if the standard sorts don't provide some property or capability you need, you may need to write your own.
what are the advantages of an in-place sorting algorithm?
Low extra memory usage. Sometimes we care about that; usually we don't. | 1 | 0 | 1 | I am new to programming.
Is creating a sorting algorithm a common task for a professional developer?
On what tasks do developers need to create a sorting algorithm?
And finally, what are the advantages of an in-place sorting algorithm?
Any help is appreciated!! | Is in-place sorting algorithm always faster? What are the advantages of in-place sorting? Python | 1.2 | 0 | 0 | 330 |
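The in-place vs. copying distinction shows up directly in Python's own built-ins, as a quick illustration:

```python
data = [3, 1, 2]

copy_sorted = sorted(data)   # allocates a brand-new list
print(data)                  # [3, 1, 2] -- original untouched

result = data.sort()         # in-place: no second list is allocated
print(data)                  # [1, 2, 3]
print(result)                # None -- an in-place sort returns nothing
```

The memory saving of `list.sort()` over `sorted()` is exactly the "low extra memory usage" advantage mentioned in the answer.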
23,040,983 | 2014-04-13T09:22:00.000 | 1 | 0 | 0 | 0 | python,django | 23,041,136 | 1 | true | 1 | 0 | I guess you'll have to roll your own, but it really doesn't sound hard.
Is there a reason you'd like to do it in Django? Django is not a simple CMS that you just install and enable some features to make it work. It's a framework, which means you'll have to do some things yourself. And doing this yourself, if you're at least a bit proficient, shouldn't be that hard.
Let's say you have a site / an app written in django and want to implement private messages. All you'd need are two models, User and Messages, and to save your private message to Messages with two foreign keys for sender and receiver.
You'll have to be more specific with your question to get more specific answers. | 1 | 0 | 0 | I'm trying to develop an HTML app with Django using python 3.3.3, and was wondering if there was a simple way to implement a user - to - user private messaging system. I've searched for preexisting apps, but most are out of active development, and other online answers were mostly not useful at all. If possible, I would like it so there are no external dependencies. If there is a simple way to implement this function I would love to know. Thanks. | Adding private messaging to a Django project | 1.2 | 0 | 0 | 225 |
23,041,079 | 2014-04-13T09:31:00.000 | 0 | 1 | 0 | 1 | python,shell,dreamhost,pycrypto | 23,041,133 | 1 | false | 0 | 0 | Your web server does not read your .bash_profile. | 1 | 0 | 0 | I have a Python script on my Dreamhost shared server. When I access my script via SSH (using the UNIX Shell) my script executes fine and is able to import the Pycrypto module Crypto.Cipher.
But if I access my script via HTTP using my websites url. The script fails when it goes to import the Pycrypto module Crypto.Cipher. It gives the error ImportError: No module named Crypto.Cipher.
Do you know what might be causing this weird error? And how I can fix it.
Some important information:
- I have installed a custom version of python on my shared server. Its just Python 2.7 with Pycrypto and easy_install installed.
- I am certain that the script is running under Python 2.7 and not Dreamhosts default 2.6 version. I know this because the script prints sys.version_info(major=2, minor=7, micro=0, releaselevel='final', serial=0) both in the UNIX shell and HTTP.
- I installed Pycrypto manually (using tar, and running setup.py) as opposed to using easy_install or pip.
- I have edited my .bash_profile's PATH variable correctly (well I believe I have done it correctly because the script is run under Python 2.7 not 2.6).
Any advice would be extremely helpful. | Shared Server: Python Script run under UNIX Shell vs HTTP | 0 | 0 | 0 | 132 |
23,045,524 | 2014-04-13T16:53:00.000 | 2 | 0 | 1 | 1 | python,multithreading,exit | 23,045,672 | 1 | true | 0 | 0 | That depends on what you mean by "orderly".
Even if you don't have any non-daemonic threads, if you call sys.exit() from main thread, the other threads will not complete in an "orderly" fashion. There's no guarantee they will clean up after themselves.
The only really clean way to do it is for the main thread to signal the other threads they should complete and abort (e.g. by setting a flag or an Event which they check periodically), wait for them to complete (by joining them), and then return from its main function. | 1 | 2 | 0 | The obvious ways I can think of to make a Python environment exit, is either sys.exit(), or os._exit(). However, sys.exit() doesn't work outside the main thread, and os._exit() doesn't run shutdown handlers (e.g. those registered via atexit.register). Also, when there are non-daemon threads running, just exiting the main thread (as might be effected through thread.interrupt_main, for instance) won't make the rest of the environment shut down, either.
Is there a way to make Python exit from a thread other than the main thread, in a way that runs the shutdown handlers? | Shutting down a Python environment orderly | 1.2 | 0 | 0 | 66 |
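A minimal sketch of the Event-flag-plus-join shutdown the answer describes (the worker and its "cleanup" step are hypothetical):

```python
import threading

stop = threading.Event()
log = []

def worker():
    while not stop.is_set():   # periodically check the shutdown flag
        stop.wait(0.01)        # stand-in for a unit of real work
    log.append("cleaned up")   # the thread gets to tidy up after itself

t = threading.Thread(target=worker)
t.start()

stop.set()   # any thread may signal shutdown
t.join()     # main thread waits for orderly completion, then can return
print(log)   # ['cleaned up']
```

Because the main thread returns normally after the join, atexit handlers registered elsewhere still run, which is the property os._exit() lacks.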
23,045,695 | 2014-04-13T17:09:00.000 | 0 | 0 | 0 | 0 | python,python-imaging-library | 50,678,634 | 2 | false | 0 | 0 | To convert the available RGB image to HSI format (Hue, Saturation, Intensity), you can make use of the CV_RGB2HSI function available in the openCV docs. | 1 | 5 | 1 | I want to convert an RGB image to HSI. I found a lot of inbuilt functions like rgb_to_hsv, rgb_to_hls, etc. Is there any function for conversion from RGB to HSI color model in python?? | RGB to HSI function in python | 0 | 0 | 0 | 3,209 |
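Since the stdlib colorsys module only provides rgb_to_hsv and rgb_to_hls, HSI has to be computed by hand; a sketch using the common textbook HSI formulas (one HSI convention among several in use):

```python
import math

def rgb_to_hsi(r, g, b):
    """RGB components in [0, 1] -> (H in radians, S, I), using the common
    textbook HSI formulas."""
    i = (r + g + b) / 3.0
    if i == 0.0:
        return 0.0, 0.0, 0.0          # black: hue/saturation undefined
    s = 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0.0:
        h = 0.0                       # achromatic (r == g == b)
    else:
        h = math.acos(max(-1.0, min(1.0, num / den)))
        if b > g:                     # hue covers the full circle
            h = 2.0 * math.pi - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))      # pure red: hue 0, fully saturated
```

For a full image you would apply this per pixel (or vectorize with numpy); pixel values from PIL need dividing by 255 first.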
23,045,812 | 2014-04-13T17:19:00.000 | 1 | 1 | 0 | 0 | python,serial-port,arduino | 23,051,290 | 1 | false | 0 | 0 | It was a buffer issue on the Arduino side. There was a line that kept printing a blank character out for every character it read in, causing the buffer to overflow. I removed that line and it's working fine now. | 1 | 0 | 0 | I have a script that utilizes OpenCV to track an object and communicate the location to an arduino. Essentially all it's doing is passing an integer to the arduino and the arduino interprets the integer as left/middle/right and turns on the appropriate LED. It works fine for ~30 seconds after which CPU usage jumps to 95%+ and the process begins to lag like crazy. If I remove the ser.write command and print left/middle/right to terminal then it runs fine. What might be getting backed up causing the high CPU usage? I've tried different baud rates and there is a 0.01 second delay after each ser.write command. | Serial communication in python causing higher CPU usage over time | 0.197375 | 0 | 0 | 285 |
23,048,756 | 2014-04-13T21:39:00.000 | 4 | 0 | 1 | 1 | python,terminal | 23,048,869 | 6 | false | 0 | 0 | Sounds like you have python 2 and 3 installed and your pythonpath is pointed at python 2, so unless specified it uses that version. If you are using python I would suggest setting up a virtual environment (virtualenv) for each project, which means you could run whatever version you'd like in that project and keep all dependencies contained. | 3 | 22 | 0 | I'm just starting to learn Python and did search around a little, so forgive me if this has been asked and answered.
When running scripts through the command line/terminal, I have to type "python3" to run the latest version of Python. With Python 2.X I just use "python".
Is there a way to run Python 3 just using "python"?
It may seem a little lazy, but I'm mostly just curious if it is possible or if it will break anything unnecessarily if I could in fact do it. | How can I make the "python" command in terminal, run python3 instead of python2? | 0.132549 | 0 | 0 | 75,788 |
23,048,756 | 2014-04-13T21:39:00.000 | 0 | 0 | 1 | 1 | python,terminal | 41,886,126 | 6 | false | 0 | 0 | On Raspbian Linux, in the terminal I just run it by typing python3 file.py, or just python file.py for Python 2 | 3 | 22 | 0 | I'm just starting to learn Python and did search around a little, so forgive me if this has been asked and answered.
When running scripts through the command line/terminal, I have to type "python3" to run the latest version of Python. With Python 2.X I just use "python".
Is there a way to run Python 3 just using "python"?
It may seem a little lazy, but I'm mostly just curious if it is possible or if it will break anything unnecessarily if I could in fact do it. | How can I make the "python" command in terminal, run python3 instead of python2? | 0 | 0 | 0 | 75,788 |
23,048,756 | 2014-04-13T21:39:00.000 | 14 | 0 | 1 | 1 | python,terminal | 32,158,988 | 6 | false | 0 | 0 | If you are using Linux, add the following into into ~/.bashrc
alias python=python3
Restart the shell and type python and python3 should start instead of python2. | 3 | 22 | 0 | I'm just starting to learn Python and did search around a little, so forgive me if this has been asked and answered.
When running scripts through the command line/terminal, I have to type "python3" to run the latest version of Python. With Python 2.X I just use "python".
Is there a way to run Python 3 just using "python"?
It may seem a little lazy, but I'm mostly just curious if it is possible or if it will break anything unnecessarily if I could in fact do it. | How can I make the "python" command in terminal, run python3 instead of python2? | 1 | 0 | 0 | 75,788 |
23,050,106 | 2014-04-14T00:23:00.000 | -3 | 1 | 1 | 0 | python,c++,matlab | 23,050,146 | 3 | false | 0 | 0 | Python is great for firmware programming, like with Arduinos. C++ is great and very powerful for programming software & applications. If you want to program hardware, go with Python. If you want to program software, go with C++. I'm learning C++ and it's great. | 1 | 0 | 0 | I’m considering learning Python with the idea of letting go of MatLab, although I really like MatLab. However, I’m concerned that getting all of the moving and independent pieces to fit together may be a challenge and one that may not be worth it in the end. I’ve also thought about getting into Visual Basic or Visual C++. In the end, I keep coming back to the ease of MatLab. Any thoughts or comments regarding the difficulty of getting going in Python? Is it worth it? | MatLab user thinking of learning Python | -0.197375 | 0 | 0 | 320 |
23,056,486 | 2014-04-14T09:25:00.000 | 0 | 0 | 1 | 0 | python,multithreading,class | 23,059,173 | 1 | false | 0 | 0 | You do not need multiple threads for this. Instead, you can use select() to wait for data to read (from the keyboard) on STDIN_FILENO with a timeout (upon which you can advance to the next "frame" of animation). | 1 | 0 | 0 | I'm making a spelling word game in which words fall from the sky and, if a player types in the correct word, the word disappears.
I've successfully realized it with the ncurses library and self-defined functions.
Now I need to rewrite it with one self-defined class.
But it's really hard to figure out how to realize it with only one class in which two threads are needed: one for the words falling from the sky, and another for the player typing.
Do you have any ideas how to realize it? | How to make a class that can run multi-threading program? | 0 | 0 | 0 | 29 |
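A sketch of the single-threaded select()-with-timeout loop the answer suggests; the input file descriptor is a pipe standing in for sys.stdin.fileno() so the example is self-contained:

```python
import os
import select

def run_frames(fd, frames):
    """Single-threaded game loop: wait up to 50 ms for typed input,
    otherwise advance the animation.  In the real game fd would be
    sys.stdin.fileno(); a pipe stands in here."""
    typed = []
    for _ in range(frames):
        ready, _w, _x = select.select([fd], [], [], 0.05)
        if ready:
            typed.append(os.read(fd, 1024).decode())
        # else: move the falling words down one row here
    return typed

r, w = os.pipe()
os.write(w, b"hello")
typed = run_frames(r, 3)   # frame 1 sees "hello"; frames 2-3 just time out
os.close(r)
os.close(w)
print(typed)
```

This keeps falling-word animation and keystroke handling in one class (and one thread): the timeout is the frame clock, and a ready fd means the player typed something.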
23,057,631 | 2014-04-14T10:21:00.000 | 8 | 0 | 0 | 0 | python,beautifulsoup | 27,881,018 | 3 | false | 1 | 0 | It may not be the fastest solution, but it is short and seems to work...
clonedtag = BeautifulSoup(str(sourcetag)).body.contents[0]
BeautifulSoup creates an extra <html><body>...</body></html> around the cloned tag (in order to make the "soup" a sane html document). .body.contents[0] removes those wrapping tags.
This idea was derived from Peter Woods' comment above and Clemens Klein-Robbenhaar's comment below.
If I use .extract() it removes the element from the tree. If I just append selected element like document2.append(document1.tag) it still removes the element from document1.
As I use real files I can just not save document1 after modification, but is there any way to do this without corrupting a document? | clone element with beautifulsoup | 1 | 0 | 1 | 10,039 |
23,064,886 | 2014-04-14T16:06:00.000 | 0 | 0 | 0 | 0 | python-2.7,physics,curve-fitting,differential-equations | 23,065,974 | 2 | false | 0 | 0 | Certainly you intend to have the third derivative on the right.
Group your data in relatively small bins, possibly overlapping. For each bin, compute a cubic approximation of the data. From that compute the derivatives in the center point of the group. With the derivatives of all groups you now have a classical linear regression problem.
If the samples are equally spaced, you might try to move the problem into frequency space via FFT. A sensible truncation of the data might be a problem here. In the frequency space, the task reduces to a polynomial linear regression. | 1 | 2 | 1 | I have a curve of >1000 points that I would like to fit to a differential equation in the form of x'' = (a*x'' + b x' + c x + d), where a,b,c,d are constants. How would I proceed in doing this using Python 2.7? | Curve fitting differential equations in Python | 0 | 0 | 0 | 756 |
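A stdlib sketch of the final regression step the answer describes, assuming derivative samples are already available (in practice they would come from the per-bin cubic fits); the synthetic test function and sample grid are made up for illustration:

```python
import math

def fit_third_order(x, dx, d2x, d3x):
    """Least-squares fit of x''' = a*x'' + b*x' + c*x + d from sampled
    derivative values, via normal equations and Gaussian elimination."""
    rows = [[d2x[k], dx[k], x[k], 1.0] for k in range(len(x))]
    # build A^T A and A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    aty = [sum(rows[k][i] * d3x[k] for k in range(len(rows))) for i in range(4)]
    for col in range(4):                       # forward elimination, partial pivoting
        piv = max(range(col, 4), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, 4):
            f = ata[r][col] / ata[col][col]
            for c2 in range(col, 4):
                ata[r][c2] -= f * ata[col][c2]
            aty[r] -= f * aty[col]
    theta = [0.0] * 4
    for r in range(3, -1, -1):                 # back substitution
        theta[r] = (aty[r] - sum(ata[r][c2] * theta[c2]
                                 for c2 in range(r + 1, 4))) / ata[r][r]
    return theta

# synthetic samples from x(t) = e^t + e^-t + e^-2t, which satisfies
# x''' = -2*x'' + x' + 2*x  (characteristic roots 1, -1, -2)
ts = [k * 0.05 for k in range(40)]
x   = [math.exp(t) + math.exp(-t) + math.exp(-2 * t) for t in ts]
dx  = [math.exp(t) - math.exp(-t) - 2 * math.exp(-2 * t) for t in ts]
d2x = [math.exp(t) + math.exp(-t) + 4 * math.exp(-2 * t) for t in ts]
d3x = [math.exp(t) - math.exp(-t) - 8 * math.exp(-2 * t) for t in ts]
a, b, c, d = fit_third_order(x, dx, d2x, d3x)
print(round(a, 4), round(b, 4), round(c, 4), round(d, 4))
```

With >1000 real points you would swap the hand-rolled solve for numpy.linalg.lstsq, but the structure — derivative estimates as regressors, the third derivative as the target — is the same.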
23,066,353 | 2014-04-14T17:23:00.000 | 1 | 1 | 1 | 0 | python,unicode | 23,066,522 | 3 | false | 0 | 0 | You should do one of two things (at least):
Add a hook to your repository making it verify on checkin that all python files are still pure ASCII.
Put the explicit ASCII-encoding tag in the files.
You might want to check if you get significantly better startup when the explicit tag is UTF-8 though. Anyway, I would consider that a bug of the interpreter.
This way, if anyone slips and mistakenly adds some non-ASCII characters, you won't have to chase that (potential) bug. Explicitly restricting to ASCII has one advantage: You actually can reliably see what each string contains and there are no equal-seeming distinct names. | 1 | 0 | 0 | We have a large project that is entirely coded in ASCII. Is it worth putting coding statements at the beginning of each source file (e.g. #coding=utf-8) for some reason if the source doesn't have any unicode in it?
Thanks,
--Peter | If your source is ASCII, should you specify coding? | 0.066568 | 0 | 0 | 90 |
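The core of the suggested check-in hook might look like this (the pre-commit wiring itself is left out; the temp file is just for demonstration):

```python
import os
import tempfile

def non_ascii_lines(path):
    """Return (line_number, raw_line) pairs that contain non-ASCII bytes --
    the check a commit hook would run over every .py file."""
    bad = []
    with open(path, "rb") as f:
        for n, line in enumerate(f, 1):
            try:
                line.decode("ascii")
            except UnicodeDecodeError:
                bad.append((n, line))
    return bad

# demonstration on a throwaway file containing one UTF-8 line
fd, path = tempfile.mkstemp(suffix=".py")
os.write(fd, b"x = 1\n# caf\xc3\xa9\n")
os.close(fd)
offenders = non_ascii_lines(path)
print(offenders)   # flags line 2 only
os.unlink(path)
```

A hook would fail the commit (non-zero exit) whenever the returned list is non-empty.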
23,068,670 | 2014-04-14T19:32:00.000 | 0 | 0 | 0 | 0 | python,openpyxl | 23,135,653 | 1 | false | 0 | 0 | Look at the HeaderFooter class in the worksheet section of the code. | 1 | 0 | 0 | How do you set the header/footer height in openpyxl? I don't find a setting for it. In a spreadsheet there are settings for height and autofit height but I don't see a means for setting either of those in openpyxl. | how to set the header/footer height in openpyxl | 0 | 1 | 0 | 563 |
23,070,922 | 2014-04-14T21:37:00.000 | 0 | 1 | 1 | 1 | python,chipmunk,pymunk | 23,200,199 | 1 | true | 0 | 0 | Try and go to the folder where setup.py is first and then do python setup.py install. As you have noticed, it assumes that you run it from the same folder as where it's located. | 1 | 1 | 0 | I have downloaded pymunk module on my computer. When I typed in "python setup.py install" in terminal, it says "no such file or directory", then I typed in the complete path of setup.py instead of setup.py, and it still could not run since the links to other files in the code of setup.py are not complete paths. (Like README.txt — terminal said "no such file or directory".) Sorry, I'm a python newbie. Can someone tell me how I can fix it?
Thanks!!!! | Compile pymunk on mac OS X | 1.2 | 0 | 0 | 118 |
23,070,926 | 2014-04-14T21:37:00.000 | 2 | 0 | 0 | 0 | python,user-interface,tkinter | 23,071,223 | 2 | false | 0 | 1 | If your main goal is to solidify your python knowledge, I would recommend Tkinter. It's simpler and it's already installed with Python.
If you want to build complex applications, I recommend PyQt, which is way more powerful. | 1 | 2 | 0 | I'm trying to solidify my python knowledge by doing some gui development, should I try Tkinter or jump directly to PyQT for better IDE support? | Tkinter first or PyQt? | 0.197375 | 0 | 0 | 2,947 |
23,070,947 | 2014-04-14T21:39:00.000 | 0 | 0 | 0 | 0 | python,google-app-engine,webapp2,gql | 23,071,550 | 2 | false | 0 | 0 | Could you make the number a separate field? Then you won't have to search for prefix. | 1 | 0 | 0 | I'm trying to stop duplicates of a database entity called "Post" in my program. In order to do this I want to append a number like "1" or "2" next to it. In other words:
helloworld
helloworld1
helloworld2
In order to do this I need to query the database for postid's starting with helloworld. I read that GQL doesn't support the LIKE operation, so what can I do instead?
Thanks! | How to look for a substring GQL? | 0 | 1 | 0 | 117 |
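A commonly cited workaround for prefix matches in GQL (assuming an indexed string property) is a range filter — WHERE postid >= :prefix AND postid < :prefix + a high sentinel character — rather than LIKE. The same half-open-range idea, demonstrated on a sorted in-memory list with bisect:

```python
import bisect

def prefix_matches(sorted_ids, prefix):
    """All ids starting with `prefix`, via a half-open range scan --
    the same trick as a GQL filter  postid >= :p AND postid < :p + sentinel."""
    lo = bisect.bisect_left(sorted_ids, prefix)
    hi = bisect.bisect_left(sorted_ids, prefix + "\ufffd")  # sorts after any suffix
    return sorted_ids[lo:hi]

ids = sorted(["helloworld", "helloworld1", "helloworld2", "hellx", "other"])
print(prefix_matches(ids, "helloworld"))
```

The range scan works because an index (like a sorted list) keeps all strings sharing a prefix contiguous.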
23,072,424 | 2014-04-15T00:01:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 35,249,523 | 4 | false | 0 | 0 | Although other people's answers cover the deep-vs-shallow semantics far better, I would mention that it's generally better style (in my opinion) to use:
a = list(b)
rather than
a = b[:]
as it's a little more explicit. | 2 | 0 | 0 | how would I create a non destructive copy of a list using b = a[:] ? | Non destructive copy of list | 0 | 0 | 0 | 1,127 |
23,072,424 | 2014-04-15T00:01:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 23,072,486 | 4 | false | 0 | 0 | list_[:] does a copy of the first level list, not the 2nd level lists within it.
To get both (all, really) levels, try copy.deepcopy(list_). | 2 | 0 | 0 | how would I create a non destructive copy of a list using b = a[:] ? | Non destructive copy of list | 0.049958 | 0 | 0 | 1,127 |
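The shallow-vs-deep difference in one runnable snippet:

```python
import copy

a = [[1, 2], [3, 4]]
shallow = a[:]            # same effect as list(a): new outer list, shared inner lists
deep = copy.deepcopy(a)   # new lists at every level

a[0].append(99)           # mutate a nested list through the original
print(shallow[0])         # [1, 2, 99] -- the shallow copy sees the change
print(deep[0])            # [1, 2]     -- the deep copy does not
```

So `b = a[:]` is "non-destructive" only for the top level; nested mutable items are still shared.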
23,075,224 | 2014-04-15T05:21:00.000 | 0 | 0 | 0 | 0 | python,http | 23,190,634 | 1 | true | 0 | 0 |
import requests
url = "192.168.6.x:8089/test/request?"
files = """ """
r = requests.post(url, str(files))
print(r.text)
print(r.elapsed)
| 1 | 0 | 0 | How do i send xml file to http request(to web url) in python 2.7 or 3.3 or 3.4 and what are the packages that need to be installed in ubuntu.. | How do i send xml file to http request in python and what are the packages that need to be installed in ubuntu | 1.2 | 0 | 1 | 168 |
23,076,483 | 2014-04-15T06:51:00.000 | 13 | 0 | 0 | 0 | python,sqlalchemy | 23,080,688 | 1 | true | 0 | 0 | Simple answer: call session.flush(). | 1 | 9 | 0 | I need ID of an object which I've just added using session.add(). I need the auto-increment ID of this object before committing session.
If I call instance.id on the object, I get None.
Is there a way to get the ID of an object without committing? | SQLAlchemy get added object's id without committing session | 1.2 | 1 | 0 | 6,464 |
23,077,652 | 2014-04-15T07:52:00.000 | 0 | 0 | 1 | 0 | python | 23,077,796 | 2 | false | 0 | 0 | I won't give you the solution, but I can give you a hint: consider doing an equality comparison between each pair of items. For example, if 'a' is the solution and 'b' is the student's answer, a == b would yield True if the two values are the same, i.e. the answer is correct.
On the other hand, the value would be 'false' if the answer is wrong. In this case... (you can finish the statement) | 2 | 1 | 0 | I'm a computer science major at UT and I'm taking a Python class this semester. The professor gave us a lab assignment I've been trying to figure out for the last 3 hours and gotten nowhere. So here I am asking for any help that you can offer. Here is the question:
This program compares two parallel lists to grade a multiple choice
exam. One list has the exam solution and the second list has a
student's answers. The question number of each missed question is
stored in a third list.
You must use the three lists provided in your solution.
Your solution must use indexing.
Do not write any other user-defined functions
Write all your code in the main function.
You may not embed Python programming statements inside list brackets [ ]
I know I need to use a for loop in order to populate the third list, but I can't get down how to compare the two initial lists to have only the wrong answers populate the third list.
Any help would be greatly appreciated, thanks in advance! | How do I compare parallel string lists in Python | 0 | 0 | 0 | 421 |
23,077,652 | 2014-04-15T07:52:00.000 | 0 | 0 | 1 | 0 | python | 23,078,567 | 2 | false | 0 | 0 | Is this not very straightforward and the constraint to use indexing make it a simple for loop?
You have three lists, es (exam solutions), sa (student answers) and mq (missed questions). All you need to do is run a loop on index variable i ranging over length of es, compare es value with sa value to calculate score and append i to mq if the value in sa is '' or null or whatever signifies a missed question in sa.
Sorry if I missed or misunderstood something here. | 2 | 1 | 0 | I'm a computer science major at UT and I'm taking a Python class this semester. The professor gave us a lab assignment I've been trying to figure out for the last 3 hours and gotten nowhere. So here I am asking for any help that you can offer. Here is the question:
This program compares two parallel lists to grade a multiple choice
exam. One list has the exam solution and the second list has a
student's answers. The question number of each missed question is
stored in a third list.
You must use the three lists provided in your solution.
Your solution must use indexing.
Do not write any other user-defined functions
Write all your code in the main function.
You may not embed Python programming statements inside list brackets [ ]
I know I need to use a for loop in order to populate the third list, but I can't get down how to compare the two initial lists to have only the wrong answers populate the third list.
Any help would be greatly appreciated, thanks in advance! | How do I compare parallel string lists in Python | 0 | 0 | 0 | 421 |
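A sketch of the indexing loop both answers describe (the list contents here are made up):

```python
def grade_exam(solution, answers):
    """Return the numbers (1-based) of the questions the student missed."""
    missed = []
    for i in range(len(solution)):       # indexing, as the assignment requires
        if answers[i] != solution[i]:
            missed.append(i + 1)         # store the question number
    return missed

solution = ["A", "C", "B", "D"]   # made-up exam key
answers  = ["A", "B", "B", "C"]   # made-up student answers
print(grade_exam(solution, answers))  # [2, 4]
```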
23,077,972 | 2014-04-15T08:10:00.000 | 1 | 0 | 0 | 0 | python,postgresql,mod-wsgi | 23,084,850 | 1 | false | 0 | 0 | Query is of about 1.4 million lines and it crashes on query.all().
When you say it crashes: Do you mean the python client executable, or the PostgreSQL server?
I strongly suspect the crash is in Python. I'd say you're reading all results into memory at once, and they just don't fit.
What you will need to do is progressively read the query results, process them, and discard them from memory. In psycopg2 you do this by iterating over the cursor object, or using cursor.fetchone(). Pylons should offer similar methods. | 1 | 0 | 0 | Using postgresql9.1.9 in pylons1.0 project with mod_wsgi.
Getting "out of memory error".
Query is of about 1.4 million lines and it crashes on query.all().
The column used for filtering is indexed.
In postgresql.conf, shared_buffers=24MB, max_connections=100.
Can you please suggest the work around? | MemoryError for postgresql 9.1.9 | 0.197375 | 1 | 0 | 292 |
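The progressive-read pattern from the answer, sketched with the stdlib sqlite3 driver since a live PostgreSQL server can't be shown here - with psycopg2 the loop over the cursor looks the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO readings (value) VALUES (?)",
                 [(i * 0.5,) for i in range(10000)])

total = 0.0
cur = conn.execute("SELECT value FROM readings")
for (value,) in cur:        # rows are fetched progressively, not all at once
    total += value          # process each row, then let it be discarded
print(total)
```

The key point is that nothing ever holds all 1.4 million rows at once; memory use stays flat regardless of result size.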
23,083,599 | 2014-04-15T12:21:00.000 | 0 | 0 | 0 | 0 | python,django,sqlite | 49,475,275 | 2 | false | 1 | 0 | There is an easy way to do this. (in Django 2)
After making the necessary changes to the models.py file of your app, run the command:
python manage.py makemigrations - This will generate a new file in migration folder of your app.
python manage.py migrate - This will apply those edits to the actual database.
To check if the changes have been applied, run command : .schema <tablename> in your terminal, after entering the sqlite command-line program. | 1 | 1 | 0 | I created a table before I code the Django app and now I merged both the app and the table with following command python manage.py inspectdb > models.py. However after some while I really need to change the value type of one of the column. Is it enough to chage it through the model file or do I need some additional steps? | How to change a sqlite table column value type from Django model? | 0 | 0 | 0 | 1,433 |
23,084,714 | 2014-04-15T13:09:00.000 | 0 | 0 | 1 | 0 | debugging,python-3.x,komodo | 23,084,715 | 1 | false | 0 | 0 | Debugging in a Python3 Virtual Environment does not work yet in the current Komodo IDE version on Windows.
My experience is, that even if the path is properly set on debug sessions in the project settings, those are ignored in the debug session and even worse, Komodo uses c:\Python27 which should not be visible in the session.
Example:
I have c:\python27 and c:\python33 on the system
My Python 3 VENV is in c:\project\demo and I added the python 3 exe from c:\project\demo\scripts\python.exe to the project settings
I also set the environment variables PYTHONPATH and PATH to
c:\project\demo\scripts\ respectively c:\project\demo\ and c:\project\demo\mylib
Also c:\project\demo\mylib is added to the python3 search paths.
I can run my tests in the VENV from the console without any problem, using exactly the configuration mentioned.
Now I open the file with the test in the Komodo project and run / debug it.
The result is that the debug session cannot import from c:\project\demo\mylib. When I print the environment in the debug session, it shows me that it is set to c:\Python27
I wonder why all those settings exist if the debug session does not use them. | 1 | 0 | 0 | I was trying to develop and debug Python 3 applications in a virtual environment with Komodo IDE.
I applied all the project settings, tried out my tests and did a lot of research using google, stackoverflow and also tried searching the Komodo forums for "virtual environment" or "venv".
My impression is that this cannot be done at present. Or is there a way?
How can debugging in a Python3 Virtual Environment be done in the current commercial Komodo IDE version on windows? | Develop and debug Python 3 applications in a virtual environment with Komodo IDE 8 | 0 | 0 | 0 | 215 |
23,085,308 | 2014-04-15T13:34:00.000 | 2 | 0 | 0 | 1 | python,colors,ncurses,curses,xterm | 23,091,381 | 1 | true | 0 | 0 | can_change_color() actually reports whether colors can be remapped, via init_color() -- an uncommon capability -- not whether colors can be used at all, via init_pair(). To check for that basic color capability, what you want is has_colors().
init_color(), on the terminals where it works, lets you do things like tweak the exact shade of blue used -- or make the terminal's idea of "blue" show up as something else entirely. | 1 | 2 | 0 | I wrote a little-more-than-throwaway monitoring script in Python which uses ncurses and color to display some values which update frequently, but which are hardly ever of interest. To alert me to significant changes, I set things up so that when these values get into the realm of being interesting, the text changes from black-on-white to white-on-red. This works fine on my Linux (openSuSE 12.2) box, but on Solaris 10 curses.can_change_color() always returns False, no matter what I have tried. On both platforms, I am using the same version of Python (2.7.2) and ncurses (5.7). I have a number of terminal emulators available to me (gnome-terminal, xterm, rxvt). All are capable of displaying my shell prompt in red, so I know they support color. I've tried setting TERM to a number of xterm variants, including xtermc, xterm-color, rxvt, rxvt-16color. Some of those terminal names aren't in the default location, so I occasionally also have to set TERMINFO to point at a terminfo capability database. I'm thus sure the entries I desire are found.
The Python curses.can_change_color() function is just a thin wrapper around the ncurses library routine of the same name. Why is it always returning False? | Curses can_change_color() always returns False | 1.2 | 0 | 0 | 701 |
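As a hedged illustration of the distinction in the answer: terminfo can be queried without taking over the screen, assuming a terminfo database is installed:

```python
import curses
import os

os.environ.setdefault("TERM", "xterm")   # assume some terminal type if unset
curses.setupterm()                        # reads terminfo; no screen takeover
ncolors = curses.tigetnum("colors")       # <= 0 means no color support reported
print("terminal reports %d colors" % ncolors)
# Inside a running curses app you would call curses.has_colors() after
# curses.start_color(), and reserve curses.can_change_color() for checking
# whether init_color() palette remapping will actually work.
```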
23,085,522 | 2014-04-15T13:42:00.000 | 0 | 0 | 0 | 1 | javascript,html,google-app-engine,python-2.7 | 23,128,324 | 4 | false | 1 | 0 | My thanks to all of you for taking time to respond. Each response was useful it it's own way.
The AJAX/JQuery looks a promising route for me, so many thanks for the link on that. I'll stop equivocating and stick with Python rather than try Go and start working through the tutorials and courses.
Gary | 1 | 0 | 0 | I want to build an application with an HTML5 interface that persists data using google-app-engine and could do with some some advice to avoid spending a ton of time going down the wrong path.
What is puzzling me is the interaction between HTML5, Javascript/JQuery and Python.
Let's say, I have designed an HTML5 site. I believe I can use prompts and forms to collect data entered by users. I also know that I can use Javascript to grab that data and keep it in the form of objects...I need objects for reasons I'll not go into.
But when I look at the app-engine example, it has HTML form information embedded in the Python code which is what is used to store the data in the cloud Datastore.
This raises the following questions in my mind:
do I simply use Python to get user entered information?
how does python interact with a separately described HTML5/CSS2 forms and prompts?
does Javascript/Jquery play any role with respect to data?
are forms and prompts the best way to capture use data? (Is there a better alternative)
As background:
It is a while since I programmed but have used HTML and CSS a fair bit
I did the Javascript and Jquery courses at Codeacademy
I was considering using Go which looks funky but "experimental" worries me and I cannot find a good IDE such as devTable
I can do the Python course at Codeacademy pretty quickly if I need it? I think I may need to understand their object syntax
I appreciate this is basic basic stuff but if I can get my plumbing sorted, I doubt that I'll have to ask too many more really stupid questions
Gary | What is best way to save data with appengine/HTML5/JavaScript/Python combo? | 0 | 0 | 0 | 133 |
23,086,241 | 2014-04-15T14:11:00.000 | 0 | 0 | 1 | 0 | python,csv,dynamic | 23,086,286 | 1 | true | 0 | 0 | I am assuming you mean to ask about separate calls to writer.writerow() vs. building a list, then writing that list with writer.writerows().
Memory wise, the former is more efficient. That said, don't worry too much about speed here; writing to disk is your bottleneck, not how you build the data. | 1 | 0 | 1 | I'm talking about the time taken to dynamically write values to a csv file vs appending those same values to an array. Eventually, I will append that array to the csv file, but that's out of the scope of this question. | Is dynamically writing to a csv file slower than appending to an array in Python? | 1.2 | 0 | 0 | 658
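Both approaches side by side, written to in-memory buffers so disk speed is out of the picture - the output is identical either way, the difference is only memory:

```python
import csv
import io

rows = [[i, i * i] for i in range(5)]

# 1) write each row as it is produced - nothing accumulates in memory
buf1 = io.StringIO()
w1 = csv.writer(buf1)
for row in rows:
    w1.writerow(row)

# 2) accumulate a list first, then write it all in one call
buf2 = io.StringIO()
csv.writer(buf2).writerows(rows)

print(buf1.getvalue() == buf2.getvalue())  # True - same bytes either way
```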
23,087,949 | 2014-04-15T15:19:00.000 | 1 | 0 | 0 | 0 | python,google-app-engine,webapp2 | 23,091,888 | 2 | false | 1 | 0 | You can do it in many ways.
Set a cookie in the first response and it will be sent back with the next request - unsafe: even an encrypted cookie can be tampered with.
The first GET sends the author to the second POST page - the POST then sends the author along (hidden field).
The first GET sends the author to the POST url as a parameter (same idea as above).
Create a session id and save it in the datastore together with the author; the GET sends the session id as a cookie, the POST sends the session id back, and you read the author for that session id from the datastore.
You can use memcache as the datastore, but that is dangerous (it can be flushed, and data in a cache is not persistent by design).
You can also pass the session id from GET to POST with a hidden field instead of a cookie or the url.
The simplest approach is a GET that redirects to a valid POST with the variable in the URL or in a hidden field - the other methods are more complex and need a chain of GET/POST requests. | 1 | 0 | 0 | Is it possible to pass information between requests with webapp2?
I have a class that has to set the author variable on HTTP GET. The HTTP POST will check if author exists, and then continue posting. I tried by having a global variable author=None and then setting author in the HTTP GET, but I think the object is destroyed when the HTTP POST request is made to the same controller.
Any help would be great, thanks! | Pass information between requests webapp2 | 0.099668 | 0 | 0 | 345 |
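A framework-agnostic sketch of the hidden-field options from the answer; the handler functions and field names are hypothetical stand-ins for real webapp2 handlers:

```python
from urllib.parse import parse_qs

def render_form(author):
    """What the GET handler would return: the author travels in a hidden field."""
    return ('<form method="post">'
            '<input type="hidden" name="author" value="%s">'
            '<input type="submit"></form>' % author)

def handle_post(body):
    """What the POST handler would do: recover the author from the request body."""
    return parse_qs(body).get("author", [None])[0]

html = render_form("alice@example.com")
author = handle_post("author=alice%40example.com&comment=hi")
print(author)  # alice@example.com
```

Anything passed this way is client-controlled, so the POST handler should still validate the author value rather than trust it.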
23,088,338 | 2014-04-15T15:36:00.000 | 3 | 0 | 1 | 1 | python | 51,430,363 | 4 | false | 0 | 0 | You know, you can start python with py -specific version
To run a script on interpreter with a specific version you'll just start your script with following parameters, py yourscript.py -version | 1 | 4 | 0 | I'm learning python now using a mac which pre-installed python 2.7.5. But I have also installed the latest 3.4.
I know how to choose which interpreter to use in command line mode, ie python vs python 3 will bring up the respective interpreter.
But if I just write a python script with this header in it "#!/usr/bin/python" and make it executable, how can I force it to use 3.4 instead of 2.7.5?
As it stands, print sys.version says:
2.7.5 (default, Aug 25 2013, 00:04:04)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] | how to specify version of python to run a script? | 0.148885 | 0 | 0 | 28,056 |
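A quick sanity check for whichever route you pick (changing the shebang to #!/usr/bin/env python3, or `py -3.4 script.py` on Windows) - print what actually ran the script:

```python
#!/usr/bin/env python3
import sys

# Reports e.g. "3.4" - whichever interpreter the shebang or the py launcher
# selected for this script.
major_minor = "%d.%d" % sys.version_info[:2]
print(major_minor)
```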
23,096,631 | 2014-04-15T23:59:00.000 | 0 | 0 | 0 | 0 | python,bittorrent | 25,778,457 | 1 | false | 1 | 0 | It's trivial to "break down" files as you put it. You'll need an algorithm to disassemble them, and then to reassemble them later, presumably in a browser since you mentioned HTML and CSS. Bittorrent implements this, and additionally the ability to upload and download from a distributed "swarm" of peers also interested in the same data. Without reinventing the wheel by creating your own version of bittorrent, and again assuming you want to use this data in a browser, you'll want to create a torrent of all the HTML, CSS and other files relevant to your web application, and seed that using bittorrent. Next you'll want to create a bootstrap "page" that makes use of one of the several Javascript bittorrent clients now available to download the torrent, and then load the desired pages and resources when the client completes the download. | 1 | 0 | 0 | I am trying to make a program that breaks down files like HTML or CSS into chunks like that of a torrent. I am completely unsure how to do this. They need to be broken down, then later reassembled in order. Does anybody know how to do this?
It doesn't have to be in Python, that was just my starting point. | How do I break down files in a similar way to torrents? | 0 | 0 | 0 | 350 |
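Not the actual bittorrent wire format, but a sketch of the core idea: split into fixed-size pieces, hash each piece, then verify and rejoin later:

```python
import hashlib

PIECE_SIZE = 4  # tiny for the demo; real torrents use 256 KiB or larger pieces

def split_into_pieces(data, piece_size=PIECE_SIZE):
    """Break a byte string into fixed-size pieces, each tagged with its SHA-1."""
    return [{"index": off // piece_size,
             "sha1": hashlib.sha1(data[off:off + piece_size]).hexdigest(),
             "data": data[off:off + piece_size]}
            for off in range(0, len(data), piece_size)]

def reassemble(pieces):
    """Check every piece against its hash, then join them back in order."""
    chunks = []
    for piece in sorted(pieces, key=lambda p: p["index"]):
        if hashlib.sha1(piece["data"]).hexdigest() != piece["sha1"]:
            raise ValueError("corrupt piece %d" % piece["index"])
        chunks.append(piece["data"])
    return b"".join(chunks)

original = b"<html><body>hello</body></html>"
pieces = split_into_pieces(original)
roundtrip = reassemble(pieces)
print(roundtrip == original)  # True
```

The per-piece hashes are what let pieces arrive out of order from untrusted peers and still be verified independently.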
23,097,154 | 2014-04-16T00:59:00.000 | 14 | 0 | 1 | 0 | python,rest,flask,eve | 23,102,713 | 2 | true | 0 | 0 | You could use mongodb $regex operator, which is blacklisted by default in Eve (MONGO_QUERY_BLACKLIST = ['$where', '$regex']).
Add MONGO_QUERY_BLACKLIST = ['$where'] to your settings.py. Then you can query your API like this:
?where={"name": {"$regex": ".*foo.*"}}.
Be careful however. If you don't control the client, enabling regexes could potentially increase your API vulnerability. | 1 | 4 | 0 | There's some way to return items that field contains some value? Eg.
GET /people?contains="foo"
Return all persons that have the word 'foo' in the name.
Thanks in advance | Python Eve contains filter | 1.2 | 0 | 0 | 3,557 |
23,097,604 | 2014-04-16T01:51:00.000 | 0 | 1 | 0 | 0 | python,multithreading,buffer,raspberry-pi | 23,097,644 | 1 | false | 0 | 1 | I don't think that RasPi would work that well running multithreaded programs. Try the first method, though it would be interesting to see the results of a multithreaded program. | 1 | 0 | 0 | I apologize in advance for this being a bit vague, but I'm trying to figure out what the best way is to write my program from a high-level perspective. Here's an overview of what I'm trying to accomplish:
RasPi takes input from the attitude sensor on a serial port at 115000 baud.
Does some hex -> dec math and updates state variables (pitch, roll, heading, etc)
Uses pygame library to do some image manipulation based on the state variables on a simulated heads up display
Outputs the image to a projector at 30 fps.
Note that there's no user input (for now).
The issue I'm running into is the framerate. The framerate MUST be constant. I'd rather skip a data packet than drop a frame.
There are two ways I could see structuring this:
Write one function that, when called, grabs data from the serial bus and spits out the state variables as the output. Then write a pygame loop that calls this function from inside it. My concern with this is that if the serial port starts being read at the end of an attitude message, it'll have to pause and wait for the message to start again (fractions of a second, but could result in a dropped frame)
Write two separate modules, both to be running simultaneously. One continuously reads data from the serial port and updates the state variables as fast as possible. The other just does the image manipulation, and grabs the latest state variables when it needs them. However, I'm not actually sure how to write a multithreaded program like this, and I don't know how well the RasPi will handle such a program. | How to structure my Python code? | 0 | 0 | 0 | 73 |
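A minimal sketch of option 2: one thread keeps the state variables fresh, the render loop takes a locked snapshot when it needs one. The serial port is replaced by a fake data source here:

```python
import threading
import time

state = {"pitch": 0.0, "roll": 0.0}
state_lock = threading.Lock()
stop = threading.Event()

def sensor_reader():
    """Stand-in for the serial-port thread: updates state as fast as data arrives."""
    n = 0
    while not stop.is_set():
        n += 1
        with state_lock:
            state["pitch"] = n * 0.1   # fake decoded values
            state["roll"] = -n * 0.1
        time.sleep(0.001)

t = threading.Thread(target=sensor_reader)
t.start()
time.sleep(0.05)                        # the fixed-framerate render loop runs here
with state_lock:
    snapshot = dict(state)              # grab the latest values; never block long
stop.set()
t.join()
print(snapshot)
```

Because the render loop only copies the latest values under a short-lived lock, a slow or mid-message serial read can never stall a frame - at worst the frame reuses slightly stale state.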
23,098,583 | 2014-04-16T03:42:00.000 | 0 | 0 | 1 | 0 | firefox,ipython,ipython-notebook | 23,773,427 | 1 | false | 0 | 0 | I found that the problem occurs when changing the cookie preference "Keep until:" " they expire" to "ask me every time" (in Preferences->Privacy->History). As soon as I switch to "they expire" or "I close Firefox" and reload the page with my notebook, it renders as expected and the notebook is shown as running. Creating new notebooks works also correctly.
There is an issue open for this: github.com/ipython/ipython/issues/5703 | 1 | 0 | 0 | I upgraded from Ipython 1.2.1 to Ipython 2.0. When I try to open an existing notebook or create a new notebook in Firefox, I only get a blank screen. There is no error message in the terminal window that I used to start the notebook server. This happens on CentOs 6.5 with Python 2.7.5 and Firefox 24.4 as well as on Mac OS 10.8.5 with Python 2.7.6 and Firefox 28. Starting Firefox in safe-mode did not make any difference. If I use Safari instead of Firefox, the notebooks display as expected. Any ideas what could be wrong or how to debug this? | Opening notebook with Ipython 2.0 in Firefox yields only a blank screen | 0 | 0 | 0 | 247 |
23,102,090 | 2014-04-16T06:52:00.000 | 2 | 0 | 1 | 0 | android,windows,python-2.7,kivy | 23,105,151 | 1 | true | 0 | 1 | I am using Kivy on Windows 7. You can use option 1. It won't damage your current Python 2.7, because you can just change the environment path to point at the Python interpreter which comes with Kivy.
In case you need to switch back to your previously installed Python, just change the environment variables.
In order to add third party libraries: most of them are already included with Kivy. For others you can find them on kivy.org :)
If you need to use, for example, PyQt4 or a similar library, you need to use a different interpreter. I am also doing the same. In my case, I use PyCharm and keep a different configuration (i.e. Python interpreter) for different programs.
1- Download and unzip the file containing the Kivy environment plus the Python interpreter that comes with it. I have concerns here. Will this damage my existing Python environment (2.7)? If not, is it sandboxed well? Plus, if I need to add other third party libraries (e.g. pyodbc to run a Kivy application on a PC), where should they be installed?
2- Set up Kivy for the existing Python environment. Another concern here: is the "unofficial" Windows installer a good way to get Kivy running under Windows? And the same concerns as above for the Python environment.
Thank you in advance. | Kivy development environment on Windows 7 | 1.2 | 0 | 0 | 879 |
23,104,754 | 2014-04-16T09:05:00.000 | 1 | 0 | 1 | 0 | c#,visual-studio-2013,ironpython,vsix | 23,131,781 | 1 | true | 0 | 0 | I would presume that there is a way to include them in the VSIX file and also know where they are on disk - at least, you could use AppDomain.CurrentDomain.GetAssemblies() for find the IronPython assembly and Assembly.Location to find where it is, and hope the VSIX puts the Lib directory near that. (My only experience with VSIX was a while ago and I hated it, so I can't provide much advice in that department.)
Assuming you're embedding IronPython, once you have the location you can just use ScriptEngine.SetSearchPaths to tell IronPython where the Lib directory is. If you're shelling out to ipy.exe then set the IRONPYTHONPATH environment variable before starting it. | 1 | 0 | 0 | I'm currently writing a Visual Studio extension, which provides scripting capabilities. I'm using IronPython (the newest one), but I have some problems with Python's standard libraries.
As I understand, all necessary files reside in the <IronPython folder>\Lib folder. I cannot rely on my users installing IronPython, so I have to provide these files in other way.
If it is possible, I'd simply embed the whole Lib folder in my assembly and allow IronPython access to it from the code, but I'm not sure, if this is possible. I can try to add the Lib folder to extension's package and extract it to wherever Visual Studio will copy my extension's files, but I'm unsure, how to get access to them during extension's runtime. Also, I'd like to set appropriate paths transparently to the user and I'm also unsure, whether this can be done.
How can I solve this problem? | Embedding IronPython's stdlib in VS extension | 1.2 | 0 | 0 | 110 |
23,110,542 | 2014-04-16T13:13:00.000 | 3 | 0 | 1 | 0 | python,math | 23,110,634 | 3 | true | 0 | 0 | With just a single point (and nothing else) you cannot solve such a problem, there are infinitely many lines going through a single point.
If you know the angle to x axis then simply m=tan(angle) (you do not need any points to do that, point is only required to figure out c value, which should now be simple).
To convert angle from the y-axis to the x-axis simply compute pi/2 - angle | 3 | 0 | 0 | I understand that the equation for a straight line is:
y = (m * x) + c
where m is the slope of the line which would be (ydelta/xdelta) but I dont know how to get this value when I only know a single point and an angle rather than two points.
Any help is appreciated. Thanks in advance. | How do I find the slope (m) for a line given a point (x,y) on the line and the line's angle from the y axis in python? | 1.2 | 0 | 0 | 928 |
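The arithmetic from these answers in one place, assuming the angle is measured from the y-axis in radians:

```python
import math

def line_through(x, y, angle_from_y_axis):
    """Return (m, c) for y = m*x + c given one point and the angle from the y-axis."""
    m = math.tan(math.pi / 2 - angle_from_y_axis)  # equivalently 1/tan(angle)
    c = y - m * x                                   # solve the intercept from the point
    return m, c

# A line through (1, 2) at 45 degrees from the y-axis has slope 1:
m, c = line_through(1.0, 2.0, math.radians(45))
print(m, c)  # 1.0 1.0 (up to floating point)
```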
23,110,542 | 2014-04-16T13:13:00.000 | -1 | 0 | 1 | 0 | python,math | 23,110,747 | 3 | false | 0 | 0 | Okay, let's say your point is (x,y)=(1,2)
Then you want to solve 2 = m + c. Obviously there is no way you can do this. | 3 | 0 | 0 | I understand that the equation for a straight line is:
y = (m * x) + c
where m is the slope of the line which would be (ydelta/xdelta) but I dont know how to get this value when I only know a single point and an angle rather than two points.
Any help is appreciated. Thanks in advance. | How do I find the slope (m) for a line given a point (x,y) on the line and the line's angle from the y axis in python? | -0.066568 | 0 | 0 | 928 |
23,110,542 | 2014-04-16T13:13:00.000 | 0 | 0 | 1 | 0 | python,math | 23,111,191 | 3 | false | 0 | 0 | The equation of a line is y = mx + c.
You are given a point on this line, and the angle of this line from the y-axis.
The gradient m will be the cotangent of the angle, i.e. 1 / math.tan(angle_in_radians) (Python's math module has no cot function). The x and y values will be the same as your given point. To find c, simply evaluate y - mx.
y = (m * x) + c
where m is the slope of the line which would be (ydelta/xdelta) but I dont know how to get this value when I only know a single point and an angle rather than two points.
Any help is appreciated. Thanks in advance. | How do I find the slope (m) for a line given a point (x,y) on the line and the line's angle from the y axis in python? | 0 | 0 | 0 | 928 |
23,112,584 | 2014-04-16T14:36:00.000 | 0 | 0 | 1 | 0 | python,regex,match | 23,112,619 | 2 | false | 0 | 0 | For that you can do:
re.compile(r'I can\'?t.*').match(str)
This will match either "I can't" with some other text following or just "I can't" | 1 | 3 | 0 | I can\'?t (.*)
My regex is of the above form. But my match object doesn't match if the string given to it ends after t
re.compile(r'I can\'?t (.*)').match(str)
If str = "I cant", it doesn't work. But if str = "I can't use", it works (match returns something). | Python regular expression zero or more occurrences | 0 | 0 | 0 | 3,131 |
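The fix demonstrated - `.*` also matches the empty string, so the pattern no longer requires a space and trailing text:

```python
import re

# The suggested fix: drop the mandatory " (.*)" tail so the pattern also
# matches when nothing follows the word.
pattern = re.compile(r"I can'?t.*")

ok_cant   = bool(pattern.match("I cant"))
ok_cannot = bool(pattern.match("I can't"))
ok_use    = bool(pattern.match("I can't use it"))
ok_other  = bool(pattern.match("I could"))
print(ok_cant, ok_cannot, ok_use, ok_other)  # True True True False
```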
23,116,965 | 2014-04-16T18:08:00.000 | 0 | 0 | 0 | 1 | python,c++,wrapper,librsync | 23,261,836 | 1 | true | 0 | 0 | Generally, no, it's not possible to build a C library that will work smoothly across platforms.
Some BSD systems can run Linux applications (and maybe vice versa) so you could build the whole thing in that way, but it would require shipping a Linux Python. | 1 | 0 | 0 | So I'm writing an application in python which requires librsync for more efficient file transfers. I want my librsync wrapper to work so that if librsync is already installed on the system it will use that, but otherwise try to use a version shipped with my application. The wrapper currently works on linux with librsync already installed and I also managed to compile librsync to a DLL that works with the wrapper on windows. When I compile it on linux to a .so file I can move it to other linux systems and it will work, but when I try to use it on FreeBSD I get an "invalid file layout" error.
I'm wondering, is it possible to compile librsync to a library file that would work cross-platform? (or only on all *NIX systems) Also if you think there's a better way to do this please let me know. | Writing a python wrapper for librsync with ctypes. How should I compile librsync to work on all systems? | 1.2 | 0 | 0 | 366 |
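One pragmatic pattern (an assumption on my part, not something librsync ships): build one library per platform, prefer the system copy at runtime, and fall back to the bundled one. Demonstrated here with libm since librsync itself may not be installed; the bundled path is made up:

```python
import ctypes
import ctypes.util
import os

def load_native_lib(name, bundled_path):
    """Prefer the system copy of a shared library; fall back to a copy
    shipped with the application (one build per platform)."""
    found = ctypes.util.find_library(name)
    if found:
        return ctypes.CDLL(found)
    if os.path.exists(bundled_path):
        return ctypes.CDLL(bundled_path)
    raise OSError("could not load library %r" % name)

# "./vendor/..." is a hypothetical per-platform bundled path.
libm = load_native_lib("m", "./vendor/linux-x86_64/libm.so")
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
result = libm.cos(0.0)
print(result)  # 1.0
```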
23,117,242 | 2014-04-16T18:21:00.000 | 0 | 0 | 1 | 0 | python-2.7,opencv,opensuse,undefined-symbol | 25,503,548 | 1 | false | 0 | 0 | Not exactly a prompt answer (nor a direct one). I had the same issue and (re)installing various dependencies didn't help either.
Ultimately, I cloned (from git) and compiled opencv (which includes the cv2.so library) from scratch, replaced the old cv2.so library and got it to work.
Here is the git repo: https://github.com/Itseez/opencv.git | 1 | 0 | 1 | I'm using OpenSUSE 13.1 64-bit on an Lenovo ThinkPad Edge E145.
I tried to play around a bit with Python (2.7) and Python-OpenCV (2.4). Both are installed using YaST.
When I start the Python interactive mode (by typing "python") and try to "import cv", one of 2 things happens:
case 1: "import cv" --> Ends up with:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/cv.py", line 1, in <module>
from cv2.cv import *
ImportError: /usr/lib64/python2.7/site-packages/cv2.so: undefined symbol: _ZN2cv23adaptiveBilateralFilterERKNS_11_InputArrayERKNS_12_OutputArrayENS_5Size_IiEEddNS_6Point_IiEEi
case 2: "import cv2" --> Ends up with:
MemoryAccessError
and the interactive mode shuts down and I'm back at the normal command line.
Does anyone have any idea how I can solve this problem?
Greetings | Python OpenCV "ImportError: undefined Symbol" or Memory Access Error | 0 | 0 | 0 | 892 |
23,120,347 | 2014-04-16T21:18:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,import | 23,120,423 | 2 | false | 0 | 0 | You can only use module1.module2 without an explicit import if module1 itself imports module2. For instance, os internally imports one of several other path-handling modules (depending on the OS) and calls it path. This path is then just a variable inside the os module that lets you access the os.path module. | 1 | 1 | 0 | Why do we need to import module1.module2 if we can just import module1?
Example:
Why do we need import tkinter.messagebox and do tkinter.messagebox.askyesno(“blah text”) when we also can do import os and still can do os.path.join(“/“, “blah”)?
I use import os in my code regularly, and I saw in someone else’s code the import tkinter.messagebox. | Why do we sometimes need to import module1.module2 but sometimes not? | 0.197375 | 0 | 0 | 122 |
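The difference is easy to see from the interpreter (in a fresh process):

```python
import xml   # importing the package alone does not import its submodules
before = hasattr(xml, "etree")       # False in a fresh interpreter

import xml.etree.ElementTree         # now "etree" is bound on the xml package
after = hasattr(xml, "etree")        # True

# os only looks different because os itself imports the right path module
# and exposes it under the name "path":
import os
joined = os.path.join("/", "blah")
print(before, after, joined)
```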
23,123,384 | 2014-04-17T02:08:00.000 | 2 | 0 | 1 | 0 | python,rapid-prototyping | 36,361,624 | 3 | false | 0 | 1 | Yes, more than one.
In my humble experience, I tried many Open Source tools for parametric CAD modeling using Python (FreeCAD, Rhino-Grasshopper, Blender, Salome).
All of them are valid options and the best one is represented by your ability to either model or code.
I recently favour SALOME (www.salome-platform.org) because of the straight forward "dump study" option, the continue development and the good API documentation.
Particularly I did some 3d prints using the exportSTL command once I had a solid worthy of printing and it was ok.
Nevertheless, if you intend to work on surfaces rather than solids, I don't think you will find anything worthy Open Source (Rhino has a little price to pay). | 1 | 4 | 0 | I am currently in a project where a lot of 3D printing designs need to be done. They are all parameterized, so I'd like to write a python code to generate those design files (in .STL format) for me. I was wondering that, is there a python package that can do this? Because currently I am all doing those by hand using SolidWorks.
Thanks! | Is there a python library to generate STL file for 3D printing? | 0.132549 | 0 | 0 | 5,404 |
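If a full CAD library is overkill, the ASCII STL format is simple enough to emit from plain Python - a minimal writer with one made-up triangle:

```python
def stl_ascii(name, triangles):
    """Serialize triangles [((nx,ny,nz), (v1,v2,v3)), ...] as an ASCII STL string."""
    lines = ["solid %s" % name]
    for normal, verts in triangles:
        lines.append("  facet normal %g %g %g" % normal)
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex %g %g %g" % v)
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid %s" % name)
    return "\n".join(lines)

# one made-up triangle in the z=0 plane
tri = ((0.0, 0.0, 1.0),
       ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
doc = stl_ascii("demo", [tri])
print(doc.splitlines()[0])  # solid demo
```

Writing the string to a file with a .stl extension is all a slicer needs; a parameterized design is then just Python code that generates the triangle list.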
23,124,598 | 2014-04-17T04:27:00.000 | 0 | 0 | 1 | 0 | python,json,serialization,deserialization | 23,124,693 | 1 | false | 0 | 0 | The "hard" part is mainly step 3 of your serialization, converting the contained values to strings (and later back during deserialization)
For simple types like numbers, strings and booleans, it's quite straightforward, but for complex types like a socket connected to a remote server or an open file descriptor, it won't work very well.
The solution is usually to either move the complex types out of the types you want to serialize and keep the serialized types very clean, or to somehow tag or otherwise tell the serializers exactly which properties should be serialized, and which should not.
Why a class instance can not be serialized - for example - with JSON?
To serialize:
Note the class name (in order to rebuild the object)
Note the variable values at the time of packaging.
Convert it to string.
Optionally compress it (as msgpack does)
To deserialize:
Create a new instance
Assign known values to appropriate variables
Return the object.
What is difficult? What is complex data type? | Why an instance can not be serialized with JSON? | 0 | 0 | 0 | 439 |
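A minimal sketch of those four steps for classes whose attributes are themselves JSON-serializable; the class registry passed to the decoder is a made-up convention:

```python
import json

class Point:  # example class with plain attribute values
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

def to_json(obj):
    # steps 1-3: record the class name, grab the attribute values, stringify
    return json.dumps({"__class__": type(obj).__name__, "state": vars(obj)})

def from_json(text, registry):
    # deserialization steps 1-2: create a new instance, restore the attributes
    payload = json.loads(text)
    obj = registry[payload["__class__"]]()
    vars(obj).update(payload["state"])
    return obj

blob = to_json(Point(3, 4))
restored = from_json(blob, {"Point": Point})
print(restored.x, restored.y)  # 3 4
```

The scheme breaks down exactly where the answer says: a socket or file descriptor stored in vars(obj) has no meaningful string form, which is why generic JSON serialization of arbitrary instances fails.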
23,128,964 | 2014-04-17T09:07:00.000 | 0 | 0 | 0 | 0 | python,django,python-2.7,google-analytics-api,http-status-code-403 | 29,837,598 | 3 | false | 1 | 0 | You should use View ID Not account ID, the 'View ID', you can go:
Admin -> Select Site -> Under "View" -> View Settings. If that doesn't work,
you can go: Admin->Profiles->Profile Settings | 2 | 3 | 0 | I am trying to access data from google-analytics. I am following the guide and is able to gauthorize my user and get the code from oauth.
When I try to access data from GA I only get 403 Insufficient Permission back. Do I somehow have to connect my project in Google API console to my analytics project? How would I do this? Or is there some other reason why I get 403 Insufficient Permission back?
I am doing this in Python with Django and I have Analytics API turned on i my API console! | Google Analytics reports API - Insufficient Permission 403 | 0 | 0 | 1 | 5,740 |
23,128,964 | 2014-04-17T09:07:00.000 | 9 | 0 | 0 | 0 | python,django,python-2.7,google-analytics-api,http-status-code-403 | 24,274,077 | 3 | true | 1 | 0 | Had the same problem, but now is solved.
Use View ID Not account ID, the 'View ID', can be found in the Admin->Profiles->Profile Settings tab
UPDATE
now, if you have more a account , you must go: Admin -> Select account -> under View-> click on View Settings | 2 | 3 | 0 | I am trying to access data from google-analytics. I am following the guide and is able to gauthorize my user and get the code from oauth.
When I try to access data from GA I only get 403 Insufficient Permission back. Do I somehow have to connect my project in Google API console to my analytics project? How would I do this? Or is there some other reason why I get 403 Insufficient Permission back?
I am doing this in Python with Django, and I have the Analytics API turned on in my API console! | Google Analytics reports API - Insufficient Permission 403 | 1.2 | 0 | 1 | 5,740
23,130,855 | 2014-04-17T10:36:00.000 | 0 | 0 | 0 | 0 | python,linux,ftplib | 23,132,117 | 1 | false | 0 | 0 | Use Wireshark: you have to set up a filter for FTP (filter like: port ftp). Then
start sniffing the transfer traffic. When the transfer is done you can see the
captured packets in the main window. You have to right-click on the first FTP packet
and then select "Follow TCP stream". Then you should be able to view all your transferred
bytes and such. This also works with tcpdump and the like, but that is command-line only.
Kind regards,
Dirk | 1 | 1 | 0 | I am using Python 3.4 on a Linux PC. I have written a program to access ftp page using ftplib module and download a file from it on my pc.
I want to know the total network data transfer (including both sent and received) that happened in this process?
How should I go about it?
Any leads will be helpful. | Get data transfered in Python Modules | 0 | 0 | 1 | 47 |
23,136,122 | 2014-04-17T14:36:00.000 | 1 | 0 | 1 | 0 | python,django,logging,thread-local-storage | 23,138,661 | 1 | true | 1 | 0 | You can create a logging.Filter() object that grabs the thread-local variable (or a suitable default when it's not there) and adds it to the log record. Attach that filter to the root logger and it will be called for all log records before they are passed to handlers. Once the variable is in the log record it can be used in the formatters you use to display/save the information. | 1 | 2 | 0 | I'd like to prepend the user email to all web app logs.
I can store the email (taken from the cookie and such) in threading.local(). But I can't always be sure the variable will be there in the thread locals.
Is there a way to tell all the loggers in my app to act like that? | Python logging with thread locals | 1.2 | 0 | 0 | 989 |
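The logging.Filter approach from the accepted answer can be sketched like this (here the filter is attached to the handler so it sees records from every logger; the field name `user_email` and the default `'anonymous'` are illustrative choices):

```python
import logging
import threading

_local = threading.local()

class UserFilter(logging.Filter):
    def filter(self, record):
        # grab the thread-local email, or a suitable default when it's not there
        record.user_email = getattr(_local, 'user_email', 'anonymous')
        return True  # always keep the record; we only annotate it

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(user_email)s: %(message)s'))
handler.addFilter(UserFilter())

logger = logging.getLogger('webapp')
logger.addHandler(handler)

_local.user_email = 'alice@example.com'   # e.g. set by middleware per request
logger.warning('payment failed')          # logs: alice@example.com: payment failed
```

Each request-handling thread sets its own `_local.user_email`, and every log record emitted from that thread picks it up; threads that never set it fall back to the default.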
23,138,783 | 2014-04-17T16:37:00.000 | 5 | 0 | 1 | 0 | python,recursion | 23,138,843 | 2 | true | 0 | 0 | Yes, the inner function will be redefined each time the function is called. However, it's not as bad as you might assume; the Python code is parsed into a code object once, and only the function object (which serves as a sort of wrapper for the code object) is built anew each time through. | 1 | 7 | 0 | I have some recursive backtracking code that tests if a choice is valid before making it. Is it a bad idea to nest the is_legal_choice function inside the recursive solve function? Will this inner function be redefined each time the solve function is called? | Is defining an inner function inside a recursive function a bad idea? | 1.2 | 0 | 0 | 747 |
23,141,471 | 2014-04-17T19:07:00.000 | 0 | 0 | 0 | 0 | python,user-interface,console,tkinter,integration | 23,141,544 | 2 | false | 0 | 1 | (I am pretty sure there is a better way, but) one way is to change the script's .py file extension to .pyw, and the console will not appear when you launch your GUI using the .pyw file. | 1 | 0 | 0 | I have a Python script that runs a GUI which is coded using Tkinter.
The problem is that when I run the script, two windows open. One is the GUI and the other is the black console window.
I need to integrate both windows so that when I start the script only one window appears.
Any ideas are much appreciated.
Thanks in advance. | Integration of console window with GUI using TkInter | 0 | 0 | 0 | 780 |
23,142,277 | 2014-04-17T19:55:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,queue | 23,142,323 | 2 | true | 0 | 0 | I'm fairly certain you want a deque from the collections module. It includes (among other things) append, pop, popleft, and rotate methods, and also supports indexing. Indexing slows toward the middle, but is fast at the ends. | 1 | 0 | 0 | I would like a container to do the following in Python 2.7:
I need the container to behave like a queue: first in, first out. I append objects to it and then get them back in the same order "from the other end".
However, I also need to be able to read up to 5 objects from the beginning of the queue without popping them; then, if I don't need them anymore, I will pop them from the queue.
I am new to Python and I need to know: is there any container that would act as such? Or any easy, simple implementation of it? | Python Queue-Like Container | 1.2 | 0 | 0 | 318
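The deque suggested in the answer covers both requirements: FIFO via append/popleft, and non-destructive peeking via indexing or itertools.islice. A quick sketch:

```python
from collections import deque
from itertools import islice

q = deque()
for item in range(8):
    q.append(item)            # enqueue on the right

head = list(islice(q, 5))     # peek at up to 5 items without popping them
print(head)                   # [0, 1, 2, 3, 4]

first = q.popleft()           # dequeue from the left: first in, first out
print(first, q[0])            # 0 1
```

Indexing the first five elements directly (`q[0]` ... `q[4]`) works too and is fast at the ends, as the answer notes.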
23,142,757 | 2014-04-17T20:24:00.000 | 4 | 0 | 0 | 0 | python,django,uwsgi,gunicorn,django-middleware | 23,143,062 | 1 | true | 1 | 0 | You don't say where your "understanding" comes from, but it's not really accurate. Django itself is pretty agnostic about how it runs - it depends on the server - but it's very unusual for it to be invoked from scratch on each request. About the only method where that's the case is CGI, and it'll run like a dog.
Speaking in very general terms, there are two ways Django can be run. Either it runs inside a process of the web server itself - as with mod_wsgi on Apache - or it runs in a completely separate process and receives requests via reverse proxy from the server, as with uwsgi/gunicorn. Either way, the lifetime of the Django process is not directly connected with the request, but is persistent across many requests. In the case of mod_wsgi for example, the server starts up threads and/or processes (depending on the configuration) and each one lasts for a large number of consecutive requests before being killed and restarted.
For each process, this means that any modules that have been loaded stay in memory for the lifetime of the process. Everything from the middleware onwards is executed once per request, but they wouldn't usually need to be re-imported and run each time. | 1 | 1 | 0 | This might sound like a stupid question, so apologies in advance.
I am trying to understand how the Django framework actually works behind the scenes. It's my understanding that Django does not run all the time and gets invoked by uwsgi/gunicorn or anything else when a request comes in, which is then processed as follows:
WsgiHandler or ModPythonHandler
Import settings, custom exceptions
Load middleware
Middleware -> URLResolver
Middleware -> View -> Template
Middleware -> HttpResponse
But what I cannot understand is whether there is any part of Django which keeps running all the time, like cache management or some other functions or instances, rather than being created per request. I would really appreciate it if you could explain a bit or give pointers. | How Django framework works behind the scenes? | 1.2 | 0 | 0 | 651
23,146,168 | 2014-04-18T01:45:00.000 | 2 | 1 | 1 | 1 | python,makefile,gentoo,portaudio,pyaudio | 23,233,317 | 1 | true | 0 | 0 | Finally I found the source of the problem. Somehow portaudio installs itself to /usr/local/, but the robot I'm working on uses the folders in /usr, i.e. /usr/lib and /usr/include, and not /usr/local/lib etc.
Putting the libraries in /usr/lib and also manually transferring some portaudio libs you can find in the Python site-packages folder solved the problem. | 1 | 2 | 0 | Working on Gentoo (on the robot Nao), which has no make and no gcc on it, it is really hard for me to install portaudio. I managed to put PyAudio in the right location so that Python can detect it, but whenever I try "import pyaudio" it asks me to install portaudio first.
I have a virtual machine running Gentoo that emulates the robot, where gcc and make are available. I could compile portaudio on that machine, but after copying its contents to the robot I cannot run make install. Where exactly should I put each library file so that PyAudio can find it?
Thanks | Where should I put portaudio so that Pyaudio can find it | 1.2 | 0 | 0 | 976 |
23,146,345 | 2014-04-18T02:06:00.000 | 2 | 0 | 1 | 0 | c#,python,sockets,io | 23,146,561 | 1 | false | 0 | 0 | I would use byte[]. It will get the job done. | 1 | 0 | 0 | I am using asynchronous socket API in C#. In the client side, I need a buffer to store the binary data read from the server. And other client logic will check the buffer, unpack the head to see the length, if the length is less than that indicated by the header, continue. And next time we check the buffer again. For the network logic, I need to maintain this buffer, and I want to know what data type should I use.
In python we use a string as a buffer, but I don't think this is gonna work in C#. Inefficient, Encoding problem (I need to parse the binary data my own, not necessarily to a string), Frequently changed. What about stringbuilder? Any other suggestions? | What data type is good for a IO buffer in C# | 0.379949 | 0 | 1 | 116 |
23,147,008 | 2014-04-18T03:41:00.000 | 0 | 0 | 1 | 0 | python,tree,family-tree | 34,161,376 | 2 | true | 0 | 0 | After searching a lot, I found that the Graph ADT suits the above problem better. Since a family has relations over a wide span in all directions, using a graph ADT would be conventional.
Each node can store details about a person.
A node can consist of parent node links, and some functionality
to find the relation between two nodes, etc.
To find relationships, treat the parent nodes as the parents, and
the parents of the parent nodes as grandparents, etc.
Traverse to the parent node, check whether there are any other child nodes,
and mark them as siblings, etc.
That is the idea; I think it will help to solve this problem! | 1 | 1 | 0 | I've recently started with Python and am working on building a family tree using Python. My idea is that the tree should grow in both directions, i.e. both older generations as well as younger generations can be added to the same tree.
I tried implementing it with a binary tree ADT and an N-ary tree ADT, but that doesn't work well. Can anyone suggest an ADT that is best for building that family tree, and guide me on how to implement it? | Creating Family Trees Using Python | 1.2 | 0 | 0 | 5,025
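A minimal sketch of the graph-of-person-nodes idea from the accepted answer; the Person class and the sibling/grandparent helpers are illustrative, not a complete implementation:

```python
class Person:
    """A graph node: each person links to parents (older generations)
    and children (younger generations), so the tree grows both ways."""
    def __init__(self, name):
        self.name = name
        self.parents = []
        self.children = []

    def add_parent(self, parent):
        self.parents.append(parent)
        parent.children.append(self)

    def siblings(self):
        # anyone sharing at least one parent, excluding this person
        people = {c for p in self.parents for c in p.children if c is not self}
        return {person.name for person in people}

    def grandparents(self):
        return {gp.name for p in self.parents for gp in p.parents}

alice, bob, carol = Person('Alice'), Person('Bob'), Person('Carol')
dave, eve = Person('Dave'), Person('Eve')
alice.add_parent(bob); alice.add_parent(carol)
eve.add_parent(bob)          # half-sibling through Bob
bob.add_parent(dave)
print(alice.siblings())      # {'Eve'}
print(alice.grandparents())  # {'Dave'}
```

Because edges run both ways (parents and children), the same structure answers queries toward older generations (grandparents) and younger ones (siblings, descendants).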
23,147,796 | 2014-04-18T05:13:00.000 | 2 | 0 | 1 | 0 | python | 23,147,890 | 1 | false | 0 | 0 | It doesn't actually add a b to the beginning of the string -- b is just a marker that python puts on the string when representing it to you so that you know it's a bytes type, not str. Bytes are really just numbers (0-255) so you can walk through the byte object and get each value, figure out what number it corresponds to and add 5, etc.
hint - this task probably gets easier if you choose to use a bytearray to store the bytes. | 1 | 0 | 0 | I have a request to, "Encode the file by adding 5 to every byte in the file". I tried opening the file as read binary, but all that does is add a b to the beginning of the string- I don't think that is what the expectation of the statement is. I tried looking into pickle, but I don't think that is right either.
What else could this mean? Any ideas as to what possible solutions there are? | Changing bytes in the file? | 0.379949 | 0 | 0 | 148 |
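Following the bytearray hint in the answer, "adding 5 to every byte" can be sketched like this (wrapping with % 256 so values above 250 stay valid bytes; the file names in the comment are placeholders):

```python
def shift_bytes(data, shift=5):
    # bytes are just integers 0-255; wrap around so e.g. 254 + 5 -> 3
    return bytearray((b + shift) % 256 for b in data)

encoded = shift_bytes(b'abc')
print(bytes(encoded))                   # b'fgh'
print(bytes(shift_bytes(encoded, -5)))  # b'abc' (shifting back decodes it)

# To encode a whole file:
# with open('input.bin', 'rb') as src, open('output.bin', 'wb') as dst:
#     dst.write(shift_bytes(src.read()))
```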
23,153,964 | 2014-04-18T12:17:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 23,154,081 | 3 | false | 0 | 0 | The strings seem to be UTF-8 encoded twice. | 1 | 0 | 0 | I receive a text string from a third-party API with garbled character encodings.
When I print that string to the command line, the string contains words like
Zäune instead of Zäune
Gartenmöbel instead of Gartenmöbel
etc.
What can I do to fix the incoming text string with Python 2.7 so that it prints properly on the command line?
Thanks | How to convert Zäune to Zäune in Python 2.7 | 0.066568 | 0 | 0 | 204 |
23,154,120 | 2014-04-18T12:26:00.000 | 0 | 0 | 0 | 0 | python,facebook,google-app-engine,authentication,google-cloud-endpoints | 23,223,929 | 1 | true | 1 | 0 | For request details, add 'HttpServletRequest' (java) to your API function parameter.
For Google authentication, add 'User' (java) to your API function parameter and integrate with Google login on client.
For twitter integration, use Google app-engine OpenID.
For facebook/loginForm, it's all on you to develop a custom auth. | 1 | 1 | 0 | I'm trying to implement a secure Google Cloud endpoint in Python for multiple clients (JS / iOS / Android).
I want my users to be able to log in in three ways: loginForm / Google / Facebook.
I have read a lot of documentation about that, but I didn't really understand how I should handle the connection flow and session (or something else) to keep my users logged in.
I'm also looking for a way to debug my endpoint by displaying objects like Request, for example.
If someone knows a good tutorial about that, it would be very helpful.
Thank you | google endpoint custom auth python | 1.2 | 0 | 1 | 95
23,156,780 | 2014-04-18T15:03:00.000 | 1 | 0 | 0 | 0 | python,html,xpath,web-scraping,scrapy | 50,809,934 | 3 | false | 1 | 0 | The xpath('//body//text()') doesn't always dig deeper into the nodes of your last used tag (in your case body). If you type xpath('//body/node()/text()').extract() you will see the nodes which are in your HTML body. You can try xpath('//body/descendant::text()'). | 1 | 23 | 0 | I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with the Scrapy framework.
With xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? | How can I get all the plain text from a website with Scrapy? | 0.066568 | 0 | 1 | 27,552 |
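Scrapy's xpath('//body/descendant::text()') returns a list of strings you would still need to join and strip. If all you want is the visible text, the same idea can be sketched without Scrapy using the standard library's html.parser (a stdlib alternative, not Scrapy's selector API):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    SKIP = {'script', 'style'}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skipping = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skipping += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skipping:
            self._skipping -= 1

    def handle_data(self, data):
        if not self._skipping and data.strip():
            self.parts.append(data.strip())

parser = TextExtractor()
parser.feed('<body><h1>Title</h1><script>var x;</script><p>Hello</p></body>')
print(' '.join(parser.parts))  # Title Hello
```

Note this only sees the HTML as served; text added by JavaScript after rendering still requires a real browser engine.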
23,163,508 | 2014-04-18T22:43:00.000 | 1 | 0 | 1 | 0 | python,string,list,pandas | 23,163,657 | 1 | false | 0 | 0 | The DataFrame doesn't know the name of the variable you've assigned to it.
Depending on how you're printing the object, either the __str__ or __repr__ method will get called to get a description of the object. If you want to get back 'df2', you could put them into a dictionary to map the name back to the object.
If you want to be very sneaky, you could patch the object's __str__ or __repr__ methods to return what you want. This is probably a very bad idea, though. | 1 | 0 | 1 | I have a list of dataframes but when I call the content of the list it returns the content of the called dataframe.
List = [df1, df2, df3, ..., dfn]
List[1]
will give,
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 4753 entries, etc
but I want it to give
str(List[1])???
'df2'
Thanks for the help | Call the name of a data frame rather than its content (Pandas) | 0.197375 | 0 | 0 | 70 |
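The dictionary idea from the answer: map names to the frames explicitly instead of relying on variable names. Plain objects stand in for the DataFrames in this sketch:

```python
df1, df2, df3 = object(), object(), object()   # stand-ins for DataFrames

frames = {'df1': df1, 'df2': df2, 'df3': df3}

target = frames['df2']        # forward lookup: name -> frame

# reverse lookup: frame -> name (identity comparison, since two
# DataFrames with equal contents are still distinct objects)
name = next(n for n, f in frames.items() if f is target)
print(name)  # df2
```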
23,166,158 | 2014-04-19T05:16:00.000 | 17 | 1 | 0 | 1 | python,amazon-web-services,ssh,amazon-ec2 | 23,166,196 | 2 | false | 1 | 0 | You can run the program using the nohup command, so that even when the SSH session closes your program continues running.
Eg: nohup python yourscriptname.py &
For more info you can check the man page for it using
man nohup. | 1 | 13 | 0 | I have a Python script that basically runs forever, checks a webpage every second, and notifies me if any value changes. I placed it on an AWS EC2 instance and ran it through SSH. The script was running fine when I checked half an hour or so after I started it.
The problem is that after a few hours when I checked again, the ssh had closed. When I logged back in, there was no program running. I checked all running processes and nothing was running.
Can anyone teach me how to make it run forever (or until I stop it) on AWS EC2 instances? Thanks a lot.
Edit: I used the Java SSH Client provided by AWS to run the script | Make python script to run forever on Amazon EC2 | 1 | 0 | 0 | 8,060 |
23,166,349 | 2014-04-19T05:41:00.000 | -1 | 0 | 0 | 1 | python,windows,openerp,openerp-7 | 23,171,711 | 1 | false | 1 | 0 | Can we make it in such way that when we Shutdown the System it should
SignOut Automatically without the User interference
There is no need to log off the users. HTTP is a transactional protocol: everything is done once the client has made a request. After any client request the system is always in a clean state. There is no state in the clients that must be flushed to the server before switching off.
When you shut down and start up the OpenERP server again, all clients have lost their "session", and if they make a new request they will be redirected to the login page.
Of course, this could be annoying when a user starts to fill in a form (still in the browser), sends the request and then gets redirected to the login page, because there is no valid session. | 1 | 0 | 0 | To sign in/sign out of OpenERP 7, we have to log in to OpenERP and click on the icon at the top right, just beside the "Compose New Message" icon. Now, most users forget to sign out of the ERP. Can we make it so that when we shut down the system, it signs out automatically without user interference, just like a Windows service? Is there any way to do that?
Please help me out. | Automatic SignOut in OpenERP 7 during System Shutdown | -0.197375 | 0 | 0 | 183 |
23,166,386 | 2014-04-19T05:45:00.000 | 0 | 0 | 1 | 1 | python | 23,376,739 | 1 | false | 0 | 0 | I simply replaced the executable link in my IDE from "/usr/bin/python" to "/Library/Frameworks/Python.framework/Versions/3.4/bin". | 1 | 1 | 0 | I have Python 2.7.5 running on OS X 10.9.2.
I downloaded the Python installer "python-3.4.0-macosx10.6.dmg" from python.org.
After the installation, I still get 2.7.5 when querying python -V.
I am not sure what I need to do to replace 2.7.5 with 3.4 besides installing python-3.4.0-macosx10.6.dmg. | Replacing Python 2.7.5 with Python 3.4 on OS X 10.9.2 | 0 | 0 | 0 | 312 |
23,172,943 | 2014-04-19T17:04:00.000 | 0 | 0 | 1 | 1 | python,installation,pycharm | 51,773,003 | 1 | false | 0 | 0 | If you go to Run -> Edit Configurations in PyCharm, this will let you set CLI arguments, and there are also a couple of different PYTHONPATH-related fields (Add content roots to PYTHONPATH, Add source roots to PYTHONPATH). You can also right-click a folder under the Project menu and check Mark as Sources Root - which I believe adds this directory to the PYTHONPATH at PyCharm script run-time.
Also, like metsfan said, you could create a batch file to populate your Windows PYTHONPATH environment variables prior to running anything in the new environment. I believe PyCharm will inherit those. | 1 | 2 | 0 | Is it possible to launch a command-line prompt from PyCharm that has all the environment variables (i.e. PYTHONPATH) already set for my project's custom environment? | Launch command line from pycharm with environment variables set | 0 | 0 | 0 | 1,608
23,173,427 | 2014-04-19T17:49:00.000 | 1 | 0 | 0 | 0 | python,networkx | 23,183,710 | 1 | false | 0 | 0 | To generate trees with more nodes, you only need to increase the "number of tries" (a parameter of random_powerlaw_tree). 100 tries is not enough even for a tree with 11 nodes (it gives an error). For example, with 1000 tries I managed to generate trees with 100 nodes, using NetworkX 1.8.1 and Python 3.4.0. | 1 | 1 | 1 | I am trying to use one of the random graph-generators of NetworkX (version 1.8.1):
random_powerlaw_tree(n, gamma=3, seed=None, tries=100)
However, I always get this error
File "/Library/Python/2.7/site-packages/networkx/generators/random_graphs.py", line 840, in random_powerlaw_tree
"Exceeded max (%d) attempts for a valid tree sequence."%tries)
networkx.exception.NetworkXError: Exceeded max (100) attempts for a valid tree sequence.
for any n > 10, that is starting with
G = nx.random_powerlaw_tree(11)
I would like to generate trees with hundreds of nodes. Does anyone know how to set these parameters so that it runs correctly? | Parameters to let random_powerlaw_tree() generate trees with more than 10 nodes | 0.197375 | 0 | 0 | 451
23,174,516 | 2014-04-19T19:30:00.000 | 9 | 1 | 1 | 1 | python,setuptools,distutils,setup.py | 23,174,731 | 1 | true | 0 | 0 | To manually include files in a distribution do the following:
set include_package_data = True
Create a MANIFEST.in file that has a list of include <glob> lines for each file you want to include from the project root. You can use recursive-include <dirname> <glob> to include from sub-directories of the project root.
Unfortunately the documentation for this stuff is really fragmented and split across the Python distutils, setuptools, and old distribute docs, so it can be hard to figure out what you need to do. | 1 | 9 | 0 | So, I want the long_description of my setup script to be the contents of my README.md file. But when I do this, the installation of the source distribution will fail, since python setup.py sdist does not copy the readme file.
Is there a way to let distutils.core.setup() include the README.md file with the sdist command so that the installation will not fail?
I have tried a little workaround where I default to some shorter text when the README.md file is not available, but I actually want not only PyPI to get the contents of the readme file, but also the user that installs the package. | read README in setup.py | 1.2 | 0 | 0 | 3,112
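The two steps from the answer look like this in practice; the fallback string mirrors the workaround mentioned in the question, and the README.md path is a placeholder:

```python
import os

def long_description(path='README.md'):
    """Read the README if it is present, fall back to short text otherwise,
    so installing from an sdist never crashes on a missing file."""
    if os.path.exists(path):
        with open(path) as f:
            return f.read()
    return 'Short fallback description.'

# In setup.py:
#     setup(..., long_description=long_description())
#
# And in MANIFEST.in, so the sdist actually ships the file:
#     include README.md
```

With the MANIFEST.in entry in place the fallback should never trigger for sdists, but it keeps the install robust either way.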
23,175,354 | 2014-04-19T20:48:00.000 | 1 | 1 | 1 | 0 | python,raspberry-pi,pycrypto | 23,175,972 | 1 | false | 0 | 0 | Fixed it!
I did: sudo apt-get install python-dev and then installed pycrypto again with pip. That worked! | 1 | 2 | 0 | On my Raspberry Pi I have installed Paramiko. When I installed it, it came up with an error, something like "pycrypto didn't install". I then used pip and easy_install to try and install pycrypto, but an error comes up with that, something like failed with error code 1 in /root/build/crypto
How can I install pycrypto?
I am using a Raspberry Pi with Raspbian Wheezy. | No module named pycrypto with Paramiko | 0.197375 | 0 | 0 | 1,338 |
23,175,486 | 2014-04-19T20:59:00.000 | 6 | 0 | 1 | 0 | python,image,python-3.x,webcam,python-3.4 | 24,272,513 | 3 | false | 0 | 0 | I've been looking for the same thing and so far I have come up with a blank. This is what I have so far:
               2.7   3.2   3.3   3.4   LINUX   WIN32
----------------------------------------------------
OpenCV         YES   -     -     -     YES     YES
PyGame         YES   YES   YES   YES   YES     YES
SimpleCV       YES   -     -     -     YES     YES
VideoCapture   YES   -     -     -     -       YES
Resources
opencv.org/downloads.html
pygame.info/downloads/
simplecv.org/download
videocapture.sourceforge.net/ | 1 | 4 | 0 | I want to be able to take a photo from a webcam in python 3 and Windows. Are there any modules that support it? I have tried pygame, but it is only linux and python 2, and VideoCapture is only python 2. | taking webcam photos in python 3 and windows | 1 | 0 | 0 | 12,385 |
23,178,275 | 2014-04-20T04:01:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,machine-learning,classification,bayesian | 29,718,358 | 1 | false | 0 | 0 | Ideally, it is said that the more data you train on, the 'better' your results are, but it really depends: you have to test it and compare it to the real results you've prepared.
So to answer your question: training the model with keywords may give you results that are too broad and may not be arguments. But really, you have to compare it to something, so I suggest you also train your model with some sentence structures that arguments seem to follow (a pattern of some sort); that might eliminate the ones that are not arguments. Again, do this and then test it to see if you get higher accuracy than the previous model.
To answer your next question (which would be the best approach in terms of text classification accuracy and time to retrieve?): it really depends on the data you're using, and I can't really answer this question because you have to perform cross-validation to see if your model achieves high accuracy. Obviously, the more features you are looking at, the poorer your learning algorithm's performance. And if you are dealing with gigabytes of text to analyze, I suggest using MapReduce to perform this job.
You might want to check out SVMs as your learning model, test them against the other learning models (Naive Bayes, positive Naive Bayes and decision trees) and see which one performs better.
Hope this helps. | 1 | 6 | 1 | I need to classify text and I am using the TextBlob Python module to achieve it. I can use either the Naive Bayes classifier or a decision tree. I am concerned about the points mentioned below.
1) I need to classify sentences as argument / not an argument. I am using two classifiers and training the model using appropriate data sets. My question is: do I need to train the model with only keywords? Or can I train the data set with all possible argument and non-argument sample sentences? Which would be the best approach in terms of text classification accuracy and time to retrieve?
2) Since the classification would be either argument or not an argument, which classifier would fetch exact results? Is it Naive Bayes / decision tree / positive Naive Bayes?
Thanks in advance. | Text classification in python - (NLTK Sentence based) | 0.197375 | 0 | 0 | 1,170 |
23,178,570 | 2014-04-20T04:53:00.000 | 8 | 0 | 0 | 0 | python,user-interface,kivy | 23,179,572 | 1 | true | 0 | 1 | You can have two separate windows running two separate Kivy apps controlling/communicating with each other via OSC/Twisted/... However, one "App" instance is limited to one App window for that process. It can launch another process (subprocess.Popen) which has a new window, though. | 1 | 5 | 0 | I'm looking at using Kivy to create a program that needs to display a window on each monitor. Is there a way to accomplish this? I'd also prefer not to have a single window spanning across.
If not, is there another (good looking, windows/linux) GUI toolkit that can accomplish this? | Multiple monitors with Kivy | 1.2 | 0 | 0 | 1,500 |
23,179,993 | 2014-04-20T08:21:00.000 | 0 | 0 | 1 | 0 | python,pyaudio | 23,180,013 | 1 | false | 0 | 0 | Install Python 3.3.5 (32-bit) and then install PyAudio with the Windows installer.
It works on my PC (Win7 x64). | 1 | 0 | 0 | On Windows 7, I want to install PyAudio to use with WinPython, but the PyAudio installer crashes out because there is no Python entry in the registry. WinPython does a lot of things its own way, so I'm not surprised installing it doesn't set up the registry in the same way installing a regular version of Python does. Anyway, what can I do?
Using Python 3.3.2 as part of the WinPython installation.
Having another problem:
I followed the suggestions to register WinPython with Windows 7, and then installed PyAudio, which went fine until I tried to run "import pyaudio" at which point it exited, saying "Please Build and Install the PortAudio Python Bindings first." My intention is not to have to build anything, and the PyAudio installer web page says it includes PortAudio V19.
Further information: I noticed that the installer for PyAudio says "32 bit only" and I suspect my version of WinPython was "built for 64 bit" (not sure what that means, but the installation directory is c:\WinPython-64bit-3.3.3.2) | Install PyAudio with WinPython | 0 | 0 | 0 | 774 |
23,181,153 | 2014-04-20T10:44:00.000 | 4 | 0 | 1 | 0 | python,constants,immutability | 23,181,193 | 1 | false | 0 | 0 | Syntactically, you can't. You have two choices:
pass a copy of your object to these functions - this way, you won't have to worry about it being corrupted
create a wrapper that implements the required interface, wraps your object, and doesn't allow modifications
The second option is of course only possible if you know what interface is expected and if it's simple. | 1 | 7 | 0 | I have an object (a numpy.ndarray, for example) that should be passed as an argument to several functions. In C++, in such cases I would declare such arguments as const, in order to show that they must not be changed.
How can I do the same in Python? | const arguments in Python | 0.664037 | 0 | 0 | 5,300 |
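Both options from the answer in one sketch: pass a deep copy, or hand out a minimal read-only proxy. The ReadOnly class is illustrative; note it blocks attribute assignment but not in-place mutation through forwarded methods (e.g. calling append on a wrapped list):

```python
import copy

class ReadOnly:
    """Forward attribute reads to the wrapped object, block attribute writes."""
    def __init__(self, obj):
        object.__setattr__(self, '_obj', obj)  # bypass our own __setattr__

    def __getattr__(self, name):
        return getattr(object.__getattribute__(self, '_obj'), name)

    def __setattr__(self, name, value):
        raise AttributeError('this object is a read-only view')

class Config:
    def __init__(self):
        self.threshold = 10

def worker(cfg):
    return cfg.threshold * 2   # reading is fine; cfg.threshold = 0 would raise

cfg = Config()
print(worker(ReadOnly(cfg)))       # 20 (option 2: read-only wrapper)
print(worker(copy.deepcopy(cfg)))  # 20 (option 1: pass a copy instead)
```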
23,181,615 | 2014-04-20T11:44:00.000 | 1 | 0 | 1 | 0 | java,python,ruby,pycharm,rubymine | 23,365,722 | 1 | true | 1 | 0 | The IDE comes with its own version of the JRE on Windows.
You can easily configure your environment to use your system-wide or any custom JRE (and then delete the bundled one if so desired). Just check the .bat file in the INSTALL_FOLDER\bin folder and see which environment variables, and in what order, it uses when searching for a JRE.
By overriding one of them (the IDE-specific one has priority) you can point to the desired JRE installation. | 1 | 1 | 0 | The PyCharm and RubyMine IDEs come with a folder named JRE in the root installation dir; the JRE folder increases the size of the installation by around 150 MB. I suppose that this folder just contains exactly the same Java runtime environment that an official JRE installer downloaded from Java.com installs, so my question is:
If I've previously installed the JRE from the Java site, can I delete the JRE folder from the PyCharm and/or RubyMine installation directories for good, to reduce the total size?
I've tried deleting the JRE folder from the PyCharm and RubyMine root directories to test whether the IDEs really depend on that folder, and it seems that both IDEs work normally with the JRE folder deleted, but I need to be sure whether or not it is safe to delete the JRE folder from the PyCharm/RubyMine directories if I currently have the JRE installed. | Issue with PyCharm and RubyMine, JRE folder? | 1.2 | 0 | 0 | 200
23,184,702 | 2014-04-20T16:17:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,app.yaml,xhtml2pdf | 23,335,617 | 1 | false | 1 | 0 | I got it now! Don't use XHTML2PDF - use ReportLab on its own instead. | 1 | 0 | 0 | I am new to GAE, web dev and python, but am working my way up.
I have been trying to get xhtml2pdf working on GAE for some time now but have had no luck. I have downloaded various packages but keep getting errors of missing modules. These errors vary depending on what versions of these packages and dependencies I use. I have even tried using the xhtml2pdf "required dependency" versions.
I know xhtml2pdf used to be hosted on GAE, according to a Stack Overflow post from 2010, but I don't know if this is the case anymore. Have they replaced it with something else that the GAE team thinks is better?
I have also considered that the app.yaml is preventing my app from running. As soon as I try importing the pisca module, my app stops.
Could anyone please give me some direction on how to get this working? In the sense of how to install these packages with dependencies and where they should be placed in my project folder (note that I am using Windows). And what settings I would need to add to my app.yaml file. | How do I get xhtml2pdf working on GAE? | 0 | 0 | 0 | 113 |
23,185,906 | 2014-04-20T18:07:00.000 | 7 | 0 | 1 | 0 | python,multithreading | 23,186,579 | 1 | true | 0 | 0 | It depends on the application, and on the python implementation that you are using.
In CPython (the reference implementation) and pypy the GIL only allows one thread at a time to execute Python bytecode. Other threads might be doing I/O or running extensions written in C.
It is worth noting that some other implementations, like IronPython and Jython, don't have a GIL.
A characteristic of threading is that all threads share the same interpreter and all the live objects. So threads can share global data almost without extra effort. You need to use locking to serialize access to data, though! Imagine what would happen if two threads tried to modify the same list.
Multiprocessing actually runs in different processes. That sidesteps the GIL, but if large amounts of data need to be shared between processes that data has to be pickled and transported to another process via IPC where it has to be unpickled again. The multiprocessing module can take care of the messy details for you, but it still adds overhead.
So if your program wants to run Python code in parallel but doesn't need to share huge amounts of data between instances (e.g. just filenames of files that need to be processed), multiprocessing is a good choice.
Currently multiprocessing is the only way that I'm aware of in the standard library to use all the cores of your CPU at the same time.
On the other hand if your tasks need to share a lot of data and most of the processing is done in extension or is I/O, threading would be a good choice. | 1 | 2 | 0 | i just recently read an article about the GIL (Global Interpreter Lock) in python.
It seems to be a big issue when it comes to Python performance, so I was wondering
what would be the best practice to achieve more performance. Would it be threading or
multiprocessing? Because I hear everybody say something different, it would be
nice to have one clear answer. Or at least to know the pros and cons of multithreading
versus multiprocessing.
Kind regards,
Dirk | Python multithreading best practices | 1.2 | 0 | 0 | 4,963 |
23,189,794 | 2014-04-21T02:08:00.000 | 1 | 0 | 0 | 0 | django,python-2.7,django-tables2 | 67,400,201 | 1 | false | 1 | 0 | You can use tables.columns.LinkColumn instead of tables.LinkColumn.
I solved my problem this way. | 1 | 0 | 0 | I have this problem when using django-tables2 and a custom template rendering.
The issue arises when I added another column, one that is not specified in the model, and the error AttributeError: 'module' object has no attribute 'LinkColumn' pops up.
The table and the custom rendering worked when just the model columns were used. | django-tables2 'module' object has no attribute 'LinkColumn' | 0.197375 | 0 | 0 | 1,133
23,190,348 | 2014-04-21T03:24:00.000 | 4 | 0 | 1 | 0 | python-3.x,python-2.7,pyalsaaudio | 58,691,944 | 2 | false | 0 | 0 | It's now called pyalsaaudio.
For me pip install pyalsaaudio worked. | 1 | 3 | 0 | Has the alsaaudio library been ported to Python 3? I have this working on Python 2.7 but not on Python 3.
Is there another library for Python 3 if the above cannot be used? | alsaaudio library not working | 0.379949 | 0 | 0 | 6,761
23,190,913 | 2014-04-21T04:42:00.000 | 0 | 1 | 1 | 0 | python,variables,cgi,text-files | 23,190,978 | 1 | true | 0 | 0 | Traditionally this is done using cookies or hidden form fields. | 1 | 0 | 0 | I want to pass a variable from one Python CGI script to another CGI script. How can I do this, as in PHP, using a URL or something?
I saved the variable in a text file, then read the saved variable back when the other page loads.
Is this method good? | How to pass variable in one .py cgi to other python cgi script | 1.2 | 0 | 0 | 241 |
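A minimal sketch of the hidden-form-field technique from the answer; the field name myvar and the target script second.py are made-up placeholders:

```python
# First CGI script: emit a form that carries the variable to the second script.
value = "hello"
html = (
    '<form action="second.py" method="post">'
    '<input type="hidden" name="myvar" value="{0}">'
    '<input type="submit">'
    "</form>"
).format(value)

print("Content-Type: text/html")
print("")
print(html)

# The second script would read the value back with the cgi module:
#   import cgi
#   value = cgi.FieldStorage().getvalue("myvar")
```

A cookie works the same way in spirit: the first script emits a Set-Cookie header, and the second reads it back from os.environ['HTTP_COOKIE'].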
23,190,976 | 2014-04-21T04:50:00.000 | 1 | 0 | 0 | 0 | python,mysql,pyqt | 23,191,497 | 1 | false | 0 | 1 | You don't need to download the file using FTP (or the like) to load it into Qt.
Assuming the database stores the correct file path to the image, you can just use the same functionality once you get the file path, i.e. you anyway only need the file path to load the image into Qt. There is nothing special you would do by downloading the image itself.
If the database is on a remote server, a possible approach is to use the JDBC API to access the database, get the image as a binary file and then serialize it, which can be transferred over the network. | 1 | 0 | 0 | I want to connect to a mySQL database with python, then query a number corresponding to an image, and load this image in Qt. From what I found online, it is suggested not to use mysql database to store the image, but instead store a file location on the server. If this is the case, can I load the image (do i have to download it?) into qt using mysql or do i have to open another connection with ftp, download the image to a folder, and then load it with qt? If there are any resources on this type of workflow I would appreciate it. | mySQL connection and loading images to Qt | 0.197375 | 1 | 0 | 421 |
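A hedged sketch of the store-the-path workflow; sqlite3 stands in for the MySQL connector here (the query pattern is the same with a driver such as MySQLdb), and the table, column, and path are invented for illustration:

```python
import sqlite3

# Stand-in database that stores file locations, not image blobs
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, path TEXT)")
conn.execute("INSERT INTO images VALUES (1, '/srv/images/42.jpg')")

row = conn.execute("SELECT path FROM images WHERE id = ?", (1,)).fetchone()
path = row[0]

# On the Qt side, the path feeds straight into a pixmap, with no FTP step
# needed as long as the path is reachable from this machine:
#   pixmap = QtGui.QPixmap(path)
#   label.setPixmap(pixmap)
print(path)
```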
23,191,241 | 2014-04-21T05:16:00.000 | 1 | 0 | 0 | 1 | python,cygwin,bottle | 23,191,551 | 1 | true | 0 | 0 | Since you get a connection refused error, the best I can think of is that this is a browser issue. Try editing the LAN settings on your Chrome browser to bypass proxy server for local address. | 1 | 1 | 0 | I am running python 2.7 + bottle on cygwin and I wanted to access a sample webpage from chrome.
I am unable to access the website running on http://localhost:8080/hello but when I do a curl within cygwin I am able to access it.
Error Message when accessing through Chrome
Connection refused
Description: Connection refused
Please let me know how I can access my local bottle website running inside Cygwin from windows browser. | Accessing localhost from windows browser | 1.2 | 0 | 1 | 1,043 |
23,195,522 | 2014-04-21T10:28:00.000 | 0 | 0 | 0 | 0 | python,c++,opencv | 71,003,756 | 6 | false | 0 | 0 | I have done this task.
1. Compare file sizes.
2. Compare EXIF data.
3. Compare the first 'n' bytes, where 'n' is 128 to 1024 or so.
4. Compare the last 'n' bytes.
5. Compare the middle 'n' bytes.
6. Compare a checksum. | 1 | 11 | 1 | There are many questions on here that check whether two images are "nearly" similar or not.
My task is simple. With OpenCV, I want to find out if two images are 100% identical or not.
They will be of the same size but can be saved with different filenames. | OpenCV - Fastest method to check if two images are 100% same or not | 0 | 0 | 0 | 15,991
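A stdlib sketch of the checksum step from the answer above; for a strict byte-for-byte verdict, filecmp.cmp(a, b, shallow=False) gives the same answer:

```python
import hashlib

def identical_files(path_a, path_b):
    """Return True if the two files are byte-for-byte identical."""
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large images don't load into memory at once
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.digest()
    return digest(path_a) == digest(path_b)
```

Bear in mind this compares files, not pixels: two images with identical pixels but different compression settings will differ on disk. For pixel-level equality you would decode both (e.g. with cv2.imread) and check that their difference is all zeros.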
23,198,532 | 2014-04-21T13:35:00.000 | 1 | 0 | 1 | 0 | python,memorystream | 23,198,619 | 3 | false | 0 | 0 | I believe you are looking for the StringIO library. | 2 | 2 | 0 | Is it possible to supply a path to the buffer where to write the data instead of supplying a file path e.g. instead of object.save("D:\filename.jpg") supply it a path to memory buffer. I want to do this to avoid writing the image object data to file as .JPG and save it directly into memory so that I can have it in memory rather than loading it again from disk. | Writing to a memory file instead of file path | 0.066568 | 0 | 0 | 2,587 |
23,198,532 | 2014-04-21T13:35:00.000 | 0 | 0 | 1 | 0 | python,memorystream | 23,198,740 | 3 | false | 0 | 0 | If object.save supports file-like objects, that is, objects that have a write method, you can provide the method with a StringIO.StringIO instance. It has the same interface as a normal file object, but keeps its contents in memory. | 2 | 2 | 0 | Is it possible to supply a buffer to write the data to instead of supplying a file path, e.g. instead of object.save("D:\filename.jpg"), supply it a memory buffer? I want to do this to avoid writing the image object data to a .JPG file, and save it directly into memory so that I can have it in memory rather than loading it again from disk. | Writing to a memory file instead of file path | 0 | 0 | 0 | 2,587
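A sketch of the in-memory file idea in its modern spelling (io.BytesIO; StringIO.StringIO plays the same role on Python 2). The bytes written here are a placeholder for real image data:

```python
import io

buf = io.BytesIO()  # file-like object: has write(), seek(), read()
buf.write(b"\xff\xd8 placeholder JPEG bytes")
buf.seek(0)
data = buf.read()   # the "file" contents, never written to disk
```

Assuming object.save accepts file-like objects, as the answer says, something like object.save(buf) (or img.save(buf, format='JPEG') with PIL) would then keep the image entirely in memory.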
23,199,042 | 2014-04-21T14:05:00.000 | 3 | 0 | 1 | 0 | python | 23,199,188 | 3 | false | 0 | 0 | You are navigating into trouble.
Don't do that: either use the number "tell" tells you about, or count what you have in memory, regardless of the file contents.
You won't be able to correlate a position in text read into memory to a physical place in a text file: text files are not meant for that. They are meant to be read one line at a time, or in whole: your program consumes the text, and lets the OS worry about the file position.
You can open your file in binary mode, read its contents as they are into memory, and have some method of retrieving readable text from those contents as needed - doing this with a proper class can make it not that messy.
Consider the problem you already have with the line endings, which could be either "\n" or "\r\n" and still count as a single character, and now imagine that situation a hundredfold more complex if the file has a single UTF-8 encoded character that takes more than one byte to encode.
And even in binary files, knowing the absolute file pointer position is only useful in a handful of situations where, usually, one would be better off using a database engine to start with. | 1 | 3 | 0 | I want to keep track of the file pointer on a simple text file (just a few lines), after having used readline() on it. I observed that the tell() function also counts the line endings.
My questions:
How to instruct the code to skip counting the line endings ?
How to do the first question regardless the line ending type (to work the same in case the text file uses just \n, or just \r, or both) ? | How to exclude \n and \r from tell() count in Python 2.7 | 0.197375 | 0 | 0 | 270 |
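Following the answer's suggestion to work from the binary contents, a hedged sketch that counts characters while excluding line endings, whatever their style:

```python
def chars_excluding_line_endings(path):
    """Count characters in a file, excluding line endings of any style."""
    with open(path, "rb") as f:
        data = f.read()
    # Normalize \r\n and bare \r to \n, then leave the newlines out of the count
    normalized = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
    return len(normalized) - normalized.count(b"\n")
```

This counts bytes, which equals characters only for single-byte encodings; as the answer warns, a multi-byte encoding such as UTF-8 breaks that correspondence.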
23,199,796 | 2014-04-21T14:51:00.000 | 13 | 0 | 0 | 0 | python,pandas,filtering,dataframe,outliers | 39,591,989 | 18 | false | 0 | 0 | scipy.stats has the methods trim1() and trimboth() to cut outliers out in a single line, based on ranking and a given percentage of values to remove. | 1 | 319 | 1 | I have a pandas data frame with a few columns.
Now I know that certain rows are outliers based on a certain column value.
For instance
column 'Vol' has all values around 12xx and one value is 4000 (outlier).
Now I would like to exclude those rows that have Vol column like this.
So, essentially I need to put a filter on the data frame such that we select all rows where the values of a certain column are within, say, 3 standard deviations from mean.
What is an elegant way to achieve this? | Detect and exclude outliers in a pandas DataFrame | 1 | 0 | 0 | 440,407 |
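The 3-standard-deviation filter asked about can be sketched with the stdlib alone; the values are invented, with one obvious outlier among readings near 1200:

```python
import statistics

vols = [1200] * 15 + [4000]
mean = statistics.mean(vols)
std = statistics.stdev(vols)

# Keep only values within 3 standard deviations of the mean
kept = [v for v in vols if abs(v - mean) <= 3 * std]
print(len(kept))  # 15 -- the 4000 outlier is dropped
```

In pandas the same filter is a one-liner: df[(df['Vol'] - df['Vol'].mean()).abs() <= 3 * df['Vol'].std()].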
23,200,484 | 2014-04-21T15:33:00.000 | 1 | 0 | 1 | 0 | python-c-api | 23,351,768 | 1 | false | 0 | 0 | The only problem is that the error messages, if an error occurs while parsing the tuple, will be appropriate to a function call.
Otherwise, it should work on arbitrary tuples just as well as on argument lists. | 1 | 1 | 0 | I am looking for confirmation on this issue:
Can I use PyArg_ParseTuple() on any Python tuple, or just on those passed as argument lists from function calls?
I see strong indication for the former, but to my reading the documentation is rather vague on this point, hence my question here. | PyArg_ParseTuple() on arbitrary tuples | 0.197375 | 0 | 0 | 315 |
23,200,789 | 2014-04-21T15:51:00.000 | 3 | 1 | 0 | 0 | python,git,fetch,pull,pygit2 | 23,750,194 | 1 | false | 0 | 0 | Remote.fetch() does not update the files in the workdir because that's very far from its job. If you want to update the current branch and checkout those files, you need to also perform those steps, via Repository.create_reference() or Reference.target= depending on what data you have at the time, and then e.g. Repository.checkout_head() if you did decide to update.
git-pull is a script that performs very many different steps depending on the configuration and flags passed. When you're writing a tool to simulate it over multiple repositories, you need to figure out what it is that you want to do, rather than hoping everything is set up just so that git-pull won't surprise you. | 1 | 2 | 0 | I do have the following problem. I'm writing a script which searches a folder for repositories, looks up the remotes on the net and pulls all new data into the repository, notifying me about new changes. The main idea is clear. I'm using python 2.7 on Windows 7 x64, using pygit2 to access the git features. The command-line supports the simple command "git pull 'origin'", but the git api is more complicated and I don't see the way. Okay, I came that far:
import pygit2
orepository=pygit2.Repository("path/to/repository/.git")
oremote=orepository.remotes[0]
result=oremote.fetch()
This code retrieves the new objects and downloads them into the repository, but doesn't update the master branch or check the new data out. By inspecting the repository with TortoiseGit I see that nothing was checked out; even the new log messages don't appear when showing the log. I still need to run the git pull command to refresh the repository and working copy. Now my question: what do I need to do to achieve all that using pygit2? I mean, I download the changes by fetching them, but what do I need to do then? I want to update the master branch and working copy too...
Thank you in advance for helping me with my problem.
Best Regards. | pulling and integrating remote changes with pygit2 | 0.53705 | 0 | 0 | 2,018 |
23,201,047 | 2014-04-21T16:06:00.000 | 0 | 0 | 1 | 0 | python | 23,201,407 | 1 | true | 0 | 0 | Yes, it will try to overwrite the .pyc file with the new version. But this won't affect the first program, because the module is already loaded into memory, unless an explicit module reload is called.
OTOH, printing a stack trace for an exception requires reading the source file, and, if it has changed, the wrong lines will be printed. So replacing on the fly is recommended only when the module is properly reloaded just after this. | 1 | 1 | 0 | Let's say I have a python script called scr.py. Running python scr.py creates a scr.pyc file which is interpreted by Python. Now, let's say I make a change in scr.py while it is running, and then in another terminal window, I run python scr.py again. What happens? Does the original scr.pyc file get overwritten? Are there any problems that might occur? Could you run two slightly different copies of the same file at the same time? | Can i run two copies of a python program? | 1.2 | 0 | 0 | 68