Q_Id int64 337..49.3M | CreationDate stringlengths 23..23 | Users Score int64 -42..1.15k | Other int64 0..1 | Python Basics and Environment int64 0..1 | System Administration and DevOps int64 0..1 | Tags stringlengths 6..105 | A_Id int64 518..72.5M | AnswerCount int64 1..64 | is_accepted bool 2 classes | Web Development int64 0..1 | GUI and Desktop Applications int64 0..1 | Answer stringlengths 6..11.6k | Available Count int64 1..31 | Q_Score int64 0..6.79k | Data Science and Machine Learning int64 0..1 | Question stringlengths 15..29k | Title stringlengths 11..150 | Score float64 -1..1.2 | Database and SQL int64 0..1 | Networking and APIs int64 0..1 | ViewCount int64 8..6.81M |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
19,030,833 | 2013-09-26T14:20:00.000 | 2 | 0 | 1 | 0 | c++,python,visual-studio-2010,crt,msvcr90.dll | 24,505,571 | 2 | false | 0 | 0 | I have the same problem. The solution is indeed to build Python from the sources. But there is a big drawback: all extra 3rd-party Python modules pre-built for Windows that you download from the internet will not work! This is because all of those modules will have been prebuilt with VS2008, and you again get into trouble with incompatible runtimes. The solution is that all such extra modules need to be rebuilt from source, but the task is not easy in all cases. The modules are usually tested with VS2008, and you run into a lot of trouble trying to build them with VS2010. I ran into this mostly with database connectors for MySQL, MSSQL and others. | 2 | 1 | 0 | I have some code here that we used to use to call a Python script from our (very large) application. It worked fine when we used VS2008 (compiler v90), which is what the default version of python27 was compiled with.
In the last year we've upgraded our application to VS2010, and I was looking to update the Python-calling dll, thinking it would be a morning's work. Unfortunately, after wrestling with the linker and missing dlls for ages, most of my colleagues agree that our application and python27.dll are using incompatible versions of Windows CRT.
I thought it would be simple enough to find a version of python27.dll (or indeed another version would be fine) compiled with VS2010 (v100) - but I can't.
Is there a way to call a Python script from an application compiled in VS2010? | Running Python from C++ (VS2010, compiler v100) | 0.197375 | 0 | 0 | 250 |
19,030,833 | 2013-09-26T14:20:00.000 | 2 | 0 | 1 | 0 | c++,python,visual-studio-2010,crt,msvcr90.dll | 19,031,237 | 2 | true | 0 | 0 | An answer may be: Download the python sources - compile a custom python.dll and link against that. | 2 | 1 | 0 | I have some code here that we used to use to call a Python script from our (very large) application. It worked fine when we used VS2008 (compiler v90), which is what the default version of python27 was compiled with.
In the last year we've upgraded our application to VS2010, and I was looking to update the Python-calling dll, thinking it would be a morning's work. Unfortunately, after wrestling with the linker and missing dlls for ages, most of my colleagues agree that our application and python27.dll are using incompatible versions of Windows CRT.
I thought it would be simple enough to find a version of python27.dll (or indeed another version would be fine) compiled with VS2010 (v100) - but I can't.
Is there a way to call a Python script from an application compiled in VS2010? | Running Python from C++ (VS2010, compiler v100) | 1.2 | 0 | 0 | 250 |
19,031,616 | 2013-09-26T14:54:00.000 | 0 | 0 | 1 | 0 | python,django,windows,virtualenv | 19,032,797 | 2 | false | 1 | 0 | I once faced the same problem, and it took me so much time to configure another environment that I eventually had to create a VM with the same version of the OS and libraries. I then made a raw copy of the project and it worked fine. | 1 | 0 | 0 | I am working on a Django project that was created by another developer on a different machine. I see that in the root of the application, there is a .virtualenv directory. Is it possible to simply set up this project locally on my Windows machine using the project settings and Python version (the app uses 2.7), so that I can run it like a local Django application and debugging is feasible?
I have access to the development web server and have copied the full source of the app down to my Win7 machine, but cannot seem to get things set up correctly to run the app locally so I can debug.
I currently have Python 2.7, 2.7.5 and 3.3.2 installed on my local dev machine. I would call myself pretty new to Django and Virtualenv.
If anyone has any guidance on how I can get my environment straightened out so I can run the app with debugging, I would be very thankful.
Thank you in advance. | Importing and debugging a Python Django project made in a different environment | 0 | 0 | 0 | 69 |
19,032,609 | 2013-09-26T15:38:00.000 | 0 | 0 | 0 | 0 | python,sockets,python-2.7,socketserver,python-sockets | 19,033,825 | 2 | false | 0 | 0 | If the location of your server is constant, why wouldn't you just define the server IP address in your code and have your script connect to it? The user would never have to see the IP address of your server. | 1 | 3 | 0 | I am building a multiplayer game, so once the server starts I want to broadcast the server name continuously so that clients can know that a server is running. I don't want users to have to enter an IP address and port number to connect to the server. Can someone help me broadcast the server name?
It's an app, not a web app. | Broadcasting socket server in python | 0 | 0 | 1 | 6,087 |
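The question as asked (server discovery without hardcoding) is usually solved with a UDP broadcast rather than a fixed IP - a different technique from the one this answer recommends. A minimal sketch with an assumed server name and an arbitrary port:

```python
import socket
import time

SERVER_NAME = "my-game-server"   # hypothetical name
BROADCAST_PORT = 50000           # arbitrary port; clients must listen on it

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    # "<broadcast>" is understood by the socket module as the
    # subnet broadcast address.
    sock.sendto(SERVER_NAME.encode(), ("<broadcast>", BROADCAST_PORT))
    time.sleep(1)
```

Clients would bind a UDP socket to the same port and call recvfrom() to learn both the server name and its address.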
19,032,985 | 2013-09-26T15:53:00.000 | 1 | 0 | 1 | 0 | python,api,entities,alchemy | 19,069,075 | 1 | false | 0 | 0 | Try adding "language=english" to your request; I think I used that once | 1 | 1 | 0 | When calling the Alchemy API for language processing, it sometimes auto-recognizes the wrong language. (The text has lots of names, which sometimes throws off the auto-recognition.) I know the text is all English, so is there a way to force the API to process in English? You'd think there would be a simple parameter, but I don't see it in the docs. | How do I force the Alchemy API to process text in English? | 0.197375 | 0 | 0 | 108 |
19,036,088 | 2013-09-26T18:38:00.000 | 0 | 0 | 0 | 0 | python,macos,filemaker | 19,038,893 | 2 | false | 1 | 0 | Applescript should be fine for this, since it's likely running on the client already. | 1 | 0 | 0 | I currently have a python script set up, using PySerial, that reads incoming weight data from a Mettler-Toledo scale. Everything is working just as I want it to with regards to reading it, but I need to get the result into FileMaker Pro 9. I'm not familiar with FileMaker, but could this be done by invoking the Python script from an AppleScript, or is there a better way? | Python - How to send PySerial results to FileMaker Pro 9 | 0 | 0 | 0 | 133 |
19,036,197 | 2013-09-26T18:44:00.000 | 1 | 1 | 1 | 0 | php,python,hash,laravel,laravel-4 | 19,036,400 | 1 | true | 1 | 0 | You can't. The best you can do is encrypt it with a reversible encryption ... but then you need to store the key somewhere ... eventually you will have some plain text somewhere (or encoded at best) that will allow decryption ... you could store the hash and do a query against a DB that maps hashes to passwords, but you still have the password in plaintext somewhere ... you cannot log in with just a hash anywhere ... (because the hash ends up getting hashed again and then no longer matches the expected hash)
An option may be to use rainbow tables to find something that results in an identical hash and use that instead ... but if they are adding salts or anything, you are once again out of luck | 1 | 1 | 0 | I have a password hashed with Laravel's Hash::make() function when a user is created. I eventually need to take that hashed password and pass it to a Python script to perform a login and download of site resources. I know the hash is a one-way operation, but I'd like to keep the password hashed to be security conscious if at all possible.
Any suggestions on how to accomplish this task while keeping security intact would be helpful!
Thanks,
Justin | Laravel 4 passwords and python | 1.2 | 0 | 0 | 331 |
19,039,249 | 2013-09-26T21:43:00.000 | 2 | 0 | 1 | 0 | python,ide,interpreter,pycharm | 19,066,840 | 2 | false | 0 | 0 | Its not supposed to. It did not for me. When I fist installed it, and created a new project, I just directed PyCharm to my Python installation.
You need to click one "New Project", then click on the "..." button next to the interpreter drop-down box, and then, you need to click on the + sign, choose to add "local", and then point to python.exe for whatever interpreter is installed. | 1 | 3 | 0 | I am running Mac OSX 10.8.4 with Python 2.7 and I just downloaded PyCharm Version: 3.0 Build: 131.190.
When I opened it and chose "Create New Project", in the 'Interpreter' pull-down menu there were no options to choose an interpreter (it just says '').
Just in case it was an issue with Python (although I use IDLE regularly), I downloaded Python 3.3 just to see if the new version would be identified by PyCharm, but again with no luck - the 'Interpreter' pull-down menu had no options to select (it just says '').
I'm sure I am just overlooking something during installation, but why can't PyCharm identify the interpreter?
Thanks! | PyCharm cannot identify Interpreter after initial PyCharm download | 0.197375 | 0 | 0 | 19,563 |
19,041,486 | 2013-09-27T01:53:00.000 | 1 | 0 | 0 | 0 | python,optimization,scipy,gpu,multidimensional-array | 19,042,578 | 1 | true | 0 | 0 | I am not sure you can ever do it. fmin_l_bfgd_b is provided not by pure python code, but by a extension (a wrap of FORTRAN code). In Win32/64 platform it can be found at \scipy\optimize\_lbfgsb.pyd. What you want may only be possible if you can compile the extension differently or modify the FORTRAN code. If you check that FORTRAN code, it has double precision all over the place, which is basically float64. I am not sure just changing them all to single precision will do the job.
Among the other optimization methods, cobyla is also provided by FORTRAN. Powell's methods too. | 1 | 1 | 1 | I am trying to optimize functions with GPU calculation in Python, so I prefer to store all my data as ndarrays with dtype=float32.
When I am using scipy.optimize.fmin_l_bfgs_b, I notice that the optimizer always passes a float64 (on my 64bit machine) parameter to my objective and gradient functions, even when I pass a float32 ndarray as the initial search point x0. This is different when I use the cg optimizer scipy.optimize.fmin_cg, where when I pass in a float32 array as x0, the optimizer will use float32 in all consequent objective/gradient function invocations.
So my question is: can I enforce scipy.optimize.fmin_l_bfgs_b to optimize on float32 parameters like in scipy.optimize.fmin_cg?
Thanks! | How to enforce scipy.optimize.fmin_l_bfgs_b to use 'dtype=float32' | 1.2 | 0 | 0 | 1,678 |
19,044,559 | 2013-09-27T06:58:00.000 | 0 | 0 | 1 | 0 | python,slice | 19,045,010 | 2 | false | 0 | 0 | You can think of it like this:
With f[0:5], 0 is the start position and 5-1 the end position.
With f[0:5:-1], 0 is still the start and 5 the stop, but the step is negative, so the slice would have to count downwards from 0 towards 5, which it cannot do.
With a negative step, the start position must be higher than the stop position.
When that is not the case, the range is empty. Thus f[0:5:-1] returns an empty string.
I know [::-1] reverses a string. i want to know what value it assign to start and stop to get a reverse.
i thought it would be 0 to end to string. and tried
f="foobar"
f[0:5:-1]---> it gives me no output. why?
and i have read start should not pass stop. is that true in case of negative step value also?
can anyone help me to clear my doubt. | understanding negative slice step value | 0 | 0 | 0 | 1,603 |
19,044,559 | 2013-09-27T06:58:00.000 | 4 | 0 | 1 | 0 | python,slice | 19,044,630 | 2 | true | 0 | 0 | The reason why f[0:5:-1] does not generate any output is because you are starting at 0, and trying to count backwards to 5. This is impossible, so Python returns an empty string.
Instead, you want f[5:0:-1], which returns the string "raboo".
Notice that the string does not contain the f character. To do that, you'd want f[5::-1], which returns the string "raboof".
You also asked:
I have read that start should not pass stop. Is that true for a negative step value as well?
No, it's not true. Normally the start value shouldn't pass the stop value, but only if the step is positive. If the step is negative, then the reverse is true: the start must, by necessity, be higher than the stop value. | 2 | 3 | 0 | I am having a problem understanding what happens when I put a negative value for step when slicing.
I know [::-1] reverses a string. I want to know what values it assigns to start and stop to get the reverse.
I thought it would be 0 to the end of the string, and tried
f="foobar"
f[0:5:-1] ---> it gives me no output. Why?
And I have read that start should not pass stop. Is that true for a negative step value as well?
Can anyone help me clear up my doubt? | understanding negative slice step value | 1.2 | 0 | 0 | 1,603 |
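A quick interactive check of the rules described in this answer:

```python
f = "foobar"
print(f[0:5:-1])  # '' -- start 0 cannot count down to stop 5
print(f[5:0:-1])  # 'raboo' -- stops before index 0, so no 'f'
print(f[5::-1])   # 'raboof' -- omitting stop runs past the first character
print(f[::-1])    # 'raboof' -- the usual reversing idiom
```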
19,056,837 | 2013-09-27T17:37:00.000 | 4 | 0 | 1 | 0 | python,multithreading | 19,057,039 | 3 | false | 0 | 0 | The OS-level file removal primitives are synchronous on both Unix and Windows, so I think you pretty much have to use a worker thread. You could have it pull files to delete off a Queue object, and then when the main thread is done with a file it can just post the file to the queue. If you're using NamedTemporaryFile objects, you probably want to set delete=False in the constructor and just post the name to the queue, not the file object, so you don't have object lifetime headaches. | 1 | 7 | 0 | I have a long running python script which creates and deletes temporary files. I notice there is a non-trivial amount of time spent on file deletion, but the only purpose of deleting those files is to ensure that the program doesn't eventually fill up all the disk space during a long run. Is there a cross platform mechanism in Python to aschyronously delete a file so the main thread can continue to work while the OS takes care of the file delete? | Can I asynchronously delete a file in Python? | 0.26052 | 0 | 0 | 4,305 |
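A minimal sketch of the queue-plus-worker-thread pattern described in this answer (all names illustrative; Python 2 module names shown):

```python
import os
import threading
import Queue  # 'queue' on Python 3

delete_queue = Queue.Queue()

def _delete_worker():
    while True:
        path = delete_queue.get()
        if path is None:          # sentinel: shut the worker down
            break
        try:
            os.remove(path)
        except OSError:
            pass                  # file already gone, etc.
        delete_queue.task_done()

worker = threading.Thread(target=_delete_worker)
worker.daemon = True
worker.start()

# Main thread: instead of calling os.remove(path) directly, enqueue it:
# delete_queue.put(path)
```

With NamedTemporaryFile, passing delete=False and posting only the .name string to the queue avoids object-lifetime headaches, as suggested above.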
19,058,338 | 2013-09-27T19:09:00.000 | 1 | 1 | 0 | 0 | php,python | 19,058,834 | 2 | false | 0 | 0 | There's an implementation of Python called Python Server Pages that you can use to embed Python into the web app directly, just like PHP, but with the .psp file extension. It is not actively developed; Google it. | 1 | 1 | 0 | I am developing a PHP-based web app, but I have an existing Python script that I want to integrate into my system. Is it possible to embed/include the Python script within the main content area of my web app on a specific page? | embedding python script in php website | 0.099668 | 0 | 0 | 1,812 |
19,058,485 | 2013-09-27T19:18:00.000 | 20 | 0 | 0 | 0 | python,matplotlib | 28,295,797 | 2 | true | 0 | 0 | For the width: legend.get_frame().set_linewidth(w)
For the color: legend.get_frame().set_edgecolor("red") | 1 | 19 | 1 | In matplotlib, how do I specify the line width and color of a legend frame? | Specifying the line width of the legend frame, in matplotlib | 1.2 | 0 | 0 | 13,797 |
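A small self-contained example using those two calls:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label="line")

legend = ax.legend()
legend.get_frame().set_linewidth(2.0)    # frame line width
legend.get_frame().set_edgecolor("red")  # frame line color
plt.show()
```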
19,058,491 | 2013-09-27T19:19:00.000 | 3 | 0 | 0 | 0 | python,django,postgresql,timezone | 19,076,075 | 1 | true | 1 | 0 | The issue has been solved. The problem was that I was using another naive datetime field for calculation of difference in time, whereas the DB field was an aware field. I then converted the naive to timezone aware date, which solved the issue.
Just in case some one needs to know. | 1 | 2 | 0 | datetime is stored in postgres DB with UTC. I could see that the date is 2013-09-28 00:15:52.62504+05:30 in postgres table.
But when I fetch the value via django model, I get the same datetime field as datetime.datetime(2013, 9, 27, 18, 45, 52, 625040, tzinfo=).
USE_TZ is True and TIME_ZONE is 'Asia/Kolkata' in settings.py file. I think saving to DB works fine as DB contains datetime with correct UTC of +5:30.
What am i doing wrong here?
Please help.
Thanks
Kumar | Postgres datetime field fetched without timezone in django | 1.2 | 1 | 0 | 1,585 |
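A sketch of the fix described above, using Django's standard timezone utilities to make the naive value aware before doing arithmetic against aware DB values:

```python
from datetime import datetime
from django.utils import timezone

naive = datetime(2013, 9, 28, 0, 15, 52)  # example naive value
# Interpret the naive value in the current time zone
# (TIME_ZONE = 'Asia/Kolkata' here), so subtraction against
# the aware datetimes coming back from the DB is valid.
aware = timezone.make_aware(naive, timezone.get_current_timezone())
```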
19,061,299 | 2013-09-27T22:52:00.000 | 2 | 0 | 0 | 0 | python,django,django-views | 19,061,324 | 3 | false | 1 | 0 | You should disable the button with JavaScript after clicking on it.
This way the user is unable to trigger the view multiple times. | 1 | 1 | 0 | A button click in my app calls a view which does a few database changes and then redirects to a new views which renders html. When the user typically clicks on the link, he accidentally clicks on in twice to thrice in a couple of seconds. I want to block the view call if the same call was made less than 10 seconds ago. Of course I can do it by checking in the database, but I was hoping to have a faster solution by using some decorator in django. | Django Views - Block Consecutive Quick Calls | 0.132549 | 0 | 0 | 152 |
19,062,968 | 2013-09-28T03:39:00.000 | 7 | 1 | 1 | 0 | python,macos,pycrypto | 19,102,883 | 2 | true | 0 | 0 | For those having this issue on Mac: for some reason pip, easy_install, and even doing it manually install Crypto with a lowercase 'c' into site-packages. By browsing into site-packages and renaming 'crypto' to 'Crypto', it solves the issues with other libraries. | 2 | 2 | 0 | After a bit of googling around, I see this issue is pretty common but has no direct answers.
Trying to use Pycrypto on my Mac 10.8.5. Installed it through Pip, Easy_install, and manually with setup.py yet when I try to import it, it says it can't find the module.
Anyone else have an issue like this? | Python - Crypto.Cipher/Pycrypto on Mac? | 1.2 | 0 | 0 | 4,442 |
19,062,968 | 2013-09-28T03:39:00.000 | 1 | 1 | 1 | 0 | python,macos,pycrypto | 19,063,101 | 2 | false | 0 | 0 | I've had this problem before, and this is because you probably have different versions of Python. So, in fact, the package is installed, but for a separate version. What you need to do is see which executable file is linked to when python or pip is called. | 2 | 2 | 0 | After a bit of googling around, I see this issue is pretty common but has no direct answers.
Trying to use Pycrypto on my Mac 10.8.5. Installed it through Pip, Easy_install, and manually with setup.py yet when I try to import it, it says it can't find the module.
Anyone else have an issue like this? | Python - Crypto.Cipher/Pycrypto on Mac? | 0.099668 | 0 | 0 | 4,442 |
19,066,854 | 2013-09-28T12:15:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,midi | 19,071,135 | 1 | false | 0 | 0 | I'd suggest choosing any MIDI-parsing Python library and porting it to Python 3. | 1 | 1 | 0 | I've been looking for a python3 module that I can use to print out, e.g., "note 5" etc. at the correct intervals/periods from a midi file.
I have been unsuccessful in finding such a module for python3, any suggestions? | Midi analysis for python3 | 0.197375 | 0 | 0 | 556 |
19,068,730 | 2013-09-28T15:50:00.000 | 2 | 0 | 1 | 0 | ipython-notebook,auto-indent | 25,071,102 | 7 | false | 0 | 0 | In addition to adding
IPython.Cell.options_default.cm_config.indentUnit = 2;
to your custom.js file as suggested by Jakob, be sure to clear your browser cache as well before restarting ipython notebook!
Also, you may have to first create the ~/.config/ipython/profile_default/static/custom/ directory (use echo $(ipython locate default) to find your default directory) before adding the custom.js file. | 1 | 51 | 0 | I find that developing functions in IPython notebook allows me to work quickly. When I'm happy with the results I copy-paste to a file. The autoindent is 4 spaces, but the coding style for indentation at my company is 2 spaces. How do I change the autoindent to 2 spaces? | How do I change the autoindent to 2 space in IPython notebook | 0.057081 | 0 | 0 | 30,889 |
19,071,199 | 2013-09-28T20:10:00.000 | 6 | 0 | 0 | 0 | python,pandas,dataframe | 61,194,900 | 11 | false | 0 | 0 | This method does everything in place. Many of the other answers create copies and are not as efficient:
df.drop(df.columns[df.columns.str.contains('Test')], axis=1, inplace=True) | 1 | 184 | 1 | I have a pandas dataframe with the following column names:
Result1, Test1, Result2, Test2, Result3, Test3, etc...
I want to drop all the columns whose name contains the word "Test". The number of such columns is not static but depends on a previous function.
How can I do that? | Drop columns whose name contains a specific string from pandas DataFrame | 1 | 0 | 0 | 184,940 |
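A self-contained example of the in-place drop from this answer, plus a copy-based alternative:

```python
import pandas as pd

df = pd.DataFrame({"Result1": [1], "Test1": [2], "Result2": [3], "Test2": [4]})

# Copy-based alternative: keep only columns NOT containing 'Test'
# kept = df.loc[:, ~df.columns.str.contains("Test")]

# In place: drop every column whose name contains 'Test'
df.drop(df.columns[df.columns.str.contains("Test")], axis=1, inplace=True)
print(df.columns.tolist())  # ['Result1', 'Result2']
```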
19,071,286 | 2013-09-28T20:21:00.000 | 0 | 0 | 0 | 0 | python,django,web-services,qt | 19,071,331 | 1 | false | 1 | 0 | Django is a good candidate for the website, however:
It is not a good idea to run heavy functionality from a website; it should happen in a separate process.
All functions should be asynchronous, i.e. you should never wait for something to complete.
I would personally recommend writing a separate process with a message queue; the website would only ask that process for statuses and always display a result immediately to the user
You can use ajax so that the browser will always have the latest result.
ZeroMQ or Celery are useful for implementing the functionality.
You can implement functionality in C pretty easily. I recommend, however, that you write that functionality as pure C with a SWIG wrapper rather than writing it as an extension module for Python. That way the functionality will be portable and not dependent on the Python website. | 1 | 0 | 0 | I have to set up a program which reads in some parameters from a widget/gui, calculates some stuff based on database values and the input, and finally sends some ascii files via ftp to remote servers.
In general, I would suggest a python program for the tasks: write a Qt widget as a gui (interactively changing views, putting numbers into tables, setting up check boxes, switching between various layers - never done something as complex in python, but some experience in IDL with event handling etc), and set up data classes that have functions, both to create the ascii files with the given convention, and to send the files via ftp to some remote server.
However, since my company is a bunch of Windows users, each sitting at their personal desktop, installing python and all necessary libraries on each individual machine would be a pain in the ass.
In addition, in a future version the program is supposed to become smart and do some optimization 24/7. Therefore, it makes sense to put it to a server. As I personally rather use Linux, the server is already set up using Ubuntu server.
The idea is now to run my application on the server. But how can the users access and control the program?
The easiest way for everybody to access something like a common control panel would be a browser, I guess. I have to make sure only one person is sending signals to the same units at a time, but that should be doable via flags in the database.
After some googling, next to QtWebKit, django seems to be the first choice for such a task. But...
Can I run a full fledged python program underneath my web application? Is django the right tool to do so?
As mentioned previously, in the (intermediate) future (~1 year), we might have to implement some computationally expensive tasks. Is it then also possible to utilize C as one would within normal Python?
Another question I have is on the development. In order to become productive, we have to advance in small steps. Can I first create regular python classes, which later on can be imported to my web application? (Same question applies for widgets / QT?)
Finally: Is there a better way to go? Any standards, any references? | Python program on server - control via browser | 0 | 0 | 0 | 367 |
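A minimal sketch of the "separate process + message queue" pattern recommended in this answer, assuming Celery with a Redis broker (all names and URLs illustrative):

```python
# tasks.py -- runs in a worker process, started with: celery -A tasks worker
from celery import Celery

app = Celery("tasks",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def build_and_ftp(params):
    # Heavy work: query the DB, build the ASCII files, push them via FTP.
    return "done"

# In the Django view: fire and return immediately, then poll for status.
# result = build_and_ftp.delay(params)
# later: build_and_ftp.AsyncResult(result.id).state  # 'PENDING', 'SUCCESS', ...
```

The browser can then poll the view via ajax for the latest state, which matches the "never wait for something to complete" advice above.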
19,072,787 | 2013-09-28T23:29:00.000 | 0 | 0 | 1 | 0 | c++,python | 19,073,978 | 1 | false | 0 | 1 | Python doesn't know about C++, but it does know about structures. So have Python call a pure C function that creates a structure, and have the structure hold a pointer to the C++ object. This way Python sees a pure C interface, but the implementation of the C interface creates a private internal object.
We actually do this on one of my projects. It works and it's pretty portable. | 1 | 1 | 0 | I'm writing a C++ python extension and I've been experiencing a SIGSEGV whenever I call (from C++) a virtually-inherited method of a certain class that is also a PyObject (i.e. it has a PyObject_HEAD).
I finally remembered that python doesn't know anything about C++. Sure enough, GCC is putting a vtable pointer in the first 4 bytes of my object (you can tell because the first field of PyObject_HEAD is offset 4 bytes from the address of the object). When python INCREFs the object, it's actually altering the vtable pointer.
My question: what should I do to fix this? Moving the virtual methods to a subsidiary class would solve it, but it seems like admitting defeat. Any other thoughts / experiences?
(Python 2.7, GCC 4.7.2 on mingw32 / windows 7) | I think python is overwriting my vtable (c++ extension) | 0 | 0 | 0 | 226 |
19,073,579 | 2013-09-29T01:43:00.000 | 1 | 0 | 1 | 0 | python,subprocess | 19,075,566 | 1 | false | 0 | 0 | The question exactly as you ask it has no general answer. There might be custom ways; e.g. on Linux a lot of things are actually file descriptors, and there are ways to pass them to subprocesses, but it's not nicely Pythonic: you have to give them as numbers on the command line of the subprocess, and then the subprocess rebuilds a file object around the file descriptor (see file.fileno() and os.fdopen() for regular files; I'm not sure there are ways to do it in Python for other things than regular files...).
In your problem, if everything is in Python, why do you need to make subprocesses instead of doing it all in a single process?
If you really need to, then one general way is to use os.fork() instead of the subprocess module: you'd fork the process (which creates two copies of it); in the parent copy you wait for the child copy to terminate; and in the child copy you proceed to run the particular submodule. The advantage is that at the end the child process terminates, which cleans up what it did --- while at the same time starting with its own copy of almost everything that the parent had (file descriptors, database cursors, etc.) | 1 | 2 | 0 | I've written a large program in Python that calls numerous custom modules' main methods one after another. The parent script creates some common resources, like logging instances, database cursors, and file references which I then pass around to the individual modules. The problem is that I now need to call some of these modules by means of subprocess.check_output, and I don't know how I can share the aforementioned resources across these modules. Is this possible? | Possible to share resources (logging, database, file etc.) across Python subproccesses? | 0.197375 | 0 | 0 | 68 |
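A minimal sketch of the fork-based approach from this answer (POSIX only):

```python
import os

def run_isolated(func):
    pid = os.fork()          # POSIX only; unavailable on Windows
    if pid == 0:             # child: inherits copies of open fds, cursors...
        try:
            func()
        finally:
            os._exit(0)      # exit the child without re-running parent cleanup
    else:                    # parent: block until the child finishes
        os.waitpid(pid, 0)
```

Each submodule's main() could be passed as func; when the child exits, its work is cleaned up while the parent keeps its own copies of the shared resources.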
19,076,464 | 2013-09-29T09:34:00.000 | 0 | 1 | 0 | 1 | python,macos,chat | 19,076,713 | 1 | false | 0 | 0 | In general, it's not possible to get a MAC address of another host (computer) on the internet without running your own program on that host, and asking.
It's possible to get the MAC addresses of the active hosts on the local network (up to the next router) from the ARP cache. It's possible to get your own MAC address(es). All this is OS-dependent. | 1 | 0 | 0 | Is there any way to find MAC address of a device (in chat system) using Python?
except uuid library | Find MAC address of system, using python (in chat system) | 0 | 0 | 0 | 221 |
19,076,762 | 2013-09-29T10:08:00.000 | 0 | 0 | 0 | 1 | python,celery | 19,090,686 | 1 | false | 1 | 0 | I prefer to put configs in the project root folder:
It is an easy place to load the config from.
It is an easy place to find and edit the config.
However, you should load configs explicitly with one of the following methods: config_from_object, config_from_envvar or config_from_cmdline.
I am NOT using Celery with Django.
What is the location where I should copy celeryconfig.py in order for Celery to read it every time it runs? | Celery default configuration file | 0 | 0 | 0 | 407 |
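For illustration, a sketch assuming a Celery 3.x-style app object; celeryconfig.py can then live anywhere on the Python path:

```python
from celery import Celery

app = Celery("myapp")
app.config_from_object("celeryconfig")  # imports celeryconfig.py via sys.path
```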
19,078,170 | 2013-09-29T12:44:00.000 | 2 | 0 | 1 | 0 | python,json,settings,config,ini | 19,078,206 | 8 | false | 0 | 0 | Save and load a dictionary. You can have arbitrary keys and values, and an arbitrary number of key/value pairs. | 1 | 146 | 0 | I don't care if it's JSON, pickle, YAML, or whatever.
All other implementations I have seen are not forwards compatible, so if I have a config file, add a new key in the code, then load that config file, it'll just crash.
Is there any simple way to do this? | Python: How would you save a simple settings/config file? | 0.049958 | 0 | 0 | 187,280 |
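A sketch of the dictionary approach that stays forward-compatible: keep defaults in code and overlay whatever the file contains, so a file written before a new key existed simply falls back to the default instead of crashing:

```python
import json

DEFAULTS = {"colour": "blue", "volume": 7, "new_key": "fallback"}

def load_settings(path="settings.json"):
    settings = dict(DEFAULTS)              # start from current defaults
    try:
        with open(path) as f:
            settings.update(json.load(f))  # older files just leave gaps
    except (IOError, ValueError):
        pass                               # missing/corrupt file: defaults
    return settings

def save_settings(settings, path="settings.json"):
    with open(path, "w") as f:
        json.dump(settings, f, indent=2)
```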
19,078,656 | 2013-09-29T13:37:00.000 | 10 | 0 | 0 | 0 | python,tkinter | 19,079,141 | 1 | true | 0 | 1 | Use the format and the increment options. Format takes a string format value; use something like %.2f for a floating point number truncated to two decimal places.
The increment option specifies the increment value; the default is 1.0.
Now, using get() on a Spinbox returns an instance of str. Typecast it to float to get a floating-point value. | 1 | 8 | 0 | Is there a way to get a float value (like 1.91, 1.92 and so on) using the tkinter Spinbox Widget in Python 3.x?
Thanks in advance | Python Tkinter Spinbox with float | 1.2 | 0 | 0 | 3,803 |
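A small runnable example (Python 3 naming; the module is Tkinter on Python 2):

```python
import tkinter as tk

root = tk.Tk()
spin = tk.Spinbox(root, from_=0.0, to=5.0, increment=0.01, format="%.2f")
spin.pack()

def show():
    print(float(spin.get()))  # get() returns str; cast to float

tk.Button(root, text="Read", command=show).pack()
root.mainloop()
```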
19,078,711 | 2013-09-29T13:42:00.000 | 0 | 0 | 1 | 0 | ipython,ipython-notebook | 19,083,696 | 1 | false | 0 | 0 | Until you get a complete answer this might work. When I was having issues with this (a complete cleanout and reinstall of IPython sorted it for me), I could still import files by using the drag and drop interface.
I am on Win7 64, so a different OS | 1 | 1 | 0 | I'm looking at the IPython in-depth videos. One of the first exercises is importing files from the dashboard. It's a very simple process: press "click here" to import the file, select the IPython notebook file, and that's it. That's why I'm very frustrated. Every time I do this, the name of the file is on the list for a few seconds (with the upload button), and then disappears, without any error message on the dashboard nor in the terminal. Is there some kind of verbose output so I can see what's happening? I'm using IPython 0.13.2 on Fedora 18
Thanks! | I can't import files from the dashboard in ipython | 0 | 0 | 0 | 87 |
19,079,107 | 2013-09-29T14:20:00.000 | 1 | 0 | 0 | 0 | python,multithreading | 19,079,260 | 1 | true | 0 | 1 | A simple solution is:
to share a Queue object between the Gtk thread and the download thread
when a download is complete, you put the data in the queue (eg. a tuple with the
download URL and the downloaded contents) from the download thread
in the Gtk thread, you set up a glib timer checking periodically if something
new is in the queue (say, every 100 milliseconds for example) thanks to the "get_nowait"
method of the Queue object.
You can have multiple download threads, if needed. | 1 | 0 | 0 | Consider following problem:
I have a gtk / tk app which displays content from a website in a List(Store). I want to do the following things in order:
display the window & start downloading
show a progress bar
on completion of the downloads add the data into the list(Store)
This is the condition: the user has to be able to interact with the app while it is downloading. That means that the program is in the window's mainloop during the entire download.
What does not work:
urllib.urlopen() waits for the entire download to complete
Popen() does not allow the communication I want between the two threads
How to notify the program that the download has complete is the biggest question
Since I am event driven anyway because of Tk/Gtk I might as well use signals
My preferred way of solving this would be registering an additional signal "dl_done" and sending that signal to gtk when the download has finished. Is that even possible?
Any suggestions are appreciated! | Multithreading url requests in python | 1.2 | 0 | 1 | 97 |
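A sketch of the answer's pattern using PyGObject's GLib.timeout_add and the queue's get_nowait (the URL and the widget update are placeholders):

```python
import Queue            # 'queue' on Python 3
import threading
import urllib2
from gi.repository import GLib

results = Queue.Queue()

def download(url):
    data = urllib2.urlopen(url).read()   # blocks, but only in this thread
    results.put((url, data))

def poll_queue():
    try:
        url, data = results.get_nowait()
    except Queue.Empty:
        pass
    else:
        pass  # update the ListStore / progress bar here (GTK main thread)
    return True  # keep the 100 ms timer running

threading.Thread(target=download, args=("http://example.com",)).start()
GLib.timeout_add(100, poll_queue)  # check the queue from the main loop
```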
19,082,397 | 2013-09-29T19:37:00.000 | 7 | 0 | 1 | 0 | ipython-notebook | 48,157,900 | 3 | false | 0 | 0 | You can just enter %run 'NotebookA.ipynb' in Notebook B and you should be good to go! | 1 | 5 | 0 | I have a notebook which I intend to use as a library for other notebooks.
It has some functions that can read certain types of files, etc.
How can I include that notebook in a new notebook that should be able to use the functions in the library notebook? | Including a notebook in another notebook in IPython? | 1 | 0 | 0 | 4,153 |
19,083,941 | 2013-09-29T22:15:00.000 | 1 | 0 | 0 | 0 | python,opengl,vbo,pyopengl | 19,084,756 | 1 | true | 0 | 1 | You really shouldn't have to worry about performance for a simple sprite-based 2D game. Your graphics card is capable of rendering tens or hundreds of thousands of triangles per second (or more!), which is way more than you are likely to need.
Thus, it really doesn't matter much which method you choose to use for updating sprites. If you want to have animated sprites though, I recommend you look into 3D textures. They allow you to interpolate between animation frames, which makes your animations look even better!
As a side note, you mentioned using glTranslate, which is a function from OpenGL 1.x/2.x that has been removed from later versions of the API. I recommend that you try to use "modern OpenGL" (i.e. OpenGL 3.x/4.x). You can find plenty of information about the differences online. That said, it is still OK to use the older versions of the API if you have a specific reason for doing so, and if you do, functions like glTranslate will continue to work. | 1 | 0 | 0 | I'm using PyOpenGL to implement a small 2D game engine, and I'm hesitating over how to implement a Sprite class.
I'm already keeping the whole scene (tiled map) in a VBO, and all textures are kept in the same big texture. All the sprite's images are also in this texture. So I suppose that, for performance, I should include the sprite in the VBO, let's say starting at position sprite_start_position.
The first question is: since a sprite can have several stances (images), is it better to:
setting only one entry in the VBO for the sprite, and modifying the texture coords in this entry according to the stance, using glBufferSubData
setting as many entries in the VBO as there are stances, but drawing only the current one with glDrawArrays
other ?
The second is similar, but with the sprite position. Must I:
change the position of the right entry in the VBO with glBufferSubData
use some glTranslate before glDrawArrays(GL_QUADS, sprite_start_position, 1)
other ?
I'm relatively new to OpenGL and I still feel a little lost in this API... | Fastest way to implement a sprite | 1.2 | 0 | 0 | 442 |
19,084,860 | 2013-09-30T00:27:00.000 | 0 | 0 | 1 | 1 | python | 19,085,226 | 1 | false | 0 | 0 | Check sys.path. Does it contain the location where utility.py is kept? Does it contain the current directory (an empty string)?
That could be the issue. | 1 | 1 | 0 | I have a question with loading data in Python.
Basically, I defined all the classes I need in a file called "utility.py", and I have one data file, "result.data", which stores results in the form of a specific class called "Solution" that is defined in "utility.py". What I want to do is load "result.data" in another .py file (e.g. new.py). From what I know, the cPickle module is the one that can be used. So in new.py, I wrote "from utility import *" and "Sol=cPickle.load(open('Result.data'))". This works fine when I work on a Windows-based system. However, when I try to load the result.data I generated on Windows into new.py on a Linux or Mac system, the error "ImportError: No module named utility" always occurs.
I'm not a professional programmer, and I have just started to code in Python. Could you please give some guidance on how to solve this problem? Thank you in advance. | How to load data in Python for which data is stored as a customized class | 0 | 0 | 0 | 137 |
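The practical fix for the ImportError is to make sure the directory containing utility.py is on sys.path before unpickling; a sketch with a hypothetical path:

```python
import sys
sys.path.append("/home/me/project")  # directory that contains utility.py

import cPickle  # 'pickle' on Python 3
with open("Result.data", "rb") as f:
    sol = cPickle.load(f)  # pickle imports utility to rebuild Solution
```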
19,085,887 | 2013-09-30T03:07:00.000 | 2 | 0 | 1 | 0 | python,module,wrapper,argparse | 19,086,137 | 3 | false | 0 | 0 | Provided that you are calling third-party modules, a possible solution is to change sys.argv at runtime to reflect the correct parameters for the module you're calling, once you're done with your own parameters. (Note that Python has no sys.argc; len(sys.argv) plays that role.)
I am trying to develop a python module as wrapper where I call another 3rd party module with its .main() and provide the required parameter which I need to get from command line in my module. I need few parameter for my module too.
I am using argparse to parse command line for calling module and my module. The calling parameter list is huge (more than 40) which are optional but may require anytime who will use my module. Currently I have declared few important parameters in my module to parse but I need to expand with all the parameter.
I thought of providing all the parameter in my module without declaring in add_argument. I tried with parse_known_args which also require declaration of all parameter is required.
Is there any way where I can pass on all parameter to calling module without declaring in my module? If its possible please let me know how it can be done.
Thanks in advance, | Call another module with passing its command line parameter from my module using argparse in python | 0 | 0 | 0 | 2,494 |
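A sketch combining parse_known_args with the sys.argv rewriting suggested in this answer; third_party is a hypothetical name for the wrapped module:

```python
import argparse
import sys
import third_party  # hypothetical wrapped module exposing main()

parser = argparse.ArgumentParser()
parser.add_argument("--my-option")           # only MY parameters declared
my_args, passthrough = parser.parse_known_args()

# Rebuild argv so the third-party module's own parser sees the rest.
sys.argv = [sys.argv[0]] + passthrough
third_party.main()
```

parse_known_args() returns a (namespace, leftover-list) pair, so nothing the wrapped module understands has to be re-declared in the wrapper.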
19,086,030 | 2013-09-30T03:27:00.000 | 2 | 0 | 1 | 0 | python,licensing,virtualenv,pip,easy-install | 42,117,152 | 11 | false | 0 | 0 | With pip:
pip show django | grep License
If you want to get the PyPI classifier for the license, use the verbose option:
pip show -v django | grep 'License ::' | 1 | 42 | 0 | I'm trying to audit a Python project with a large number of dependencies and while I can manually look up each project's homepage/license terms, it seems like most OSS packages should already contain the license name and version in their metadata.
Unfortunately I can't find any options in pip or easy_install to list more than the package name and installed version (via pip freeze).
Does anyone have pointers to a tool to list license metadata for Python packages? | Can pip (or setuptools, distribute etc...) list the license used by each installed package? | 0.036348 | 0 | 0 | 22,013 |
19,086,425 | 2013-09-30T04:23:00.000 | 1 | 1 | 0 | 0 | python,importerror,xlwt | 19,756,366 | 1 | true | 1 | 0 | I think the raw_input command is just not supported within the CAE environment.
You can use getInput() or getInputs() instead. | 1 | 0 | 0 | Windows Machine, Python 2.4:
When I run my script in Abaqus' "Run Script...", I get an ImportError saying that xlwt module does not exist. The same script runs perfectly well in my Eclipse IDE or Python IDE. I made sure that I gave the right path to the Python Library.
Any help in this regard would be appreciated. Thanks! | Running xlwt module in Abaqus | 1.2 | 0 | 0 | 627 |
19,086,885 | 2013-09-30T05:17:00.000 | 0 | 0 | 0 | 0 | python,sql,web-applications,flask | 19,087,185 | 2 | false | 1 | 0 | You can use SQLAlchemy. It's available as a plug-in. | 1 | 1 | 0 | I'm a complete beginner to Flask and I'm starting to play around with making web apps.
I have a hard time figuring out how to enforce unique user names. I'm thinking about how to do this in SQL, maybe with something like user_name text unique on conflict fail, but then how do I catch the error back in Python?
Alternatively, is there a way to manage this that's built in to Flask? | How do I enforce unique user names in Flask? | 0 | 1 | 0 | 1,118 |
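A sketch of the SQL-constraint route using the standard sqlite3 module: the UNIQUE column makes the database reject duplicates, which Python sees as an IntegrityError:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_name TEXT UNIQUE)")

def add_user(name):
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO users (user_name) VALUES (?)", (name,))
        return True
    except sqlite3.IntegrityError:  # raised for the duplicate insert
        return False

print(add_user("alice"))  # True
print(add_user("alice"))  # False -> show "name taken" in the Flask view
```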
19,088,527 | 2013-09-30T07:19:00.000 | 0 | 0 | 0 | 0 | python,scipy,hierarchical-clustering,dendrogram | 21,080,034 | 1 | false | 0 | 0 | If you're really only interested in distance proportions between the fusions, you could
adapt your input linkage (subtract an offset from the third column of the linkage matrix). This will distort the absolute cophenetic distances, of course.
do some normalization of your input data, before clustering it
Or you
manipulate the dendrogram axes / adapt limits (I didn't try that) | 1 | 2 | 1 | I am using scipy.cluster.hierarchy as sch to draw a dendrogram after making a hierarchical clustering. The problem is that the clustering happens at the top of the dendrogram, between 0.8 and 1.0, which is the similarity degree on the y axis. How can I "cut" all of the graph from 0 to 0.6, where nothing "interesting" is happening graphically? | How to resize y axis of a dendogram | 0 | 0 | 0 | 362 |
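To literally cut the uninformative 0-0.6 region off the plot, you can clamp the y axis after drawing; a sketch with random data:

```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.cluster.hierarchy as sch

X = np.random.rand(20, 4)
Z = sch.linkage(X, method="average")

sch.dendrogram(Z)
plt.ylim(0.6, plt.ylim()[1])  # hide everything below 0.6, keep the top
plt.show()
```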
19,088,988 | 2013-09-30T07:47:00.000 | 2 | 0 | 1 | 0 | python,types,type-safety | 19,091,389 | 2 | false | 0 | 0 | Python's notion of a "semantic type" is called a class, but as mentioned, Python is dynamically typed, so even using custom classes instead of tuples you won't get any compile-time error - at best you'll get runtime errors if your classes are designed in such a way that trying to use one instead of the other will fail.
Now classes are not just about data, they are about behaviour too, so if you have functions that do waveform-specific computations these functions would probably become methods of the Waveform class, and idem for the Point part, and this might be enough to avoid logical errors like passing a "waveform" tuple to a function expecting a "point" tuple.
To make a long story short: if you want a statically typed functional language, Python is not the right tool (Haskell might be a better choice). If you really want / have to use Python, try using classes and methods instead of tuples and functions; it still won't detect type errors at compile time, but chances are you'll have fewer type errors AND these type errors will be detected at runtime instead of producing wrong results. | 1 | 2 | 0 | In my recent project I have the problem that some values are often misinterpreted. For instance, I calculate a wave as a sum of two waves (for which I need two amplitudes and two phase shifts), and then sample it at 4 points. I pass these tuples of four values to different functions, but sometimes I make the mistake of passing wave parameters instead of sample points.
These errors are hard to find, because all the calculations work without any error, but the values are totally meaningless in this context and so the results are just wrong.
What I want now is some kind of semantic type. I want to state that the one function returns sample points and the other function expects sample points, and that I cannot do anything that conflicts with these declarations without immediately getting an error.
Is there any way to do this in python? | Semantic Type Safety in Python | 0.197375 | 0 | 0 | 686 |
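A sketch of the class-based approach suggested in this answer, using namedtuple to create two distinct tuple-like types plus a runtime isinstance guard (names illustrative):

```python
from collections import namedtuple

WaveParams = namedtuple("WaveParams", "amp1 phase1 amp2 phase2")
SamplePoints = namedtuple("SamplePoints", "p0 p1 p2 p3")

def analyse(samples):
    if not isinstance(samples, SamplePoints):
        raise TypeError("expected SamplePoints, got %r" % type(samples))
    return sum(samples) / 4.0

analyse(SamplePoints(0.1, 0.5, 0.9, 0.5))   # fine
analyse(WaveParams(1.0, 0.0, 0.5, 3.14))    # raises TypeError at runtime
```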
19,090,568 | 2013-09-30T09:19:00.000 | 0 | 0 | 1 | 0 | python,multithreading,performance,simulation | 19,102,519 | 4 | false | 0 | 0 | Some random thoughts here:
I did rather well with several hundred threads working like this in Java; it can be done with the right language. (But I haven't tried this in Python.)
In any language, you could run the master node code in one thread; just have it loop continuously, running the code for each master in each cycle. You'll lose the benefits of multiple cores that way, though. On the other hand, you'll lose the problems of multithreading, too. (You could have, say, 4 such threads, utilizing the cores but getting the multithreading headaches back. It'll keep the thread-overhead down, too, but then there's blocking...)
One big problem I had was threads blocking each other. Enabling 100 threads to call the same method on the same object at the same time without waiting for each other requires a bit of thought and even research. I found my multithreading program at first often used only 25% of a 4-core CPU even when running flat out. This might be one reason you're running slow.
Don't have your slave nodes repeatedly send data. The master nodes should come alive in response to data coming in, or have some way of storing it until they do come alive, or some combination.
It does pay to have more threads than cores. Once you have two threads, they can block each other (and will if they share any data). If you have code to run that won't block, you want to run it in its own thread so it won't be waiting for code that does block to unblock and finish. I found once I had a few threads, they started to multiply like crazy--hence my hundreds-of-threads program. Even when 100 threads block at one spot despite all my brilliance, there's plenty of other threads to keep the cores busy! | 1 | 0 | 0 | I'm working on simulating a mesh network with a large number of nodes. The nodes pass data between different master nodes throughout the network.
Each master comes live once a second to receive the information, but the slave nodes don't know when the master is up or not, so when they have information to send, they try and do so every 5 ms for 1 second to make sure they can find the master.
Running this on a regular computer with 1600 nodes results in 1600 threads and the performance is extremely bad.
What is a good approach to handling the threading so each node acts as if it is running on its own thread?
In case it matters, I'm building the simulation in python 2.7, but I'm open to changing to something else if that makes sense. | Multithreading With Very Large Number of Threads | 0 | 0 | 0 | 617 |
19,093,260 | 2013-09-30T11:38:00.000 | 6 | 0 | 0 | 1 | python,pycharm | 22,637,135 | 2 | false | 0 | 0 | I was able to piggyback the X11 forwarding through another SSH connection. Try setting the DISPLAY environment variable in your PyCharm run configuration like so:
DISPLAY=localhost:102
Check the value of DISPLAY in the other connection to see exactly what the value should be. | 1 | 5 | 0 | I want to configure PyCharm 3.0 to use a Remote Python Interpreter.
The Problem is, I have to connect over a SSH Gateway:
MyMachine -> Gateway -> Machine with Python
When I connect via Cygwin I type the following: ssh -t [email protected] "ssh [email protected]"
Is there a way to achieve this in PyCharm?
Another question, can I forward the X11 server to PyCharm (so that I can view the matplotlib plots on my machine?)
Regards,
m | Pycharm Remote Python Interpreter over SSH Gateway, X11 forwarding | 1 | 0 | 0 | 2,948 |
19,096,111 | 2013-09-30T13:56:00.000 | 0 | 0 | 0 | 0 | javascript,python,selenium,selenium-webdriver,data-driven-tests | 19,096,169 | 2 | false | 1 | 0 | I think $("#id").val() should give you the value, I guess | 1 | 0 | 0 | I'm trying to implement a data-driven test approach using Selenium (Python), but I've run into an issue selecting dynamic values from multiple combo boxes. I'm currently aware of one option, using the method driver.execute_script("JAVASCRIPT TO GET COMBO BOX OPTION"), but hard-coding the values defeats the purpose of automated data-driven testing. Is there any other solution?
P.S Please let me know if there is any additional info needed.
Thanks,
Eric | Selecting combo box values | 0 | 0 | 1 | 252 |
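The answer shows the jQuery route; within Python Selenium itself, the usual way to read and set combo-box values is the Select helper (2013-era locator API, hypothetical page and element id):

```python
from selenium import webdriver
from selenium.webdriver.support.ui import Select

driver = webdriver.Firefox()
driver.get("http://example.com/form")  # hypothetical page

combo = Select(driver.find_element_by_id("country"))  # hypothetical id
print(combo.first_selected_option.text)   # read the current value
combo.select_by_visible_text("Canada")    # set a value from test data
```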
19,097,057 | 2013-09-30T14:40:00.000 | 2 | 0 | 1 | 0 | python,virtualenv,pip | 56,443,346 | 2 | false | 0 | 0 | Note that the answer above is incorrect. The exact regex from the code is re.sub('[^A-Za-z0-9.]+', '-', name). But if you try pip install foo!bar you get a big parse error, so this isn't really true either. | 2 | 29 | 0 | Somewhere, underscores get changed to dashes if you install from a git repo with "pip install -e ...".
Is there any way to stop this?
I want to automate stuff. I want repo foo_bar to be ~/src/foo_bar, not ~/src/foo-bar. | pip -e: No magic underscore to dash replacement | 0.197375 | 0 | 0 | 9,425 |
19,100,648 | 2013-09-30T17:53:00.000 | 0 | 0 | 1 | 0 | python,linux,logging,exception-handling,ioerror | 19,101,626 | 2 | true | 0 | 0 | You can create your own logging handler class that derives from the logging module's classes but calls log within a try: ... except: clause. | 1 | 1 | 0 | I am running a Python script which downloads data and processes it. I am also logging some key information. My question is: how will I catch an out-of-memory exception if it is thrown by logging, as logging writes to a file? Do I have to put all logging calls within a try and except? | Catch "Out of memory error" from python logging | 1.2 | 0 | 0 | 1,258 |
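A sketch of that idea; note that the stdlib logging module already traps errors raised inside emit() and routes them to the handler's handleError() hook, so overriding that hook (or setting logging.raiseExceptions = False) is the cleanest way to keep a write failure from disturbing the main script:

```python
import logging

class SafeFileHandler(logging.FileHandler):
    def handleError(self, record):
        # emit() calls this when writing fails (e.g. disk full).
        # The default prints a traceback to stderr; we silently drop it.
        pass

logger = logging.getLogger("downloader")
logger.addHandler(SafeFileHandler("run.log"))
logger.error("still works even if the log file cannot be written")

# Alternatively, globally: logging.raiseExceptions = False
```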
19,100,800 | 2013-09-30T18:02:00.000 | 0 | 0 | 0 | 0 | python,django,api,rest,tastypie | 19,103,760 | 1 | false | 1 | 0 | Do you need to specify the format, such as: /api/v1/groups/1/?format=json ? | 1 | 0 | 0 | I'm trying to update the groups assigned to users via an API (via rest) with Tastypie.
I tried passing the group id's directly in, however it says that the URL provided is not a valid resource. I then tried passing in a URL such as '/api/v1/groups/1/' but that is saying that's not a link to a valid resource.
Any hints? I'm creating user records just fine from a standard django view/form, but I would like to do this as a REST action. | Tastypie - get user groups resource_uri | 0 | 0 | 0 | 104 |
19,104,398 | 2013-09-30T21:44:00.000 | 0 | 0 | 1 | 0 | python,portability | 19,104,420 | 4 | false | 0 | 0 | You can use the new print syntax on older versions of Python. | 1 | 8 | 0 | The print function's syntax has changed in newer versions of Python. The problem is that at home I have the newer version of Python, while at the office the old one. How can I have the same program run on both the newer and older Python versions? | How can I tell new Python to use the old print | 0 | 0 | 0 | 7,839 |
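The key piece is the __future__ import, which makes the Python 3 print function available on Python 2.6+, so the same file runs on both interpreters:

```python
from __future__ import print_function  # must come before other statements

print("works on Python 2.6+ and Python 3")
print("no newline", end="")  # py3-style keyword arguments now allowed
```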
19,104,798 | 2013-09-30T22:12:00.000 | 6 | 0 | 1 | 1 | python,file,output | 19,105,021 | 1 | true | 0 | 0 | There are two points at which your file can buffer - Python's internal buffers and the buffers on the operating system. This is a performance boost that avoids system calls and disk writes while the buffer is filling up.
Calling file.flush() will push the internal buffer to the operating system. You can additionally call fsync to request the operating system to save to disk.
Usually you can leave the operating system to do what it knows best, so calling flush is usually enough for most applications. The same is partially true for Python's internal buffer - it knows best in terms of performance, but you may require more frequent writes and be willing to pay the additional cost. The only way to know the exact cost is to measure it both ways. | 1 | 3 | 0 | I thought writing a file gives real-time output, since that is the case when I use C/C++ to write files. But when I run a Python program, it seems the output file is always 0 bytes until the whole program has finished running. Even with nohup python xxx.py &, the printed output in nohup.out isn't real-time and can only be seen after execution.
I'm now running a really big program and want to see the progress in the file; how can I achieve it? | Why I do not see realtime output in the output file? | 1.2 | 0 | 0 | 2,279 |
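A sketch showing both levels of flushing for a progress log:

```python
import os

with open("progress.log", "w") as f:
    for step in range(1000):
        f.write("finished step %d\n" % step)
        f.flush()               # push Python's buffer to the OS
        os.fsync(f.fileno())    # ask the OS to hit the disk (often overkill)
```

For watching progress with tail -f, flush() alone is usually sufficient; fsync() only matters if you need the bytes durable on disk.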
19,107,617 | 2013-10-01T03:48:00.000 | 6 | 0 | 0 | 0 | python,numpy,scipy,linear-algebra,sparse-matrix | 19,616,987 | 4 | false | 0 | 0 | The "standard" way to solve this problem is with a cholesky decomposition, but if you're not up to using any new compiled code, then you're out of luck. The best sparse cholesky implementation is Tim Davis's CHOLMOD, which is licensed under the LGPL and thus not available in scipy proper (scipy is BSD). | 1 | 26 | 1 | I am trying to figure out the fastest method to find the determinant of sparse, symmetric, real matrices in Python, using the scipy sparse module, but I am really surprised that there is no determinant function. I am aware I could use LU factorization to compute the determinant but don't see an easy way to do it, because the return of scipy.sparse.linalg.splu is an object and instantiating a dense L and U matrix is not worth it - I may as well do sp.linalg.det(A.todense()) where A is my scipy sparse matrix.
I am also a bit surprised that others have not faced the problem of efficient determinant computation within scipy. How would one use splu to compute the determinant?
I looked into pySparse and scikits.sparse.chlmod. The latter is not practical for me right now - it needs package installations, and I'm also not sure how fast the code is before I go to all the trouble.
Any solutions? Thanks in advance. | How to compute scipy sparse matrix determinant without turning it to dense? | 1 | 0 | 0 | 6,098 |
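If the magnitude of the determinant is enough (it often is for symmetric positive-definite matrices), splu can be used without densifying: SuperLU's L factor has a unit diagonal, so |det A| is the product of U's diagonal. A sketch ignoring the sign and working in log space to avoid overflow (assumes a scipy version where the SuperLU object exposes its U factor):

```python
import numpy as np
from scipy.sparse.linalg import splu

def log_abs_det(A):
    lu = splu(A.tocsc())
    # |det(A)| = |prod(diag(U))| because diag(L) is all ones;
    # row/column permutations only flip the sign, ignored here.
    return np.sum(np.log(np.abs(lu.U.diagonal())))
```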
19,108,182 | 2013-10-01T04:45:00.000 | 0 | 0 | 0 | 0 | python | 19,108,770 | 1 | false | 1 | 0 | Set the HTTP_PROXY environment variable (and export it), and Python will honour that (as far as the standard library is used). | 1 | 0 | 0 | The idea is that, say, a developer has a set of tests to run against localhost:8000 and he has hardcoded that in his tests.
When we setup a proxy in a browser, the browser handles the proxy so that users only care about typing localhost:8000 instead of localhost:proxy_port. Browser actually sends request and receives response from the proxy port.
Can we simulate this so that the tests don't have to change to localhost:proxy_port (with the proxy server knowing to route to port 8000)? Instead, the developer can continue to use localhost:8000 in his tests, but when he runs them, the requests automatically go through the proxy server.
PS: Also without changing the port of the server, since the assumption is that port 8000 is running the application server, and changing it to another port can break other things! So saying "change the proxy server port to 8000 and my webapp server to 8001" doesn't solve the whole problem. | Can we simulate a browser proxy mechanism in a Python script? | 0 | 0 | 1 | 486 |
19,109,538 | 2013-10-01T06:36:00.000 | 1 | 0 | 0 | 1 | python,windows,qt,winapi,ubuntu | 19,109,762 | 3 | false | 0 | 0 | Afaik Qt itself will not allow you to do this, at least it did not in prior versions. To solve this on Windows you will have to use the Win API's EnumProcesses, while on Linux you could use the /proc filesystem, which holds information about running processes | 1 | 1 | 0 | I want to get the list of programs that shows in the Applications tab of Windows Task Manager (including the application icon and its name). I wonder: which Windows APIs should I use?
If I want to do the same thing on Ubuntu, then which Ubuntu APIs should I use? | get the list of programs currently running on Windows or Ubuntu | 0.066568 | 0 | 0 | 1,538 |
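On Linux, a sketch of the /proc approach (plain standard library; icon lookup would still need the desktop environment):

```python
import os

def running_processes():
    procs = []
    for entry in os.listdir("/proc"):
        if entry.isdigit():                      # one directory per PID
            try:
                with open("/proc/%s/comm" % entry) as f:
                    procs.append((int(entry), f.read().strip()))
            except IOError:                      # process exited meanwhile
                pass
    return procs

for pid, name in running_processes():
    print(pid, name)
```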
19,114,113 | 2013-10-01T10:40:00.000 | 0 | 0 | 0 | 0 | python,http,cors,gevent | 21,741,160 | 1 | false | 1 | 0 | Practically the only option in this situation is to switch to using WSGI. I ended up switching to pywsgi.WSGIServer, and the problem solved itself.
It's important to understand that switching to WSGI in reality introduces very little (if any) overhead, giving you so many benefits that the practical pros far outweigh the hypothetical cons. | 1 | 1 | 0 | I am building a gevent application in which I use gevent.http.HTTPServer. The application must support CORS, and properly handle HTTP OPTIONS requests. However, when OPTIONS arrives, HTTPServer automatically sends out a 501 Not Implemented, without even dispatching anything to my connection greenlet.
What is the way to work around this? I would not want to introduce an extra framework/web server via WSGI just to be able to support HTTP OPTIONS. | CORS with gevent | 0 | 0 | 1 | 374 |
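A sketch of the switch described: a bare WSGI app under gevent.pywsgi that answers preflight OPTIONS itself (the header set is illustrative, not a complete CORS policy):

```python
from gevent import pywsgi

CORS = [("Access-Control-Allow-Origin", "*"),
        ("Access-Control-Allow-Methods", "GET, POST, OPTIONS"),
        ("Access-Control-Allow-Headers", "Content-Type")]

def app(environ, start_response):
    if environ["REQUEST_METHOD"] == "OPTIONS":   # preflight: empty 200 + CORS
        start_response("200 OK", CORS)
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")] + CORS)
    return [b"hello"]

pywsgi.WSGIServer(("0.0.0.0", 8000), app).serve_forever()
```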
19,120,229 | 2013-10-01T15:30:00.000 | 1 | 0 | 0 | 1 | python,wing-ide | 19,120,750 | 1 | true | 0 | 0 | The location of the python.exe for Python 3.3 can vary depending on how you installed it. Probably the best bet is to search w/ Spotlight for python.exe, press "Show All" in the drop down menu, change to "File Name" instead of "Contents" search and then click on results to see the full path at the bottom of the search results window. You'll get at least 2-3 results and the full path should make clear which is the correct one. Then enter that into Python Executable in the Configure Python dialog, accessed from the Source menu in Wing 101. You'll need to restart the Python Shell in Wing 101 from its Options menu before it switches to the new Python version. | 1 | 2 | 0 | I am relatively new to programming, and I am using Wing101 version: 5.0.0-b8 (rev 29847).
The Python Shell within Wing101 is version 2.7.2; how do I configure it to use Python 3.3.2?
I have downloaded Python 3.3.2 and I need the custom Python Executable. I previously tried "/usr/bin/python" as my custom python executable, but it doesn't work.
I am on a Mac 10.8.3 | Wing101 - Configure python 3.3.2 from 2.7.2 on a mac | 1.2 | 0 | 0 | 10,844 |
19,123,609 | 2013-10-01T18:46:00.000 | 0 | 1 | 0 | 1 | python,unit-testing,continuous-integration,gitlab | 27,779,548 | 1 | false | 0 | 0 | - I usually replicate the problem by using a Docker container only for the runner and running the tests inside it; I don't know if you have it set up like this =(.
- Normally the test doesn't actually fail: if you log in to the container you will see it actually does everything but doesn't report back to the GitLab CI. Don't freak out, it does its job, it simply does not say so.
PS: you can see if it's actually running by checking the processes on the machine.
Example:
I'm running a GitLab CI with Java and Docker:
GitLab CI starts doing its thing, then hangs at a download; meanwhile I log in to the container and check that it is actually working and manages to upload my compiled Docker image. | 1 | 2 | 0 | I'm using gitlab-ci to automatically build a C++ project and run unit tests written in Python (it runs the daemon, and then communicates via the network/socket based interface).
The problem I'm finding is that when the tests are run by the GitLab-CI runner, they fail for various reasons (with one test, it stalls indefinitely on a particular network operation, on the other it doesn't receive a packet that should have been sent).
BUT: When I open up SSH and run the tests manually, they all work successfully (the tests also succeed on all of our developers' machines [linux/windows/OSX]).
At this point I've been trying to replicate enough of the build/test conditions that gitlab-ci is using but I don't really know any exact details, and none of my experiments have reproduced the problem.
I'd really appreciate help with either of the following:
Guidance on running the tests manually outside of gitlab-ci, but replicating its environment so I can get the same errors/failures and debug the daemon and/or tests, OR
Insight into why the test would fail when run by the GitLab CI Runner
Sidetrack 1:
For some reason, not all the (mostly debugging) output that would normally be sent to the shell shows up in the gitlab-ci output.
Sidetrack 2:
I also played around setting it up with jenkins, but one of the tests fails to even connect to the daemon, while the rest do it fine. | Tests fail ran by gitlab-ci, but not ran in bash | 0 | 0 | 0 | 1,059 |
19,126,139 | 2013-10-01T21:19:00.000 | 0 | 0 | 1 | 0 | python | 19,126,225 | 3 | false | 0 | 0 | Both work; it all comes down to which you're more comfortable using: Windows or Cygwin. | 2 | 2 | 0 | I have a Windows 7 computer and was wondering whether to use the Windows version of Python or the one in Cygwin. Especially with regard to modules that do not come pre-installed, with which one is it easier to install new modules? | Should I use python on windows or cygwin? | 0 | 0 | 0 | 958
19,126,139 | 2013-10-01T21:19:00.000 | 3 | 0 | 1 | 0 | python | 19,126,151 | 3 | false | 0 | 0 | ActivePython works just fine on Win7. Cygwin would add an unnecessary layer of complexity. | 2 | 2 | 0 | I have a Windows 7 computer and was wondering whether to use the Windows version of Python or the one in Cygwin. Especially with regard to modules that do not come pre-installed, with which one is it easier to install new modules? | Should I use python on windows or cygwin? | 0.197375 | 0 | 0 | 958
19,127,224 | 2013-10-01T22:40:00.000 | 8 | 0 | 0 | 0 | python,django,newrelic | 19,128,804 | 1 | true | 1 | 0 | For the Python agent and monitoring of a Django web application, the overhead per request is driven by how many functions are executed within a specific request that are instrumented. This is because full profiling is not being done. Instead only specific functions of interest are instrumented. It is therefore only the overhead of having a wrapper being executed for that one function call, not nested calls, unless those nested functions were in turn ones which were being instrumented.
Specific functions which are instrumented in Django are the middleware and view handler function, plus template rendering and the function within the template renderer which deals with each template block. Distinct from Django itself, you have instrumentation on the low level database client module functions for executing a query, plus memcache and web externals etc.
What this means is that if the execution of a specific web request only passed through 100 instrumented functions, then it is only the execution of those which incurs an extra overhead. If instead your view handler performed a large number of distinct database queries, or you have a very complicated template being rendered, the number of instrumented functions could be a lot more, and as such the overhead for that web request will be more. That said, if your view handler is doing more work, then it would already generally have a longer response time than a less complex one.
In other words, the per request overhead is not fixed and depends on how much work is being done, or more specifically how many instrumented functions are invoked. It is not therefore possible to quantify things and give you a fixed per request figure for the overhead.
That all said, there will be some overhead and the general target range being aimed at is around 5%.
What generally happens though is that the insight which is gained from having the performance metrics means that for most customers there are usually some quite easy improvements that can be found almost immediately. Having made such changes, response times can quite quickly be brought down to be below what they were before you started monitoring, so you end up being ahead of where you were to start with when you had no monitoring. With further digging and tuning, improvements can be even much more dramatic. Pay attention to certain aspect of the performance metrics being provided and you can also better tune your WSGI server and perhaps better utilise it and reduce the number of hosts required and so reduce your hosting costs. | 1 | 8 | 0 | I am working on a large Django (v1.5.1) application that includes multiple application servers, MySQL servers etc. Before rolling out NewRelic onto all of the servers I want to have an idea of what kind of overhead I will incur per transaction.
If possible I'd like to even distinguish between the application tracking and the server monitoring; that would be ideal.
Does anyone know of generally accepted numbers for this? Perhaps a site that is doing this sort of investigation or steps so that we can do the investigation on our own. | Looking to quantify the performance overhead of NewRelic monitoring in python django app | 1.2 | 0 | 0 | 2,293 |
19,128,188 | 2013-10-02T00:31:00.000 | 0 | 0 | 0 | 0 | android,python,appium | 19,128,261 | 3 | false | 0 | 1 | A simple VM can emulate an Android device. Not difficult. | 2 | 0 | 0 | I am using Appium with the Sudoku app for Android on a Windows 7 machine using Python. Can someone help me find out what the app activity for opening this is, and how they were able to figure that out? | What is the android activity for opening sudoku and how do I find how to? | 0 | 0 | 0 | 103
19,128,188 | 2013-10-02T00:31:00.000 | 0 | 0 | 0 | 0 | android,python,appium | 51,240,166 | 3 | true | 0 | 1 | You just need to run the adb shell command in a command prompt. After that, open the Sudoku app on your device (make sure your device is connected to your laptop/PC), go back to the command prompt, and run the command below:
dumpsys window windows | grep -E 'mCurrentFocus'
The above command will give you the package name & activity name of the currently focused app. | 2 | 0 | 0 | I am using Appium with the Sudoku app for Android on a Windows 7 machine using Python. Can someone help me find out what the app activity for opening this is, and how they were able to figure that out? | What is the android activity for opening sudoku and how do I find how to? | 1.2 | 0 | 0 | 103
19,129,052 | 2013-10-02T02:22:00.000 | 2 | 1 | 0 | 0 | python,r,igraph | 19,132,512 | 1 | true | 0 | 0 | The two interfaces use different data models to store the graph attributes, so I think there is no safe and sane way to access an igraph object in R from Python or vice versa, apart from saving it and then loading it back. Using the GraphML format is probably your safest bet as it preserves all the attributes that are basic data types (numbers and strings). | 1 | 1 | 0 | I have a graph object created with the igraph R package.
If I understand the architecture of the igraph software package correctly, the igraph R package is an interface for using igraph from R. Then, since there is also an igraph Python interface, I wonder if it is possible to access my igraph object created with R via Python directly, or if the only way to access an igraph R object from Python is to export the igraph R object with write.graph() in R and then import it with the igraph Python package. | Access igraph R objects from Python | 1.2 | 0 | 0 | 249
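A minimal sketch of the GraphML round trip the answer suggests; the file name is hypothetical, and the export side is done once in R with write.graph(g, "graph.graphml", format = "graphml"):

from igraph import Graph

g = Graph.Read_GraphML("graph.graphml")  # load the graph exported from R
print(g.summary())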
19,130,365 | 2013-10-02T05:12:00.000 | 1 | 0 | 0 | 0 | python,video,rgb,gstreamer | 19,220,952 | 1 | false | 0 | 0 | If this really is raw rgb video, there is no (realistic) way to detect the start of the frame. I would assume your video would come as whole frames, so one buffer == one frame, and hence no need for such detection. | 1 | 0 | 1 | I have raw-rgb video coming from PAL 50i camera. How can I detect the start of frame, just like I would detect the keyframe of h264 video, in gstreamer? I would like to do that for indexing/cutting purposes. | How to detect start of raw-rgb video frame? | 0.197375 | 0 | 0 | 234 |
19,130,630 | 2013-10-02T05:40:00.000 | 2 | 0 | 0 | 0 | python,django,django-templates,django-cms | 19,136,318 | 2 | true | 1 | 0 | As you've seen, you can't truncate a placeholder, as a placeholder's job is simply to render content plugins that are added to it.
Your only viable option is to truncate the field in the render template of the plugin, or to add a separate field on your model that can store the truncated text. Such a field could be populated automatically using a post_save signal handler. | 1 | 0 | 0 | I'm creating my own Django CMS blog plugin. I'm using a placeholder to hold the full content of the blog entry and I'm trying to figure out how to automatically create an excerpt from this placeholder.
If it were simply a text field I know I could use "|truncatechars:15" in the template, but I don't know how to do this for a placeholder.
Is there something I can use in the template or in the 'views.py' in order to truncate the placeholder?
Thanks in advance. | Truncate Django CMS Placeholder | 1.2 | 0 | 0 | 321 |
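A minimal sketch of the post_save approach from the answer; the Entry model and its body/excerpt fields are hypothetical names, not part of django-cms:

from django.db.models.signals import post_save
from django.dispatch import receiver
from django.template.defaultfilters import truncatechars
from myapp.models import Entry  # hypothetical model with body/excerpt fields

@receiver(post_save, sender=Entry)
def update_excerpt(sender, instance, **kwargs):
    short = truncatechars(instance.body, 15)
    if instance.excerpt != short:
        # update() writes the column without re-firing this handler
        Entry.objects.filter(pk=instance.pk).update(excerpt=short)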
19,131,736 | 2013-10-02T07:15:00.000 | 0 | 0 | 0 | 0 | python,django,amazon-s3,amazon-ec2,zip | 19,131,908 | 1 | false | 1 | 0 | From my understanding you can't zip files directly on S3. You would have to download the files you want to zip, zip them up, then upload the zipped file. I've done something similar before and used s3cmd to keep a local synced copy of my S3 bucket, and since you're on an EC2 instance, network speed and latency will be pretty good. | 1 | 1 | 0 | I have an application wherein I need to zip folders hosted on S3. The zipping process will be triggered from the model save method. The Django app is running on an EC2 instance. Any thoughts or leads on the same?
I tried django_storages but haven't had a breakthrough | Zip a folder on S3 using Django | 0 | 0 | 0 | 657
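A minimal download-zip-upload sketch with the era-appropriate boto library; the bucket name and prefix are hypothetical:

import os
import zipfile
import boto
from boto.s3.key import Key

conn = boto.connect_s3()               # credentials from env/boto config
bucket = conn.get_bucket("my-bucket")  # hypothetical bucket name
archive = zipfile.ZipFile("/tmp/folder.zip", "w", zipfile.ZIP_DEFLATED)
for key in bucket.list(prefix="folder/"):
    if key.name.endswith("/"):
        continue  # skip the folder placeholder object
    local = "/tmp/" + os.path.basename(key.name)
    key.get_contents_to_filename(local)              # download
    archive.write(local, os.path.basename(key.name))
archive.close()
out = Key(bucket)
out.key = "folder.zip"
out.set_contents_from_filename("/tmp/folder.zip")    # upload the archive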
19,133,021 | 2013-10-02T08:38:00.000 | 0 | 0 | 1 | 0 | python,printing,dictionary | 19,133,176 | 3 | false | 0 | 0 | Dictionaries in general are unordered, but only in the sense that they are not ordered in the way you set them out to be. Python shuffles the entries about so that it can use a hash table to search for keys more quickly than it could through a list or tuple.
This means that the dictionary does have an order, but it is not immediately obvious and does not need to be understood by the user. | 1 | 1 | 0 | So when you're printing key-value pairs in a dictionary, do they get printed in any particular order? (Python) | When printing key-value pairs in a dictionary, do they get printed in any particular order? (Python) | 0 | 0 | 0 | 120 |
19,135,867 | 2013-10-02T11:28:00.000 | 3 | 0 | 1 | 0 | python,node.js,pip | 50,723,842 | 9 | false | 0 | 0 | I am using this small one-liner to install a package and save its version in requirements.txt:
pkg=package && pip install $pkg && echo $(pip freeze | grep -i $pkg) >> requirements.txt | 1 | 284 | 0 | In nodejs, I can do npm install package --save-dev to save the installed package into the package.
How do I achieve the same thing in Python package manager pip? I would like to save the package name and its version into, say, requirements.pip just after installing the package using something like pip install package --save-dev requirements.pip. | What is pip's equivalent of `npm install package --save-dev`? | 0.066568 | 0 | 0 | 108,948 |
19,138,535 | 2013-10-02T13:50:00.000 | 4 | 0 | 0 | 0 | python,django,session,cookies,subdomain | 19,138,823 | 1 | true | 1 | 0 | Just remove the SESSION_COOKIE_DOMAIN setting or set it to None. Django will automatically use the current domain. | 1 | 3 | 0 | My Django app handles multiple subdomains like "first.domain.com", "second.domain.com" etc.
My SESSION_COOKIE_DOMAIN is ".domain.com" to handle multiple subdomains.
So when I access my app from first.domain.com or second.domain.com, I can see the same session cookie from both subdomains.
So my question is: is it possible to set SESSION_COOKIE_DOMAIN to "first.domain.com" when it's being accessed from "first.domain.com" and to "second.domain.com" when it's being accessed from "second.domain.com"? | Django multiple sessions cookie domain for multiple subdomains | 1.2 | 0 | 0 | 2,292
19,142,497 | 2013-10-02T16:56:00.000 | 1 | 0 | 1 | 0 | python,database,orm | 19,142,716 | 4 | false | 0 | 0 | What about storing the objects in JSON?
You could write a function that serializes your object before storing it in the database.
If you have a specific identifier for your objects, I would suggest using it as an index so that you can easily retrieve them. | 1 | 1 | 0 | I have a program which calculates a set of plain interlinked objects (the objects consist of properties which basically are either String, int or a link to another object).
I would like to have the objects stored in a relational database for easy SQL querying (from another program).
Moreover, the objects (classes) tend to change and evolve. I would like to have a generic solution not requiring any changes in the 'persistence layer' whenever the classes evolve.
Do you see any way to do that? | Store Python objects in a database for easy quering | 0.049958 | 1 | 0 | 60 |
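A minimal sketch of the serialize-to-JSON idea, assuming each object carries an id attribute so links can be stored as references (both the attribute name and the layout are assumptions):

import json

def to_record(obj):
    # Flatten one plain object; object-valued attributes become id refs.
    rec = {}
    for name, value in vars(obj).items():
        if isinstance(value, (str, int)) or value is None:
            rec[name] = value
        else:
            rec[name] = {"ref": value.id}  # assumes linked objects have .id
    return rec

# json.dumps(to_record(obj)) can then be stored in a TEXT column,
# keyed by obj.id for easy retrieval.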
19,143,345 | 2013-10-02T17:43:00.000 | 9 | 0 | 0 | 0 | python,mysql,session,notifications,sqlalchemy | 54,821,257 | 6 | false | 0 | 0 | I just had this issue and the existing solutions didn't work for me for some reason. What did work was to call session.commit(). After calling that, the object had the updated values from the database. | 1 | 37 | 0 | I am dealing with a doubt about sqlalchemy and objects refreshing!
I am in a situation in which I have 2 sessions, and the same object has been queried in both sessions! For a particular reason I cannot close one of the sessions.
I have modified the object and committed the changes in session A, but in session B the attributes are still the initial ones, without modifications!
Shall I implement a notification system to communicate changes, or is there a built-in way to do this in SQLAlchemy? | About refreshing objects in sqlalchemy session | 1 | 1 | 0 | 54,352
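A minimal sketch of the options in session B, with session_b and obj standing for the second session and the stale object from the question; commit(), as the answer found, works because it expires the session's loaded state:

session_b.refresh(obj)   # immediately re-SELECTs this one object
session_b.expire_all()   # or: mark everything stale, reload on next access
session_b.commit()       # or: the answer's fix; commit expires loaded state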
19,143,658 | 2013-10-02T18:02:00.000 | 1 | 0 | 0 | 1 | python,background,raw-input | 19,143,751 | 2 | false | 0 | 0 | You probably want to run the script in the foreground, but then call os.fork() after the user has input the value. | 1 | 1 | 0 | I have a python script that has a raw input command, but I would like to run it in the background after the user inputs the raw_input part. The problem I have is if I try running the script in the background using &, the raw input pops up as a linux command and the python script doesn't recognize it.
Any tips? | Running a python script in background with raw_input command | 0.099668 | 0 | 0 | 1,513 |
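A minimal Unix-only sketch of the fork-after-input approach from the answer; do_long_work is a hypothetical function standing in for the script's real body:

import os
import sys

value = raw_input("Enter a value: ")  # still in the foreground here
pid = os.fork()                       # Unix-only
if pid > 0:
    sys.exit(0)                       # parent exits, shell prompt returns
do_long_work(value)                   # child carries on in the background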
19,144,319 | 2013-10-02T18:40:00.000 | 0 | 0 | 1 | 0 | python,hex,ordinal-indicator | 19,144,428 | 2 | false | 0 | 0 | You can also use map: map(lambda s: int(s.lower().replace('0x', '').replace('h', ''), 16), x.split(', ')) | 1 | 0 | 0 | I've got a string like x='0x08h, 0x0ah' in Python, and want to convert it to [8,10] (like unsigned ints). I could split and index it like [int(a[-3:-1],16) for a in x.split(', ')] but is there a better way to convert it to a list of ints?
Would it matter if I had y='080a'?
edit (for plus points :)): what (sane) string-based hexadecimal notations does Python support, and which not? | Converting "0x08h, 0x8ah" to [int,int] in Python | 0 | 0 | 0 | 285
19,145,803 | 2013-10-02T20:01:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,installation,enthought | 19,145,907 | 1 | true | 0 | 0 | You install a 64bit Operating System.
And you should install a 32bit Python version only if:
The libraries you intend to use do not support a 64bit Python version.
You intend to build your .py into a .exe
You're testing something specific related to the 32bit arch.
Otherwise, install a 64bit Python and a newer Python version (Python 3.X) if possible. | 1 | 0 | 0 | I am running 32 bit Python on a 64 bit system. Which Enthought Canopy distribution should I install? The 32 bit or the 64 bit?
In other words, do I match the operating system I am using or do I match the Python I am using? | Which enthought canopy distribution should I install? | 1.2 | 0 | 0 | 512 |
19,149,840 | 2013-10-03T02:32:00.000 | 0 | 1 | 0 | 0 | python,class,unit-testing,testing,aptana | 21,667,402 | 2 | false | 0 | 0 | Just right-click the test file and select Run As -> Python unit-test the first time; on subsequent runs just press Ctrl + F11 | 1 | 0 | 0 | How do you run a unit test on your class creation in Aptana Studio 3 on a Python class?
I am wondering if I am supposed to add something to my code or is there a function in aptana studio that does it for you. | Unit test in aptana studio 3 | 0 | 0 | 0 | 461 |
19,158,339 | 2013-10-03T11:44:00.000 | 4 | 0 | 1 | 0 | python,global-variables,side-effects | 49,894,327 | 4 | false | 0 | 0 | They are essential, the screen being a good example. However, in a multithreaded environment or with many developers involved, in practice the question often arises: who (erroneously) set or cleared it? Depending on the architecture, analysis can be costly and be required often. While reading the global var can be OK, writing to it must be controlled, for example by a single thread or a threadsafe class. Hence, global vars raise the fear of high development costs made possible by their consequences, which is why they are considered evil. Therefore, in general, it's good practice to keep the number of global vars low. | 1 | 156 | 0 | I'm trying to find out why the use of global is considered to be bad practice in python (and in programming in general). Can somebody explain? Links with more info would also be appreciated. | Why are global variables evil? | 0.197375 | 0 | 0 | 104,033
19,159,142 | 2013-10-03T12:22:00.000 | 0 | 0 | 0 | 0 | python,mysql,database,session,sqlalchemy | 49,755,122 | 2 | false | 0 | 0 | Had a similar problem; for some reason I had to commit both sessions, even the one that is only reading.
This might be a problem with my code though; I cannot use the same session, as the code will run on different machines. Also, the SQLAlchemy documentation says that each session should be used by one thread only, although 1 reading and 1 writing should not be a problem. | 1 | 2 | 0 | I'm currently using SQLAlchemy with two distinct session objects. In one object, I am inserting rows into a mysql database. In the other session I am querying that database for the max row id. However, the second session is not querying the latest from the database. If I query the database manually, I see the correct, higher max row id.
How can I force the second session to query the live database? | How to force SQLAlchemy to update rows | 0 | 1 | 0 | 1,599 |
19,161,501 | 2013-10-03T14:10:00.000 | 1 | 0 | 1 | 0 | python,json,unicode | 19,161,687 | 2 | true | 0 | 0 | It looks like your JSON doesn't have the right encoding, because neither \u00c5 nor \u0082 on its own yields the characters you're expecting in any encoding.
But you could maybe try to encode this value in UTF-8 or UTF-16 | 1 | 5 | 0 | One of the values in a JSON file I'm parsing is Wroc\u00c5\u0082aw. How can I turn this string into a unicode object that yields "Wrocław" (which is the correct decoding in this case)? | Reading JSON: what encoding is "\u00c5\u0082"? How do I get it to a unicode object? | 1.2 | 0 | 1 | 10,963
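For this specific mojibake pattern (UTF-8 bytes that were read as Latin-1 code points), a commonly used round trip, shown here as a sketch rather than as part of the original answer:

s = u"Wroc\u00c5\u0082aw"
fixed = s.encode("latin-1").decode("utf-8")  # bytes C5 82 are UTF-8 for U+0142
print(fixed)  # -> Wrocław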
19,163,563 | 2013-10-03T15:47:00.000 | 1 | 0 | 1 | 0 | python,matplotlib,canopy | 19,165,436 | 1 | false | 0 | 0 | You can restart the Python shell in Canopy by selecting Run -> Restart Kernel. | 1 | 0 | 0 | Unlike other editors (I use TextWrangler and TextMate on the Mac, and Spyder on the PC), Enthought's Canopy runs Python programs in an interactive Python shell. Most of the time this is nice, but sometimes I would like to run the program in a fresh Python shell. For example, I am writing a script to collect frames from a high-speed camera. If the script does not run all the way to the end (which happens a lot), then the next time I run the script, it crashes the Python shell if I am using Canopy (no problems with other editors or running straight from the command line).
So, is there a way that I can have the program run in a new python shell each time, or maybe reset the python shell before running?
Thanks! | Run python script in Canopy in new python shell | 0.197375 | 0 | 0 | 654 |
19,164,797 | 2013-10-03T16:48:00.000 | 0 | 0 | 1 | 0 | python | 19,165,091 | 1 | false | 0 | 0 | No, not directly: SPOJ won't tell you - it won't even tell you which non-zero exit code you got :-(
The slow way around this is to submit your program many times, changing it each time to exit at a later line. For example, call sys.exit() after your first line. If you don't get an NZEC complaint, you know your first line wasn't the cause. Then move sys.exit() down a line and try again. Etc. It can be a real PITA.
Note: you can do a form of binary search this way to find the offending line much faster. | 1 | 2 | 0 | I've been trying to submit a solution in Python to a problem in Spoj, but I keep getting an NZEC runtime error.
Is it possible to find out which line the error is occurring at? | How to find line number of error in Spoj? | 0 | 0 | 0 | 74 |
19,167,257 | 2013-10-03T19:07:00.000 | 0 | 0 | 1 | 1 | ipython,ipython-parallel | 19,647,089 | 1 | false | 0 | 0 | My solution to this was to use the ipengine to start a new subprocess which completes the desired operations. This subprocess has its own memory. Not ideal, but provides the desired functionality. | 1 | 2 | 0 | My workflow is: start ipcontroller/ipengines, then run 'python test_script.py' several times with different parameters. This script includes a map_async call. The ipengines don't recognize changes to the code between calls to the script, and static class variables are not reset to their defaults. It seems like a magic %reset call would do the trick, but attempting to execute this command on the ipengines does not seem to do anything. | Is it possible to force ipengines to completely reset all local variables and imports? | 0 | 0 | 0 | 182 |
19,168,743 | 2013-10-03T20:32:00.000 | 4 | 0 | 1 | 0 | python,excel,python-2.7 | 19,168,779 | 1 | false | 0 | 0 | Save the Excel sheet in .csv format. Then use the Python csv module to read it. | 1 | 0 | 0 | I need help on a project of mine. I have an excel worksheet that has two columns of values that I need to convert into lists in python. Is there a way to read and copy excel columns into python? | Converting a list of numbers in Excel to Python | 0.664037 | 0 | 0 | 772 |
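A minimal sketch of the CSV route from the answer, assuming a hypothetical data.csv with two numeric columns:

import csv

col_a, col_b = [], []
with open("data.csv", "rb") as f:   # "rb" for the Python 2 csv module
    for row in csv.reader(f):
        col_a.append(float(row[0]))
        col_b.append(float(row[1]))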
19,171,483 | 2013-10-04T00:58:00.000 | 5 | 0 | 0 | 0 | python,web-crawler,scrapy | 19,171,556 | 2 | true | 1 | 0 | +1 vote for Scrapy. For the past several weeks I have been writing crawlers for massive car forums, and Scrapy is absolutely incredible, fast, and reliable. | 1 | 0 | 0 | I need to build a web crawler that makes requests and brings back the responses complete and quickly, if possible.
I come from the Java language. I used two "frameworks" and neither fully satisfied my intent.
Jsoup had fast requests/responses but returned incomplete data when the page had a lot of information. Apache HttpClient was exactly the opposite: reliable data but very slow.
I've looked over some Python modules and I'm testing Scrapy. In my searches, I was unable to conclude whether it is the fastest and brings the data consistently, or whether there is something better, even if more verbose or difficult.
Second, is Python a good language for this purpose?
Thank you in advance. | Python Crawling - Requests faster | 1.2 | 0 | 1 | 724 |
19,171,822 | 2013-10-04T01:40:00.000 | 1 | 0 | 0 | 0 | python,numpy,matrix,pandas | 19,389,797 | 1 | false | 0 | 0 | After some research I found that both pandas and SciPy have structures to represent sparse matrices efficiently in memory. But neither of them has out-of-the-box support for computing similarity between vectors, like cosine, adjusted cosine, Euclidean, etc. SciPy supports this on dense matrices only. For sparse matrices, SciPy supports dot products and other basic linear algebra operations. | 1 | 2 | 1 | I have to compute massive similarity computations between vectors in a sparse matrix. What is currently the best tool, scipy-sparse or pandas, for this task? | Scipy or pandas for sparse matrix computations? | 0.197375 | 0 | 0 | 839
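A minimal sketch of cosine similarity built from the sparse operations SciPy does support (dot products plus per-row norms); the tiny matrix is illustrative:

import numpy as np
from scipy.sparse import csr_matrix

X = csr_matrix(np.array([[1.0, 0.0, 2.0],
                         [0.0, 3.0, 4.0]]))
norms = np.sqrt(X.multiply(X).sum(axis=1))       # per-row L2 norms
sims = X.dot(X.T).toarray() / (norms * norms.T)  # cosine similarity matrix
print(sims)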
19,172,175 | 2013-10-04T02:21:00.000 | 2 | 0 | 1 | 0 | python | 19,172,185 | 4 | false | 0 | 0 | They're the same. The only time it ever matters is that you have to escape the delimiter character: "\"" vs '"'.
Personally, I usually use ' for strings that aren't "user-visible" and " for strings that are, but I'm not completely consistent with that and I don't think it's common practice. | 2 | 1 | 0 | I have been always mixing these two notations, regarding them both as a string in Python.
What are the differences between them?
Under what circumstances can we only use one of them? | Python - difference between 'a' and "a"? | 0.099668 | 0 | 0 | 296 |
19,172,175 | 2013-10-04T02:21:00.000 | 1 | 0 | 1 | 0 | python | 19,172,200 | 4 | false | 0 | 0 | They are the same, though I prefer to use 'single quotes' as they're easier to read | 2 | 1 | 0 | I have been always mixing these two notations, regarding them both as a string in Python.
What are the differences between them?
Under what circumstances can we only use one of them? | Python - difference between 'a' and "a"? | 0.049958 | 0 | 0 | 296 |
19,172,262 | 2013-10-04T02:31:00.000 | 0 | 0 | 0 | 0 | python,pyqt,pyqt4,qwidget,qspinbox | 27,464,660 | 2 | false | 0 | 1 | To do what you want, just set the "prefix" attribute of the spinbox widget to "000". This will then pad values to be 0001, 0002, etc. | 1 | 0 | 0 | My objective is to use a QSpinBox to display the numbers from 0 to 9999, with an increment of 1, in 4-digit format.
I managed to set the maximum value to 9999 by using the setMaximum command, but I can't seem to find a way to display the values in 4-digit format (e.g. 0000, 0001, 0002). Whenever I set the value to 0000 using setValue, the SpinBox displays 0.
How do I display the numbers in 4-digit format (i.e. adding leading zeros as required) in QSpinBox? | PyQt4 QSpinBox value format | 0 | 0 | 0 | 2,829
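An alternative to the prefix trick, reimplementing QSpinBox.textFromValue so every value is rendered with true zero padding; this is a sketch of a different technique, not from the answer above:

from PyQt4 import QtGui

class PaddedSpinBox(QtGui.QSpinBox):
    def textFromValue(self, value):
        return "%04d" % value  # 0 -> 0000, 42 -> 0042, 9999 -> 9999

app = QtGui.QApplication([])
spin = PaddedSpinBox()
spin.setRange(0, 9999)
spin.show()
app.exec_()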
19,173,616 | 2013-10-04T05:03:00.000 | 4 | 0 | 1 | 1 | python | 19,173,661 | 3 | false | 0 | 0 | Well-written pure Python programs (just .py files) are extraordinarily portable across all platforms. If you're using some way of packaging your program in a Windows executable (.exe file), then you have worlds of other possible problems.
There are cases where a 64-bit program won't work on a 32-bit system, such as if your program uses massive data structures and you simply run out of address space on a 32-bit system. But, barring things like that, you should be fine.
If you want more specifics, I'm afraid you'll need to be more specific ;-) | 2 | 0 | 0 | I have developed a Python application on 64-bit Windows 8 (the non-Metro version, which looks like the Windows 7 interface). I want to distribute it to all versions of 64-bit Windows, such as Windows XP, Windows 7, etc. Is it possible for a program developed with Python to do that? Also, can the software run on a 32-bit Windows OS as well?
19,173,616 | 2013-10-04T05:03:00.000 | 0 | 0 | 1 | 1 | python | 19,173,646 | 3 | false | 0 | 0 | If you have not used any 64-bit-specific items, then your code should run fine from source on all versions of Windows, with a minimal installation of Python and the dependencies. | 2 | 0 | 0 | I have developed a Python application on 64-bit Windows 8 (the non-Metro version, which looks like the Windows 7 interface). I want to distribute it to all versions of 64-bit Windows, such as Windows XP, Windows 7, etc. Is it possible for a program developed with Python to do that? Also, can the software run on a 32-bit Windows OS as well?
19,179,621 | 2013-10-04T11:03:00.000 | 2 | 0 | 0 | 0 | python,django,django-cms,six-python | 19,180,957 | 2 | true | 1 | 0 | You should have Django version >=1.4.5. It worked for me. | 1 | 3 | 0 | I am using Django 1.3 and django-cms 2.2, and when I run it I get an error as follows:
django.template.base.TemplateSyntaxError: 'cms_tags' is not a valid tag library: ImportError raised loading cms.templatetags.cms_tags: cannot import name six | Cannot import name six Django-CMS | 1.2 | 0 | 0 | 5,466 |
19,181,895 | 2013-10-04T13:02:00.000 | 0 | 1 | 1 | 0 | python,eclipse,command-line,pydev | 19,182,148 | 2 | false | 0 | 0 | Open up a terminal and type python; it should load the Python shell. Then type
import numpy
I have used PyDev and find it easier just to use the terminal to run small commands | 1 | 0 | 0 | I just set up PyDev with Eclipse, but I'm a little confused. I thought that in the console I would be able to type certain commands such as print("Hello World") and directly observe the result without having to incorporate that in any sort of file.
The reason I would like this is because it would allow me to test functions real quick before using them in scripts, and I'm also following a tutorial which tells me to check if NumPy is installed by typing import NumPy in the command line.
Thanks! | How can I test commands in Python? (Eclipse/PyDev) | 0 | 0 | 0 | 121 |
19,181,984 | 2013-10-04T13:06:00.000 | 4 | 0 | 1 | 0 | c++,python,c,dll,cython | 19,182,661 | 1 | false | 0 | 1 | First of all, let me dispel a few misconceptions that you seem to have.
Calling a library from another program will speed up your library.
No, no, no, no, no. This makes about as much sense as saying "driving a car at a set speed is slower than having an F1 racer drive a car at the same speed". It just makes no sense. When Python loads your library, it loads and processes it similarly to how the kernel loads and processes it (in fact, the kernel does that in Python's case too). In fact, this "double loading" (which wasn't the original design for dynamic libraries) can slow down your library. I should emphasise that this is a tiny difference, and should not concern the ordinary programmer.
Cython "wraps" Python code into C
It doesn't. It compiles the python code into C, which is then compiled into a dynamic library for Python to load later. This may optimise your Python code somewhat, and give you the ability to interface with atomic C data types, with Python's magic sauce on top. While this is pretty cool, it doesn't give your code any "magical" abilities.
I would also like to add that some tests have proven that Java is (drum roll) actually faster than C, C++, Python and other languages because the JVM is very optimised. That doesn't mean you should use Java (because it has other problems), but it should give perspective. | 1 | 14 | 0 | I am trying to use Cython to code my project.
My plan is to write .dlls in C++ and call them from Python via Cython, so I can have the high computational performance of C++ while keeping the simplicity of development of Python.
As I go further, I am a bit confused. As I understand it, Cython wraps Python code into C. The performance is improved since C has better calculation performance. Am I correct about this?
If I am right above, then is it necessary to write a .dll in C++ and call it from Python in order to improve the performance?
If I write Python code and wrap it into C, then call it from Python, does it perform better than calling a .dll written in C++? | Cython VS C++ Performance Comparison? | 0.664037 | 0 | 0 | 17,220
19,183,172 | 2013-10-04T13:59:00.000 | 1 | 1 | 0 | 0 | python,io,migration,fortran,legacy | 19,229,356 | 2 | false | 0 | 0 | In general, unless your particular compiler and available toolset does especially counter-productive things, one programming language is able to do IO as fast as another. In many programming languages, a naive approach may be sub-optimal - like all performance-related aspects of programming, this is something that is solved by appropriate design, and appropriate use of the available tools (such as parallel processing, use of buffered, threaded IO, for example).
Python isn't especially bad at IO, offers buffered IO and threading capabilities, and is easy to extend with C (and therefore probably not that hard to interact with Fortran). Python is likely to be a completely reasonable technology to incrementally replace parts of your codebase - indeed, if you can first make IO fast in python, you can probably compile an extension which ultimately calls your Fortran code. | 1 | 9 | 0 | Strange question this, I know.
I have a code base in fortran 77 that for the most part parses large non-binary files, does some manipulation to these files and then does a lot of file writing. The code base does not do any matrix manipulation or number crunching. This legacy code is in fortran because a lot of other code bases do require serious number crunching. This was originally just written in fortran because there was knowledge of fortran.
My proposal is to re-write this entirely in python (most likely 3.3). Maintenance of the fortran code is just as difficult as you would expect, and the tests are as poor as you can imagine. Obviously python would help a lot here.
Are there any performance hits (or even gains) in terms of file handling speed in Python? Currently the majority of the run time of this system is spent reading/writing the files.
Thanks in advance | File handling speed of python 3.3 compared to fortran 77 | 0.099668 | 0 | 0 | 515 |
19,184,975 | 2013-10-04T15:22:00.000 | 4 | 0 | 0 | 0 | python,random,numpy,scipy,distribution | 19,185,124 | 1 | false | 0 | 0 | How about numpy.convolve? It takes two arrays, rather than two functions, which seems ideal for your use. I'll also mention the ECDF function in the statsmodels package in case you really want to turn your observations into (step) functions. | 1 | 3 | 1 | I have a continuous random variable given by its density distribution function or by cumulative probability distribution function.
The distribution functions are not analytical. They are given numerically (for example as a list of (x,y) values).
One of the things that I would like to do with these distributions is to find a convolution of two of them (to have a distribution of a sum of two random properties).
I do not want to write my own function for that if there is already something standard and tested. Does anybody know if it is the case? | Is there a standard way to work with numerical probability density functions in Python? | 0.664037 | 0 | 0 | 287 |
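A minimal sketch of the numpy.convolve route, assuming both densities are sampled on the same equally spaced grid; the two densities here are illustrative and not normalised:

import numpy as np

dx = 0.01
x = np.arange(0.0, 10.0, dx)
f = np.exp(-x)                # sampled pdf of X (illustrative)
g = np.exp(-(x - 5.0) ** 2)   # sampled pdf of Y (illustrative)
h = np.convolve(f, g) * dx    # sampled pdf of X + Y
x_h = np.arange(len(h)) * dx  # grid on which h lives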
19,185,466 | 2013-10-04T15:47:00.000 | 0 | 0 | 1 | 1 | python,debugging | 19,185,691 | 1 | false | 0 | 0 | IPython supports embedding a “kernel” which can then connect to an external front-end, such as a Qt one (qtconsole).
For working with another tty, I’d suggest connecting the debugger with another tty either via a pair of pipes or a pty (pseudo terminal), although you’d probably have to write the “other half” to display in the terminal, whereas the qtconsole is already ready to use as-is.
You install the Debian package ipython-qtconsole (or the Py3k version ipython3-qtconsole), then just run “ipython qtconsole” on the command line to get a GUI window containing the debugger.
Embedding is also possible: you can modify your program to call the IPython "kernel" at some point, which is like setting a breakpoint. | 1 | 2 | 0 | I am writing Python code using the curses library under Linux. Is there any debugger that does not share the same terminal, so I can debug while the code is running?
EDIT:
I tried WinPDB, but it works only with Python 2.7, and I am using 3.3 | How to debug Python curses code using two terminal windows | 0 | 0 | 0 | 459
19,187,759 | 2013-10-04T17:57:00.000 | 3 | 0 | 1 | 0 | python-3.x,python-idle,backspace | 19,188,881 | 2 | true | 0 | 0 | Edit:
Apparently the carriage return \r and the backspace \b won't actually work within Idle because it uses a text control that doesn't render return/backspace properly.
You might be able to write some sort of patch for Idle, but it might be more trouble than it's worth (unless you really like Idle) | 1 | 1 | 0 | There are several folks on here looking for backspace answers in Python. None of the questions I have searched have answered this for me, so here goes:
The Simple Goal: be able to print out a status string on one line, where the next status overwrites the first. Similar to a % complete status, where instead of scrolling a long line of 1%\n, 2%, ... etc. we just overwrite the first line with the newest value.
Now the question. When I type this in idle: print("a\bc") I get this as output: ac with what looks like an odd box with a circle between the 'a' and 'c'. The same thing happens when using sys.stdout.write().
Is this an Idle editor setting/issue? Does anyone even know if what I am trying is possible in the Idle Shell?
Thanks for any insight.
PS: Running Python 3.3.2 Idle on Windows 7, 64-bit system.
EDIT: Copying the output in Notepad++ is revealing that Python is printing out a 'backspace' character, and not actually going back a space. Perhaps what I am trying to accomplish is not possible? | Implementing a backspace in Python 3.3.2 Shell using Idle | 1.2 | 0 | 0 | 3,365 |
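For reference, the status-line idiom that works in a real console (cmd.exe, Terminal) even though the IDLE shell renders \r and \b as literal characters; a sketch:

import sys
import time

for pct in range(101):
    sys.stdout.write("\r%3d%% complete" % pct)  # \r returns to column 0
    sys.stdout.flush()
    time.sleep(0.05)
sys.stdout.write("\n")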
19,191,509 | 2013-10-04T22:04:00.000 | 0 | 0 | 1 | 0 | python,function,project,output | 19,191,612 | 3 | false | 0 | 0 | You will need an algorithm that separates your input string by whitespace. Then you would take the last of those separated strings and add it to your output string. You will have to add a comma if more names follow. After that, take the other strings, starting with the first one, and check if they are already in the form "[A-Z].". If not, transform them to that form; otherwise just add them to your output. That's it. I could be more precise, but you asked not to be :) | 2 | 0 | 0 | This question is for a school project so don't give my exact answer XD
But please tell me how I would start it. I have been trying it for a couple of hours but I just can't get it. Here is the question:
Create a function in python that accepts names in standard form and prints them in the form:
E.g. INPUT to OUTPUT
Santa Claus to Claus, S.
Michael J. Fox to Fox, M. J.
Madonna to Madonna
William Henry Richard Charles Windsor to Windsor, W. H. R. C. | I need help on creating a name changing program | 0 | 0 | 0 | 44 |
19,191,509 | 2013-10-04T22:04:00.000 | 0 | 0 | 1 | 0 | python,function,project,output | 19,191,615 | 3 | false | 0 | 0 | OK, then use split() to change the string into a list, then use len() to count the elements. Use a for loop and create a new string. | 2 | 0 | 0 | This question is for a school project so don't give my exact answer XD
But please tell me how I would start it. I have been trying it for a couple of hours but I just can't get it. Here is the question:
Create a function in python that accepts names in standard form and prints them in the form:
E.g. INPUT to OUTPUT
Santa Claus to Claus, S.
Michael J. Fox to Fox, M. J.
Madonna to Madonna
William Henry Richard Charles Windsor to Windsor, W. H. R. C. | I need help on creating a name changing program | 0 | 0 | 0 | 44 |
19,194,261 | 2013-10-05T05:13:00.000 | 2 | 0 | 1 | 0 | python,python-imaging-library | 19,200,152 | 1 | true | 0 | 0 | Use numpy. Put the image data in 2 numpy float arrays, then just do the difference between the two arrays. | 1 | 0 | 0 | In python I am computing the difference between two images uses ImageChops.difference is there a faster way to do this computation? Since it's relatively slow on 720p images, I let it run for about 6 loops, and it took about 30 seconds (using the line_profiler) for analysis. | Algorithms Python - Difference Between Two Images | 1.2 | 0 | 0 | 284 |
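A minimal sketch of the numpy approach from the answer; the file names are hypothetical, and the two images are assumed to have the same size:

import numpy as np
from PIL import Image

a = np.asarray(Image.open("frame1.png"), dtype=float)
b = np.asarray(Image.open("frame2.png"), dtype=float)
diff = np.abs(a - b)    # per-pixel absolute difference
print(diff.mean())      # one scalar measure of how different they are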
19,196,105 | 2013-10-05T09:24:00.000 | 3 | 1 | 0 | 1 | python,port,netstat | 20,727,394 | 13 | false | 0 | 0 | The netstat tool simply parses some /proc files like /proc/net/tcp and combines their contents with the contents of other files. Yep, it's highly platform-specific, but for a Linux-only solution you can stick with it. The Linux kernel documentation describes these files in detail, so you can find out there how to read them.
Please also notice that your question is ambiguous, because "port" could also mean a serial port (/dev/ttyS* and analogs), a parallel port, etc.; I've reused the understanding from another answer that this is a network port, but I'd ask you to formulate your questions more accurately. | 1 | 95 | 0 | How can I know if a certain port is open/closed on linux ubuntu, not a remote system, using python?
How can I list these open ports in python?
Netstat:
Is there a way to integrate netstat output with python? | How to check if a network port is open? | 0.046121 | 0 | 0 | 151,151 |
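Beyond parsing /proc, a simple local check is to try connecting; a sketch using connect_ex, which returns 0 when something is listening on the port:

import socket

def port_open(port, host="127.0.0.1"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)
    result = s.connect_ex((host, port))
    s.close()
    return result == 0

print(port_open(8000))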
19,198,166 | 2013-10-05T13:08:00.000 | 0 | 0 | 1 | 0 | python,module,package | 56,387,348 | 4 | false | 0 | 0 | I will try to answer this without using terms the earliest of beginners would use, and explain why or how the terms are used differently, along with the most "official" and/or most understood or uniform use of the terms.
It can be confusing, and I confused myself thinking too hard about it, so don't think too much about it. Anyways, context matters, greatly.
Library - Most often this will refer to the general library or another collection created with a similar format and use. The General Library is the sum of 'standard', popular and widely used modules, which can be thought of as single-file tools: shortcuts making things possible or faster. The general library is an option most people enable when installing Python. Because it has this name, "Python General Library", it is often used with a similar structure and ideas. Which is simply to have a bunch of modules, maybe even packages, grouped together, usually in a list. The list is usually for downloading them. Generally it is just related files, with similar interests. That is the easiest way to describe it.
Module - A module refers to a file. The file has script 'in it', and the name of the file is the name of the module; Python files end with .py. All the file contains is code that, run together, makes something happen, by using functions, strings, etc.
The main modules you probably see most often are popular because they are special modules that can get info from other files/modules.
It is confusing because the names of the file and the module are equal: just drop the .py. Really it's just code you can use as a shortcut, written by somebody to make something easier or possible.
Package - This is a term used generally, although context makes a difference. The most common use from my experience is multiple modules (or files) that are grouped together. Why they are grouped together can be for a few reasons; that is when context matters.
These are the ways I have noticed the term package(s) used. They are a group of downloaded, created and/or stored modules. All of those can be true, or only one, but really it is just a file that references other files, that need to be in the correct structure or format, and that entire sum is the package itself, installed or maybe included in the Python general library. A package can contain modules (.py files) because they depend on each other and sometimes may not work correctly, or at all, otherwise. There is always a common goal of every part (module/file) of a package, and the total sum of all of the parts is the package itself.
Most often in Python, packages are modules, because the package name is the name of the module that is used to connect all the pieces. So you can import a package because it is a module, which also allows it to call upon other modules that are not packages, because they only perform a certain function or task and don't involve other files. Packages have a goal, and each module works together to achieve that final goal.
Most confusion comes from a simple file name or prefix to a file, used as the module name and then again as the package name.
Remember, modules and packages can be installed. Library is usually a generic term for listing, or formatting, a group of modules and packages. Much like Python's general library. A hierarchy would not work: APIs do not really belong there, and if you placed them they could be anywhere and everywhere involving script, module, and packages; the word library being such a general word, easily applied to many things, also makes an API able to sit above or below that. Some modules can be based off of other code, and that is the only time I think it would relate to a pure Python-related discussion. | 1 | 119 | 0 | I have a background in Java and I am new to Python. I want to make sure I understand Python terminology correctly before I go ahead.
My understanding of a module is: a script which can be imported by many scripts, to make reading easier. Just like in Java you have a class, and that class can be imported by many other classes.
My understanding of a library is: A library contains many modules which are separated by its use.
My question is: Are libraries like packages, where you have a package e.g. called food, then:
chocolate.py
sweets.py
biscuts.py
are contained in the food package?
Or do libraries use packages, so if we had another package drink:
milk.py
juice.py
contained in the package. The library contains two packages?
Also, an application programming interface (API) usually contains a set of libraries; is this at the top of the hierarchy:
API
Library
Package
Module
Script
So an API will consist of all from 2-5? | Whats the difference between a module and a library in Python? | 0.049958 | 0 | 1 | 108,126
19,203,395 | 2013-10-05T22:29:00.000 | 0 | 0 | 0 | 0 | python,django | 19,231,096 | 1 | true | 1 | 0 | If your user is already logged in with a username and password, you simply need to allow them to follow the same steps they would when signing up with a social account, and that social account will be automatically associated with their Django account | 1 | 1 | 0 | I'm currently trying to add an 'associate google account' button to my django 1.4.8 project. I've never worked with python-social-auth before, and I'm a bit confused about only associating accounts --as opposed to authenticating against--, and how to use credentials for accessing Google Drive services.
Thanks!
A. | Python social auth account association only | 1.2 | 0 | 0 | 325 |
19,203,678 | 2013-10-05T23:11:00.000 | 5 | 0 | 1 | 0 | python,function,optimization,styles | 19,203,897 | 1 | false | 0 | 0 | I'm sure a lot of people have strong opinions about this, but for new programmers a good rule of thumb is to try and keep it below 10-20 lines. A better rule of thumb is that a function should do one thing and do that one thing well. If it becomes really long, it is likely doing more than one thing and can be broken down into several functions. | 1 | 1 | 0 | Perhaps this is not the correct place to ask this question, and part of me thinks that there is no real answer to it, but I'm interested to see what experienced Python users have to say on the subject:
For maximum readability, concision, and utility, what is a range for an optimal length of a Python function? (Assuming that this function will be used in combination with other functions to do something useful.)
I recognize that this is incredibly dependent on the task at hand, but as a Sophomore Comp. Sci. major, one of the most consistent instructions from professors is to write programs that are comprised of short functions so as to break them up into "simple", discrete tasks.
I've done a bit of digging, including through the Python style guide, but I haven't come up with a good answer. If there are any experienced Python users that would like to weigh in on this subject, I would appreciate the insight. Thanks. | Optimal Length of a Python Function (Style) | 0.761594 | 0 | 0 | 2,575
19,205,614 | 2013-10-06T05:05:00.000 | 3 | 0 | 1 | 0 | for-loop,python-3.x | 19,205,666 | 3 | false | 0 | 0 | You could use range(startValue, startValue + (increment * numberOfValues), increment). | 2 | 0 | 0 | I'm a beginner to Python and I'm having some trouble with this. I have to make a for loop out of this problem. Can anyone explain how I would go about this?
nextNValues (startValue, increment, numberOfValues)
This function creates a string of numberOfValues values, starting with startValue and
counting by increment. For example, nextNValues (5, 4, 3) would generate a string of
(not including the comments):
5 - the start value
9 - counting by 4, the increment
13 - stopping after 3 lines of output, the numberOfValues | Basic for loop in Python 3 | 0.197375 | 0 | 0 | 142 |
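A minimal sketch of one way to write the loop the answer's range() hint points at; this is one possible solution, not the only one:

def nextNValues(startValue, increment, numberOfValues):
    output = ""
    for value in range(startValue,
                       startValue + increment * numberOfValues,
                       increment):
        output += str(value) + "\n"  # one value per line
    return output

print(nextNValues(5, 4, 3))  # 5, 9, 13 on separate lines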