Dataset schema: Available Count (int64, 1 to 31), AnswerCount (int64, 1 to 35), GUI and Desktop Applications (int64, 0 to 1), Users Score (int64, -17 to 588), Q_Score (int64, 0 to 6.79k), Python Basics and Environment (int64, 0 to 1), Score (float64, -1 to 1.2), Networking and APIs (int64, 0 to 1), Question (string, lengths 15 to 7.24k), Database and SQL (int64, 0 to 1), Tags (string, lengths 6 to 76), CreationDate (string, length 23), System Administration and DevOps (int64, 0 to 1), Q_Id (int64, 469 to 38.2M), Answer (string, lengths 15 to 7k), Data Science and Machine Learning (int64, 0 to 1), ViewCount (int64, 13 to 1.88M), is_accepted (bool, 2 classes), Web Development (int64, 0 to 1), Other (int64, 1 to 1), Title (string, lengths 15 to 142), A_Id (int64, 518 to 72.2M)

Available Count | AnswerCount | GUI and Desktop Applications | Users Score | Q_Score | Python Basics and Environment | Score | Networking and APIs | Question | Database and SQL | Tags | CreationDate | System Administration and DevOps | Q_Id | Answer | Data Science and Machine Learning | ViewCount | is_accepted | Web Development | Other | Title | A_Id
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I have a directory containing N subdirectories, each of which contains a setup.py file. I want to write a Python script that iterates through all subdirectories, issues python setup.py bdist_egg --dist-dir=somedir, and finally removes build and *.egg-info from each subdirectory, and I have two questions:
Can I invoke bdist_egg without using os.system? Some python interface would be nicer.
Can I tell bdist_egg not to generate build and *.egg-info or is there any complementary command for setup.py that cleans this for me? | 0 | python,setuptools,distutils,setup.py,distribute | 2013-06-06T15:26:00.000 | 1 | 16,966,095 | I would use subprocess. I believe setup.py command line arguments should be your interface.
Check setup.py clean --all | 0 | 97 | false | 0 | 1 | automated build of python eggs | 16,966,255 |
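For illustration, a minimal sketch of that subprocess approach; the ROOT directory name is an assumption, not anything from the question:

```python
import os
import shutil
import subprocess

ROOT = "projects"                       # directory holding the N subdirectories (assumed name)
DIST_DIR = os.path.abspath("somedir")   # where the built eggs should end up

for name in sorted(os.listdir(ROOT)):
    pkg_dir = os.path.join(ROOT, name)
    if not os.path.isfile(os.path.join(pkg_dir, "setup.py")):
        continue  # skip anything that is not a package directory
    # Build the egg, placing it in DIST_DIR.
    subprocess.check_call(
        ["python", "setup.py", "bdist_egg", "--dist-dir", DIST_DIR],
        cwd=pkg_dir)
    # Let distutils clean its own temporary files ...
    subprocess.check_call(["python", "setup.py", "clean", "--all"], cwd=pkg_dir)
    # ... then remove the leftovers that clean --all does not touch.
    for leftover in os.listdir(pkg_dir):
        if leftover == "build" or leftover.endswith(".egg-info"):
            shutil.rmtree(os.path.join(pkg_dir, leftover), ignore_errors=True)
```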
2 | 2 | 0 | 0 | 0 | 1 | 1.2 | 0 | I have a directory containing N subdirectories, each of which contains a setup.py file. I want to write a Python script that iterates through all subdirectories, issues python setup.py bdist_egg --dist-dir=somedir, and finally removes build and *.egg-info from each subdirectory, and I have two questions:
Can I invoke bdist_egg without using os.system? Some python interface would be nicer.
Can I tell bdist_egg not to generate build and *.egg-info or is there any complementary command for setup.py that cleans this for me? | 0 | python,setuptools,distutils,setup.py,distribute | 2013-06-06T15:26:00.000 | 1 | 16,966,095 | It turned out that Fabric is the right way! | 0 | 97 | true | 0 | 1 | automated build of python eggs | 17,346,341 |
1 | 4 | 0 | 9 | 28 | 1 | 1 | 0 | When I create a unittest.TestCase, I can define a setUp() function that will run before every test in that test case. Is it possible to skip the setUp() for a single specific test?
It's possible that wanting to skip setUp() for a given test is not a good practice. I'm fairly new to unit testing and any suggestion regarding the subject is welcome. | 0 | python,unit-testing,testing,python-unittest | 2013-06-07T19:48:00.000 | 0 | 16,991,901 | In setUp(), self._testMethodName contains the name of the test that will be executed. It's likely better to put the test into a different class or something, of course, but it's in there. | 0 | 10,928 | false | 0 | 1 | Is it possible to skip setUp() for a specific test in python's unittest? | 24,315,867 |
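For illustration, a small sketch of how that attribute can be used (the test names are made up):

```python
import unittest

class MyTest(unittest.TestCase):
    def setUp(self):
        # self._testMethodName holds the name of the test about to run,
        # so the expensive fixture can be skipped for a specific test.
        if self._testMethodName == "test_without_fixture":
            return
        self.resource = "expensive fixture"  # stands in for real setup work

    def test_with_fixture(self):
        self.assertEqual(self.resource, "expensive fixture")

    def test_without_fixture(self):
        # setUp returned early, so the fixture was never created.
        self.assertFalse(hasattr(self, "resource"))

if __name__ == "__main__":
    unittest.main()
```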
1 | 1 | 0 | 4 | 4 | 1 | 1.2 | 0 | I'm having an issue which seems to be related to the way Python's and PyV8's garbage collectors interact. I've temporarily solved the issue by disabling Python's garbage collection, and calling gc.collect and PyV8.JSEngine.collect together every few seconds when no JavaScript is being run. However, this seems like a pretty hackish fix... in particular, I'm worried PyV8 might decide to collect at an inopportune time and cause problems anyway. Is there any way to disable PyV8's automatic garbage collection for good, at least until I have a few days to spend figuring out exactly what is going on and thus actually fix the issue?
In V8's source, edit src/heap.cc, and put a return statement in the beginning of Heap::CollectGarbage.
Other than that, it's not possible (AFAICT): V8 will always invoke garbage collection when it's about to run out of memory. There is no (configurable) way to not have it do that. | 0 | 482 | true | 1 | 1 | PyV8 disable automatic garbage collection | 17,597,335 |
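For reference, the questioner's own stopgap could be wrapped up roughly like this sketch; it assumes the PyV8 API exactly as named in the question:

```python
import gc
import PyV8  # assumed available, as in the question

gc.disable()  # stop Python's automatic collector for good

def collect_both():
    # Only call this in a quiet moment, when no JavaScript is executing.
    gc.collect()              # collect the Python heap
    PyV8.JSEngine.collect()   # collect the V8 heap, as named in the question
```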
1 | 2 | 0 | 0 | 10 | 1 | 0 | 0 | After installation, I would like to make soft-links to some of the configuration & data files created by installation.
How can I determine the location of a new package's files installed from within the package's setup.py?
I initially hard-coded the path "/usr/local/lib/python2.7/dist-packages", but that broke when I tried using a virtual environment. (Created by virtualenv.)
I tried distutils.sysconfig.get_python_lib(), and that works inside the virtualenv. When installed on the real system, however, it returns "/usr/lib/python2.7/dist-packages" (Note the "local" directory isn't present.)
I've also tried site.getsitepackages():
Running a Python shell from the base environment:
>>> import site
>>> site.getusersitepackages()
'/home/sarah/.local/lib/python2.7/site-packages'
>>> site.getsitepackages()
['/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
Running a Python shell from a virtual environment "testenv":
>>> import site
>>> site.getsitepackages()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'getsitepackages'
I'm running "Python 2.7.3 (default, Aug 1 2012, 05:14:39)" with "[GCC 4.6.3] on linux2" on Ubuntu. I can probably cobble something together with try-except blocks, but it seems like there should be some variable set / returned by distutils / setuptools. (I'm agnostic about which branch to use, as long as it works.)
Thanks. | 0 | python,path,installation,setup.py | 2013-06-10T18:21:00.000 | 0 | 17,030,327 | This will probably not answer your question, but if you need to access the source code of a package you have installed, or any other file within this package, the best way to do it is to install this package in develop mode (by downloading the sources, putting it wherever you want and then running python setup.py develop in the base directory of the package sources). This way you know where the package is found. | 0 | 3,205 | false | 0 | 1 | Detect python package installation path from within setup.py | 17,296,790 |
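For completeness, the try-except fallback hinted at in the question could look roughly like this sketch:

```python
def guess_site_packages():
    """Best-effort guess at the site-packages path, inside or outside a virtualenv."""
    try:
        import site
        # Present on a real system install, missing inside a virtualenv
        # (as the traceback in the question shows).
        return site.getsitepackages()[0]
    except AttributeError:
        from distutils.sysconfig import get_python_lib
        return get_python_lib()  # works inside the virtualenv

print(guess_site_packages())
```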
1 | 4 | 0 | 0 | 6 | 1 | 0 | 0 | I am looking for an easy-to-use tool which can visualize the 'inner working' of a class, written e.g. in PHP. What I would like to see are the different class methods, and how they are related (method A calls method B etc). Is there such a tool to create such a graph?
As a further step, maybe there is a tool which also visualizes the 'inner working' of a class in a reverse-engineering way, showing how the workflow really goes, i.e. which methods are called in which cases, including all the if-else decisions?
If anyone can refer me to such a tool (preferably for PHP and Python) I would appreciate it. | 0 | php,python,code-analysis | 2013-06-11T15:54:00.000 | 0 | 17,048,540 | Although a lot of suggestions point towards pycallgraph and phpcallgraph I don't think these will do what you want to do - these are for runtime analysis, whereas what it sounds like you want to do static analysis.
I'm not aware of any tools for this, but, given that you're only interested in the workings of a single class and the relationships within that class, with a little effort you should be able to hack something together in your scripting language of choice which
1. Parses all function names and variable declarations inside the class and stores them somewhere.
2. Uses the information from step 1 to identify variable usages, variable assignments and function calls, along with the functions in which these occur.
3. Converts this information into the graph format used by dot, and then uses dot to generate a directed graph showing dependencies.
Given the effort involved, if the class is not too large I would be tempted just to do it by hand!
Good luck, and if you do find a solution I would love to see it. | 0 | 885 | false | 0 | 1 | What tools are available to visualize in-class dependencies (e.g. for PHP)? | 17,223,867
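For the Python side, a rough sketch of those three steps using the standard ast module; the input file name is made up, and only self.method() calls are detected:

```python
import ast

source = open("myclass.py").read()  # hypothetical input file
tree = ast.parse(source)

edges = set()
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        for method in node.body:
            if not isinstance(method, ast.FunctionDef):
                continue
            # Step 2: find self.something(...) calls inside this method.
            for call in ast.walk(method):
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Attribute)
                        and isinstance(call.func.value, ast.Name)
                        and call.func.value.id == "self"):
                    edges.add((method.name, call.func.attr))

# Step 3: emit dot format; render with `dot -Tpng deps.dot -o deps.png`.
print("digraph deps {")
for caller, callee in sorted(edges):
    print('  "%s" -> "%s";' % (caller, callee))
print("}")
```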
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I'm using the Python C API, and numerous times now I've tried using PySys_SetPath() to redirect the interpreter to a path where I've stored all of my scripts. Yet, every time I try it, I get the following error:
Unhandled exception at 0x1e028482 in app.exe: 0xC0000005: Access violation reading location 0x00000004.
I use it in the following syntax: PySys_SetPath("/Python/"). Is that incorrect? Why does it keep crashing? Thanks in advance. | 0 | c++,python,c,api | 2013-06-11T23:25:00.000 | 1 | 17,055,472 | I had the same problem, but when I changed all the backslashes (\) to forward slashes (/) and added a . at the beginning of the path, it worked; i.e. the path should look something like PySys_SetPath("./Python/") or PySys_SetPath("C:/full/path/Python/"). | 0 | 3,054 | false | 0 | 1 | Why won't PySys_SetPath() work? | 44,001,898
2 | 4 | 0 | 1 | 26 | 0 | 0.049958 | 0 | We currently have pytest with the coverage plugin running over our tests in a tests directory.
What's the simplest way to also run doctests extracted from our main code? --doctest-modules doesn't work (probably since it just runs doctests from tests). Note that we want to include doctests in the same process (and not simply run a separate invocation of py.test) because we want to account for doctest in code coverage. | 0 | python,testing,pytest,doctest | 2013-06-12T00:55:00.000 | 0 | 17,056,138 | Could you try with the repo version of pytest and paste a session log? I'd think --doctest-modules should pick up any .py files. | 0 | 7,851 | false | 0 | 1 | How to make pytest run doctests as well as normal tests directory? | 17,083,687 |
2 | 4 | 0 | 0 | 26 | 0 | 0 | 0 | We currently have pytest with the coverage plugin running over our tests in a tests directory.
What's the simplest way to also run doctests extracted from our main code? --doctest-modules doesn't work (probably since it just runs doctests from tests). Note that we want to include doctests in the same process (and not simply run a separate invocation of py.test) because we want to account for doctest in code coverage. | 0 | python,testing,pytest,doctest | 2013-06-12T00:55:00.000 | 0 | 17,056,138 | This worked with doctests as well as with plain tests in one module. For a non-doctest test to be picked up, the standard py.test discovery mechanism applies: a module name with a test prefix and test functions with a test prefix. | 0 | 7,851 | false | 0 | 1 | How to make pytest run doctests as well as normal tests directory? | 53,343,424
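For illustration, a tiny module that exercises both kinds of tests in one file; run it with py.test --doctest-modules:

```python
# test_example.py -- a module name with the "test" prefix

def double(x):
    """Double a number.

    >>> double(21)
    42
    """
    return 2 * x

def test_double():
    # Picked up by normal py.test discovery (function name starts with "test").
    assert double(2) == 4
```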
1 | 3 | 0 | 1 | 7 | 1 | 0.066568 | 0 | It is always said that Python is not as efficient as other languages such as C/C++, Java, etc., and it is also recommended to write the bottleneck part in C. But I've never run into such problems; maybe that's because, most of the time, it is the way you solve the problem rather than the efficiency of the language that matters.
Can anybody illustrate any real circumstances? Some simple codes will be great. | 0 | python,performance | 2013-06-12T08:38:00.000 | 0 | 17,061,118 | There isn't a specific set of circumstances in which C or C++ win. Pretty much any CPU-heavy code you write in C or C++ will run many times faster than the equivalent Python code.
If you haven't noticed, it's simply because, for the problems you've had to solve in Python, performance has never been an issue. | 0 | 1,860 | false | 0 | 1 | Any real examples to show python's inefficiency? | 17,061,230 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | This question just came up in a discussion and i am curious to know .
Is it possible to have a debugger that can debug two languages? For example, if I have a Java program that references/opens/accesses a script (Perl or Python), is it then possible to have a debugger that can debug the Perl/Python script?
Note: logging is not an acceptable debugging technique here. | java,python,perl,debugging | 2013-06-12T10:26:00.000 | 0 | 17,063,168 | Yes: alter the host program (the Java program) so that it runs the Perl program in a debugger.
You might well run into problems with the way that debuggers "attach" in different programming language environments: perl -d assumes a tty is there for interactive commands, whereas Java does something completely different. | 0 | 39 | false | 0 | 1 | Dual debugger Java + (Perl/Python) script | 17,064,348
1 | 7 | 0 | 4 | 8 | 0 | 0.113791 | 0 | I was looking at the option of embedding Python into Fortran 90 to add Python functionality to my existing Fortran 90 code. I know that it can be done the other way around, extending Python with Fortran 90 using f2py from numpy, but I want to keep my super-optimized main loop in Fortran and add Python to do some additional tasks / evaluate further developments before I can do them in Fortran, and also to ease code maintenance. I am looking for answers to the following questions:
1) Is there a library that already exists from which I can embed python into fortran? (I am aware of f2py and it does it the other way around)
2) How do we take care of data transfer from fortran to python and back?
3) How can we have a call back functionality implemented? (Let me describe the scenario a bit....I have my main_fortran program in Fortran, that call Func1_Python module in python. Now, from this Func1_Python, I want to call another function...say Func2_Fortran in fortran)
4) What would be the impact of embedding the Python interpreter inside Fortran in terms of performance, e.g. loading time, running time, sending data (a large array in double precision) across, etc.?
Thanks a lot in advance for your help!!
Edit1: I want to set the direction of the discussion right by adding some more information about the work I am doing. I am into scientific computing stuff. So, I would be working a lot on huge arrays / matrices in double precision and doing floating point operations. So, there are very few options other than fortran really to do the work for me. The reason i want to include python into my code is that I can use NumPy for doing some basic computations if necessary and extend the capabilities of the code with minimal effort. For example, I can use several libraries available to link between python and some other package (say OpenFoam using PyFoam library). | 0 | python,fortran,embed | 2013-06-12T21:09:00.000 | 0 | 17,075,418 | There is a very easy way to do this using f2py. Write your python method and add it as an input to your Fortran subroutine. Declare it in both the cf2py hook and the type declaration as EXTERNAL and also as its return value type, e.g. REAL*8. Your Fortran code will then have a pointer to the address where the python method is stored. It will be SLOW AS MOLASSES, but for testing out algorithms it can be useful. I do this often (I port a lot of ancient spaghetti Fortran to python modules...) It's also a great way to use things like optimised Scipy calls in legacy fortran | 1 | 9,062 | false | 0 | 1 | Embed python into fortran 90 | 23,725,918 |
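A rough sketch of the callback mechanism this answer describes, building a toy module inline with numpy.f2py.compile; the names (apply, toy) are made up, a Fortran compiler is required, and f2py.compile behaviour varies a little between numpy versions:

```python
import numpy.f2py

# Fortran subroutine taking a Python callback; the cf2py lines declare the
# argument intents, and the callback is declared EXTERNAL with a REAL*8
# return type, as the answer describes.
fortran_source = """
      subroutine apply(fun, x, y)
      external fun
      real*8 fun, x, y
cf2py intent(in) x
cf2py intent(out) y
      y = fun(x)
      end
"""

# Compile and import the toy module.
numpy.f2py.compile(fortran_source, modulename="toy", verbose=False)
import toy

print(toy.apply(lambda x: x * x, 3.0))  # the lambda is called from Fortran
```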
1 | 2 | 0 | 0 | 2 | 0 | 0 | 1 | I'm using Selenium with Python to test a web application. The app has a Flash component that I'd like to test. The only references I've seen to using Selenium with Flash refer to Flash-Selenium which hasn't been updated in several years. Is testing Flash with Selenium even possible? | 0 | python,flash,selenium | 2013-06-13T19:00:00.000 | 0 | 17,094,940 | As long as you have access to the flash source code it is possible (although it requires some work). To do that you have to expose the flash actions you want to test using selenium. This requires that you make the methods available in Flash to execute via Javascript. Once you can do that, you should be able to automate the process with using selenium's ability to execute javascript. | 0 | 3,667 | false | 0 | 1 | Selenium/Python/Flash - How? | 17,096,754 |
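On the Python side, this boils down to execute_script. In this sketch, the element id and method names are hypothetical placeholders for whatever the Flash movie itself exposes to JavaScript (e.g. via ExternalInterface):

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/page-with-flash")

# "flashApp", setVolume and getVolume are made-up names; the Flash side
# has to register them for JavaScript access itself.
driver.execute_script("document.getElementById('flashApp').setVolume(0.5);")
volume = driver.execute_script(
    "return document.getElementById('flashApp').getVolume();")
print(volume)
driver.quit()
```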
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | I need to send out Instant Messages to a Lync/OCS server from Linux programmatically as an alerting mechanism.
I've looked into using python dbus and pidgin-sipe with finch or pidgin, but they aren't really good for sending one-off instant messages (finch and pidgin need to be running all the time).
Ideally, I'd have a python script or java class that could spit out Instant Messages to users when needed. | 0 | java,python,sip,lync,office-communicator | 2013-06-14T01:00:00.000 | 1 | 17,099,581 | Well, if you are on Lync 2013, you can have a look at UCWA ucwa.lync.com. It's a web service that allows to log in to Lync and use IM, presence, etc.
You can use then any language you want. I played with it using Node on Mac OS X, for example. | 0 | 3,151 | false | 0 | 1 | Sending out IMs to Lync/OCS programmatically | 17,125,543 |
1 | 1 | 0 | 3 | 3 | 1 | 0.53705 | 0 | My project currently uses NumPy, only for memory-efficient arrays (of bool_, uint8, uint16, uint32).
I'd like to get it running on PyPy which doesn't support NumPy. (failed to install it, at any rate)
So I'm wondering: Is there any other memory-efficient way to store arrays of numbers in Python? Anything that is supported by PyPy? Does PyPy have anything of it's own?
Note: array.array is not a viable solution, as it uses a lot more memory than NumPy in my testing. | 0 | python,arrays,numpy,pypy | 2013-06-14T01:39:00.000 | 0 | 17,099,850 | array.array is a memory efficient array. It packs bytes/words etc together, so there is only a few bytes of extra overhead for the entire array.
The one place where numpy can use less memory is when you have a sparse array (and are using one of the sparse array implementations)
If you are not using sparse arrays, you simply measured it wrong.
array.array also doesn't have a packed bool type, so you can implement that as a wrapper around an array.array('I') or a bytearray(), or even just use bit masks with a Python long. | 1 | 919 | false | 0 | 1 | PyPy and efficient arrays | 17,101,084
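A minimal sketch of that packed-bool wrapper idea, storing one bit per flag on top of a bytearray:

```python
class BitArray(object):
    """Packed array of booleans: 1 bit each instead of several bytes."""

    def __init__(self, size):
        self._bits = bytearray((size + 7) // 8)  # round up to whole bytes

    def __getitem__(self, i):
        return bool(self._bits[i // 8] & (1 << (i % 8)))

    def __setitem__(self, i, value):
        if value:
            self._bits[i // 8] |= 1 << (i % 8)
        else:
            self._bits[i // 8] &= ~(1 << (i % 8)) & 0xFF

flags = BitArray(1000)   # ~125 bytes of payload for 1000 booleans
flags[42] = True
print(flags[42], flags[43])
```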
1 | 4 | 0 | 1 | 3 | 0 | 0.049958 | 0 | I was trying to install autoclose.vim to Vim. I noticed I didn't have a ~/.vim/plugin folder, so I accidentally made a ~/.vim/plugins folder (notice the extra 's' in plugins). I then added au FileType python set rtp += ~/.vim/plugins to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in ~/.vim/plugin but not in ~/.vim/plugins? | 0 | python,vim,plugins | 2013-06-15T23:28:00.000 | 1 | 17,128,878 | All folders in the rtp (runtimepath) option need to have the same folder structure as your $VIMRUNTIME ($VIMRUNTIME is usually /usr/share/vim/vim{version}). So it should have the same subdirectory names e.g. autoload, doc, plugin (whichever you need, but having the same names is key). The plugins should be in their corresponding subdirectory.
Let's say you have /path/to/dir (in your case it's ~/.vim) is in your rtp, vim will
look for global plugins in /path/to/dir/plugin
look for file-type plugins in /path/to/dir/ftplugin
look for syntax files in /path/to/dir/syntax
look for help files in /path/to/dir/doc
and so on...
vim only looks for a couple of recognized subdirectories† in /path/to/dir. If you have some unrecognized subdirectory name in there (like /path/to/dir/plugins), vim won't see it.
† "recognized" here means that a subdirectory of the same name can be found in /usr/share/vim/vim{version} or wherever you have vim installed. | 0 | 938 | false | 0 | 1 | Vim plugins don't always load? | 17,131,966 |
2 | 4 | 1 | 1 | 0 | 0 | 0.049958 | 0 | I really would like to start getting into Objective C coding, specifically so I can write applications for iOS.
My coding background is that I have written C# .NET GUI Windows apps and PHP web scripts for years; I've also become a very good Python coder in the past year. I have written hundreds of useful command-line Python scripts, and also a few GUI apps using wxPython successfully. I also wrote VB6 GUI apps way back in the day, and of course, I cut my teeth on QuickBASIC in DOS. ;-)
I understand OOP concepts: I understand classes, methods, properties and the like. I use OOP a lot in Python, and obviously use it extensively in C#.
I haven't actually taken the time to really get good at C or C++, however I am able to write simple "test" programs to accomplish small tasks. The problem is that I understand the syntax just fine, but the APIs can be very different depending on platform, and accomplishing the same thing in C on Linux at the command line is totally different than accomplishing it in Windows in a GUI.
I've looked over a few books out there for iOS coding but they seem to assume little to no programming knowledge and quickly bore me, and I can't easily find the information I really need buried among all of the "here's what an object is" or "this is called a class and a method" stuff...
I also tried the Stanford lectures on iTunes U, but I found myself struggling with the MVC concepts and the idea of setting up different files for "implementation" and "header" and all of that...
Is there any resources that you guys can think of that would be good for me to get started with iOS?
It's also worth noting I have dabbled with PyObjC a little on Mac and therefore do understand a LITTLE about the NS foundation classes and such, and I've also looked at Apple's reference documentation and I'm sure that once I get the basics down I could put good use to it, but I still don't know how to actually get a functional iOS app that does something useful going. | 0 | python,.net,ios | 2013-06-16T22:38:00.000 | 0 | 17,138,389 | I learned to write iOs apps from the CS 193P iPhone Application Development course on iTunes U. It's fantastic and I highly recommend it if you are sure iOs is what you want to do. | 0 | 166 | false | 0 | 1 | Best intro to iOS for Python/PHP/C# Coder | 17,138,453 |
2 | 4 | 1 | 1 | 0 | 0 | 0.049958 | 0 | I really would like to start getting into Objective C coding, specifically so I can write applications for iOS.
My coding background is that I have written C# .NET GUI Windows apps and PHP web scripts for years; I've also become a very good Python coder in the past year. I have written hundreds of useful command-line Python scripts, and also a few GUI apps using wxPython successfully. I also wrote VB6 GUI apps way back in the day, and of course, I cut my teeth on QuickBASIC in DOS. ;-)
I understand OOP concepts: I understand classes, methods, properties and the like. I use OOP a lot in Python, and obviously use it extensively in C#.
I haven't actually taken the time to really get good at C or C++, however I am able to write simple "test" programs to accomplish small tasks. The problem is that I understand the syntax just fine, but the APIs can be very different depending on platform, and accomplishing the same thing in C on Linux at the command line is totally different than accomplishing it in Windows in a GUI.
I've looked over a few books out there for iOS coding but they seem to assume little to no programming knowledge and quickly bore me, and I can't easily find the information I really need buried among all of the "here's what an object is" or "this is called a class and a method" stuff...
I also tried the Stanford lectures on iTunes U, but I found myself struggling with the MVC concepts and the idea of setting up different files for "implementation" and "header" and all of that...
Is there any resources that you guys can think of that would be good for me to get started with iOS?
It's also worth noting I have dabbled with PyObjC a little on Mac and therefore do understand a LITTLE about the NS foundation classes and such, and I've also looked at Apple's reference documentation and I'm sure that once I get the basics down I could put good use to it, but I still don't know how to actually get a functional iOS app that does something useful going. | 0 | python,.net,ios | 2013-06-16T22:38:00.000 | 0 | 17,138,389 | I have gotten more from Erica Sadun's books than any of the others, personally. iOS apps use a lot of animation and graphics, by necessity, and her code examples are clean and concise. They aren't beginner's books but you sound as though you're not a beginning coder. They hit on a lot of topics it is hard to find much on.
If you're willing to work through the sample programs, I found iPad iOS 6 Development Essentials to be comprehensive (Neil Smith). However, it tends to focus on the visual IDE of xCode which I think is lousy and chose not to use at all; if you plan to use it, then that would be a good resource imo. Also, I got a book that covered Objective C only (Aaron Hillegass) which I thought was good. The iOS book from the same author was not good for me, because it depended on you working prior chapter examples to proceed to later chapters, which for me was a waste of time, so I bailed out of it quickly. I also got Pro Core Data (Privat and Warner) which I found to be of limited (actually, little) value for the same reason as the Hillegass iOS book -- the examples are too big and not to the point.
And, of course, Google. | 0 | 166 | false | 0 | 1 | Best intro to iOS for Python/PHP/C# Coder | 17,140,686 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I'm looking at using inotify to watch about 200,000 directories for new files. On creation, the watching script will process the file and then it will be removed. Because it is part of a more complex system with many processes, I want to benchmark this and get system performance statistics on CPU, memory, disk, etc. while the tests are run.
I'm planning on running the inotify script as a daemon and having a second script generating test files in several of the directories (randomly selected before the test).
I'm after suggestions for the best way to benchmark the performance of something like this, especially the impact it has on the Linux server it's running on. | 0 | python,linux,benchmarking,inotify | 2013-06-16T23:09:00.000 | 1 | 17,138,569 | I would try and remove as many other processes as possible in order to get a repeatable benchmark. For example, I would set up a separate, dedicated server with an NFS mount to the directories. This server would only run inotify and the Python script. For simple server measurements, I would use top or ps to monitor CPU and memory.
The real test is how quickly your script "drains" the directories, which depends entirely on your process. You could profile the script and see where it's spending the time. | 0 | 780 | false | 0 | 1 | Benchmarking System performance of Python System | 17,139,897 |
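For profiling the watcher itself, the standard library is enough; watcher.main below is a hypothetical stand-in for the real inotify processing loop:

```python
import cProfile
import pstats

import watcher  # hypothetical module containing the processing loop

# Profile one full "drain" of the directories and dump the stats to a file.
cProfile.run("watcher.main()", "watch.prof")

stats = pstats.Stats("watch.prof")
stats.sort_stats("cumulative").print_stats(20)  # show the top 20 hotspots
```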
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I'm writing an app that converts different images to JPG. It operates over a complex directory structure. There, a directory may include other directories, image files (JPG, GIF, PNG, TIFF), PDF files, RAR/ZIP archives, which in turn may include anything of the above. The app finds everything that can be converted to an image and places the resulting JPGs into a separate folder.
How do I write integration tests to test the conversion of images? Specifically, how should I fake the complex directory structure with all the files?
Currently i just store a sample directory structure, which i manually assembled out of various image, PDF and archive files, in a tests/ directory. In a setUp method i put this sample directory in place of the actual data and run the code. I had an idea to generate all these sample files myself (generate JPGs via Imagemagick, for example), but it proved hard.
How integration testing on images is usually done? | 0 | python,image,unit-testing,language-agnostic,integration-testing | 2013-06-18T14:42:00.000 | 0 | 17,171,843 | Do you write your own library to convert images of you just use existing library? In the latter case you simply do not test it. Author has already tested it somehow. You just need to create an abstraction layer between your code and the image library you use. Then you can simply check if your code calls the library with desired parameters.
If you really insist on testing pictures then you need to make the transformation deterministic (and compare actual result with expected result) or you need to make comparison a bit less strict (from ignoring date fields to OCR recognizing the image).
Testing files is way easier (you do not need probability-based OCR): check whether your program placed all the files in the expected location. | 0 | 271 | true | 0 | 1 | Integration testing and images | 21,843,350
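A sketch of that "check the call parameters" idea using the mock library; the converter module and its image_lib attribute are made-up names standing in for your abstraction layer:

```python
import unittest
import mock  # unittest.mock in Python 3.3+

import converter  # hypothetical module wrapping the image library

class ConvertTest(unittest.TestCase):
    def test_png_is_converted_to_jpg(self):
        with mock.patch.object(converter, "image_lib") as fake_lib:
            converter.convert("in.png", "out/")
            # Assert the abstraction layer called the library as expected.
            fake_lib.save_as_jpeg.assert_called_once_with("in.png", "out/in.jpg")

if __name__ == "__main__":
    unittest.main()
```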
1 | 1 | 0 | 1 | 6 | 0 | 0.197375 | 0 | I have the following setup:
Django-Celery project A registers task foo
Project B: Uses Celery's send_task to call foo
Project A and project B have the same configuration: SQS, msgpack for serialization, gzip, etc.
Each project lives on a different github repository
I've unit-tested calls to "foo" in project A, without using Celery at all, just foo(1,2,3) and assert the result. I know that it works.
I've unit-tested that send_task in project B sends the right parameters.
What I'm not testing, and need your advice on, is the integration between the two projects. I would like to have a unit test that would:
Start a worker in the context of project A
Send a task using the code of project B
Assert that the worker started in the first step gets the task, with the parameters I sent in the second step, and that the foo function returned the expected result.
It seems to be possible to hack this by using python's subprocess and parsing the output of the worker, but that's ugly. What's the recommended approach to unit-testing in cases like this? Any code snippet you could share? Thanks! | 0 | python,unit-testing,integration-testing,celery | 2013-06-19T02:20:00.000 | 1 | 17,181,923 | I'm not sure if it's worthwhile to explicitly test the transportation mechanism (i.e. the sending of the task parameters through celery) using a unit test. Personally, I would write my test as follows (can be split up in several unit tests):
Use the code from project B to generate a task with sample parameters.
Encode the task parameters using the same method used by Celery (i.e. pickling the parameters or encoding them as JSON).
Decode the task parameters again, checking that no corruption occurred.
Call the task function, making sure that it produces the correct result.
Perform the same encoding/decoding sequence for the results of the task function.
Using this method, you will be able to test that
The task generation works as intended
The encoding & decoding of the task parameters and results works as expected
If necessary, you can still independently test the functioning of the transportation mechanism using a system test. | 0 | 774 | false | 1 | 1 | Running a Celery worker in unittest | 18,316,377 |
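A sketch of steps 2 to 4 with JSON as the serializer; foo here stands in for the real task function from project A:

```python
import json

def foo(a, b, c):          # stand-in for the real task in project A
    return a + b + c

def test_roundtrip():
    args = (1, 2, 3)
    # Encode/decode the way the broker would (JSON in this sketch;
    # swap in msgpack or pickle to match your Celery config).
    decoded = json.loads(json.dumps(args))
    assert tuple(decoded) == args               # no corruption in transit
    result = foo(*decoded)
    assert json.loads(json.dumps(result)) == 6  # the result survives encoding too

test_roundtrip()
```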
1 | 5 | 0 | 3 | 25 | 1 | 0.119427 | 0 | I would like to have a function in my class, which I am going to use only inside methods of this class. I will not call it outside the implementations of these methods. In C++, I would use a method declared in the private section of the class. What is the best way to implement such a function in Python?
I am thinking of using a static decorator for this case. Can I use a function without any decorators and the self word? | 0 | python,oop,static-methods | 2013-06-19T14:08:00.000 | 0 | 17,193,457 | Python just doesn't do private. If you like you can follow convention and precede the name with a single underscore, but it's up to other coders to respect that in a gentlemanly† fashion
† or gentlewomanly | 0 | 48,236 | false | 0 | 1 | Private methods in Python | 17,193,588 |
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | The business case...
The app server (Ubuntu/nginx/postgresql/python) that I use writes gzipped system log files as root to /var/log
I need to present data from these log files to users' browsers
My approach
I need to do a fair bit of searching and string manipulation server side so I have a python script that deals with the opening and processing and then returns a nicely formatted JSON result set. The python (cgi) script is then called using ajax from the web page.
My problem
The script works perfectly when called from the command line as SU, but (obviously) the file-opening method I'm using (gzip.open(filename)) fails when invoked as user www-data by the webserver.
Other useful info
The app server concerned is (contractually rather than physically) a bit of a black box - I have SU access, I can write scripts, I can read anything but I can't change file permissions, add additional python libs or or mess with config.
The subset of users who would use this log extract also have the SU password, so they could be presented with a login dialog whose input I could pass to the script.
Given the restrictions I have, how would you go about it? | 0 | python,file-io,permissions | 2013-06-20T07:05:00.000 | 1 | 17,207,280 | One option would be to do this somewhat sensitive "su" work in a background process that is disconnected from the web.
Likely running via cron, this script would take the root-owned log files and possibly change them into a format that the web-side code could deal with easily, such as loading them into a database, or merely unzip them and place them in a different location with slightly more relaxed permissions.
Then the web-side code could easily have access to the data without having to jump through the "su" hoops.
From my perspective this plan does not seem to violate your contractual rules. The web server config, permissions, etc remain intact. | 0 | 746 | true | 0 | 1 | Open root owned system files for reading with python | 17,207,385 |
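A sketch of what that cron job might look like; the paths and the file pattern are assumptions:

```python
#!/usr/bin/env python
# Run from root's crontab: extract gzipped logs to a www-data-readable spot.
import glob
import gzip
import os

SRC = "/var/log/myapp.*.gz"   # hypothetical log pattern
DST = "/var/lib/myapp-logs"   # directory readable by www-data

for path in glob.glob(SRC):
    out_path = os.path.join(DST, os.path.basename(path)[:-3])  # strip .gz
    if os.path.exists(out_path):
        continue  # already extracted on a previous run
    with gzip.open(path) as src, open(out_path, "wb") as dst:
        dst.write(src.read())
```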
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I would like to run the command python abc.py in the Windows command prompt when a button on an HTML page is clicked. The Python file is located at C:/abc.py. I would like to know how to code the HTML page to do this. Thank you for the help. | 0 | html,windows,python-2.7,command,command-prompt | 2013-06-20T15:49:00.000 | 0 | 17,218,183 | I believe the correct answer is you cannot. Feel free to let me know otherwise if you find out a way to do it. | 0 | 6,859 | false | 1 | 1 | Running the Command on Windows Command prompt using HTML button | 17,218,531
1 | 3 | 0 | 20 | 40 | 1 | 1 | 0 | I was curious if there was any indication of which of operator.itemgetter(0) or lambda x:x[0] is better to use, specifically in sorted() as the key keyword argument, as that's the use that springs to mind first. Are there any known performance differences? Are there any PEP-related preferences or guidance on the matter? | 0 | python,python-2.7,python-3.x | 2013-06-21T20:16:00.000 | 0 | 17,243,620 | Leaving aside the speed issue, which is often based on where you make the itemgetter or lambda function, I personally find that itemgetter is really nice for getting multiple items at once: for example, itemgetter(0, 4, 3, 9, 19, 20) will create a function that returns a tuple of the items at the specified indices of the listlike object passed to it. To do that with a lambda, you'd need lambda x: (x[0], x[4], x[3], x[9], x[19], x[20]), which is a lot clunkier. (And then some packages such as numpy have advanced indexing, which works a lot like itemgetter() except built in to normal bracket notation.) | 0 | 10,179 | false | 0 | 1 | operator.itemgetter or lambda | 17,243,722
3 | 5 | 0 | 1 | 5 | 1 | 0.039979 | 0 | I come from a PHP (as well as a bunch of other stuff) background and I am playing around with Python. In PHP when I want to include another file I just do include or require and everything in that file is included.
But it seems the recommended way to do stuff in python is from file import but that seems to be more for including libraries and stuff? How do you separate your code amongst several files? Is the only way to do it, to have a single file with a whole bunch of function calls and then import 15 other files? | 0 | python,python-2.7,import,module | 2013-06-22T19:39:00.000 | 0 | 17,254,603 | Interesting question. As you know, in PHP, you can separate your code by using include, which literally takes all the code in the included file and puts it wherever you called include. This is convenient for writing web applications because you can easily divide a page into parts (such as header, navigation, footer, etc).
Python, on the other hand, is used for way more than just web applications. To reuse code, you must rely on functions or good old object-oriented programming. PHP also has functions and object-oriented programming FYI.
You write functions and classes in a file and import it in another file. This lets you access the functions or use the classes you defined in the other file.
Let's say you have a function called foo in the file file1.py. From file2.py, you can write
import file1. Then, call foo with file1.foo(). Alternatively, write from file1 import foo and then you can call foo with foo(). Note that the from lets you call foo directly. For more info, look at the python docs. | 0 | 7,295 | false | 0 | 1 | Little confused with import python | 17,254,692 |
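Spelled out, the two files from this example could look like this sketch:

```python
# file1.py
def foo():
    return "hello from file1"

# file2.py
import file1            # runs file1.py once and binds the module name
print(file1.foo())      # qualified call

from file1 import foo   # or bind the name directly
print(foo())
```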
3 | 5 | 0 | 0 | 5 | 1 | 0 | 0 | I come from a PHP (as well as a bunch of other stuff) background and I am playing around with Python. In PHP when I want to include another file I just do include or require and everything in that file is included.
But it seems the recommended way to do stuff in python is from file import but that seems to be more for including libraries and stuff? How do you separate your code amongst several files? Is the only way to do it, to have a single file with a whole bunch of function calls and then import 15 other files? | 0 | python,python-2.7,import,module | 2013-06-22T19:39:00.000 | 0 | 17,254,603 | There's an execfile() function which does something vaguely comparable with PHP's include, here, but it's almost certainly something you don't want to do. As others have said, it's just a different model and a different programming need in Python. Your code is going to go from function to function, and it doesn't really make a difference in which order you put them, as long as they're in an order where you define things before you use them. You're just not trying to end up with some kind of ordered document like you typically are with PHP, so the need isn't there. | 0 | 7,295 | false | 0 | 1 | Little confused with import python | 17,257,186 |
3 | 5 | 0 | 1 | 5 | 1 | 0.039979 | 0 | I come from a PHP (as well as a bunch of other stuff) background and I am playing around with Python. In PHP when I want to include another file I just do include or require and everything in that file is included.
But it seems the recommended way to do stuff in python is from file import but that seems to be more for including libraries and stuff? How do you separate your code amongst several files? Is the only way to do it, to have a single file with a whole bunch of function calls and then import 15 other files? | 0 | python,python-2.7,import,module | 2013-06-22T19:39:00.000 | 0 | 17,254,603 | On a technical level, a Python import is very similar to a PHP require, as it will execute the imported file. But since Python isn't designed to ultimately generate an HTML file, the way you use it is very different.
Typically a Python file will on the module level not include much executable code at all, but definitions of functions and classes. You them import them and use them as a library.
Hence having things like header() and footer() makes no sense in Python. Those are just functions. Call them like that, and the result they generate will be ignored.
So how do you split up your Python code? Well, you split it up into functions and classes, which you put into different files, and then import. | 0 | 7,295 | false | 0 | 1 | Little confused with import python | 17,255,642 |
1 | 1 | 0 | 2 | 1 | 1 | 0.379949 | 0 | I want to suppress certain warning messages when Python is running in a test context.
Is there any way to detect this globally in Python? | 0 | python,testing | 2013-06-25T22:16:00.000 | 0 | 17,308,521 | No, you can't really detect whether or not you're in a test context, or you'd do it with a lot of unnecessary processing. For example: having a state variable in the testing package that you set up when you're running your tests. But then you would include that module (or variable) in all of your modules, which would be far from being elegant. Globals are evil.
The best way to implement filtering output based on the execution context is to use the logging module and make all unnecessary warning messages at a low level (like DEBUG) and ignore them when you run your tests.
Another option would be to add a level for all of the messages you explicitly ignore when running the tests. | 0 | 61 | false | 0 | 1 | Is there a way to detect that Python is running a test? | 17,308,545 |
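A sketch of that logging-based approach; the logger name is arbitrary:

```python
import logging

log = logging.getLogger("myapp")

def noisy_operation():
    # Emit the suppressible warnings at DEBUG level ...
    log.debug("this message is uninteresting during tests")
    log.warning("this one should always be visible")

# ... and raise the threshold in the test harness so DEBUG is dropped.
logging.basicConfig(level=logging.INFO)
noisy_operation()
```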
1 | 3 | 0 | 3 | 4 | 1 | 0.197375 | 0 | I've written a Python script, but running it is taking a lot longer than I had anticipated, and I have no obvious candidate for particular lines in the script taking up the runtime.
Is there anything I can put in my code to check how long it's taking to run through each line?
Many thanks. | 0 | python,performance,profiling | 2013-06-26T07:46:00.000 | 0 | 17,314,366 | timeit is a standard module since python 2.3 take a look at the documentation for it. | 0 | 6,436 | false | 0 | 1 | Check running time per line in python | 17,314,426 |
2 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | Is there any way to algorithmically determine audio quality from a .wav or .mp3 file?
Basically I have users with diverse recording setups (i.e. they are from all over the world and I have no control over them) recording audio to mp3/wav files. At which point the software should determine whether their setup is okay or not (tragically, for some reason they are not capable of making this determination just by listening to their own recordings, and so occasionally we get recordings that are basically impossible to understand due to low volume or high noise).
I was doing a volume check to make sure the microphone level was okay; unfortunately this misses cases where the volume is high but the clarity is low. I'm wondering if there is some kind of standard scan I can do (ideally in Python) that detects when there is a lot of background noise.
I realize one possible solution is to ask them to record total silence and then compare to the spoken recording and consider the audio "bad" if the volume of the "silent" recording is too close to the volume of the spoken recording. But that depends on getting a good sample from the speaker both times, which may or may not be something I can depend on.
So I'm wondering if instead there's just a way to scan through an audio file (these would be ~10 seconds long) and recognize whether the sound file is "noisy" or clear. | 0 | python,audio,noise | 2013-06-26T14:37:00.000 | 0 | 17,323,142 | Not quite my field, but I suspect that if you get a spectrum (do a Fourier transform, maybe) and compare "good" and "noisy" recordings, you will find that the noise contributes to a cross-spectrum level that is higher in the bad recordings than in the good. Take a look at the signal processing section in SciPy; this can probably help. | 0 | 3,067 | false | 1 | 1 | Determining sound quality from an audio recording? | 17,323,482
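A rough sketch of that idea: read the file, take an FFT, and compare energy outside a rough speech band against energy inside it. The band edges and the 0.5 threshold are guesses to be tuned against known good and bad samples:

```python
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("clip.wav")     # mono 16-bit assumed
spectrum = np.abs(np.fft.rfft(samples.astype(float)))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

speech = (freqs > 300) & (freqs < 3400)      # rough voice band
signal_energy = spectrum[speech].sum()
noise_energy = spectrum[~speech].sum()

if noise_energy > 0.5 * signal_energy:       # threshold is a guess
    print("recording looks noisy")
```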
2 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 0 | Is there any way to algorithmically determine audio quality from a .wav or .mp3 file?
Basically I have users with diverse recording setups (i.e. they are from all over the world and I have no control over them) recording audio to mp3/wav files. At which point the software should determine whether their setup is okay or not (tragically, for some reason they are not capable of making this determination just by listening to their own recordings, and so occasionally we get recordings that are basically impossible to understand due to low volume or high noise).
I was doing a volume check to make sure the microphone level was okay; unfortunately this misses cases where the volume is high but the clarity is low. I'm wondering if there is some kind of standard scan I can do (ideally in Python) that detects when there is a lot of background noise.
I realize one possible solution is to ask them to record total silence and then compare to the spoken recording and consider the audio "bad" if the volume of the "silent" recording is too close to the volume of the spoken recording. But that depends on getting a good sample from the speaker both times, which may or may not be something I can depend on.
So I'm wondering if instead there's just a way to scan through an audio file (these would be ~10 seconds long) and recognize whether the sound file is "noisy" or clear. | 0 | python,audio,noise | 2013-06-26T14:37:00.000 | 0 | 17,323,142 | It all depends on what your quality problems are, which is not 100% clear from your question, but here are some suggestions:
In the case where volume is high and clarity is low, I'm guessing the problem is that the user has the input gain too high. After the recording, you can simply check for distortion. Even better, you can use Automatic Gain Control (AGC) during recording to prevent this from happening in the first place.
In the case of too much noise, I'm assuming the issue is that the speaker is too far from the mike. In this case Steve's suggestion might work, but to make it really work, you'd need to do a ton of work comparing sample recordings and developing statistics to see how you can discriminate. In practice, I think this is too much work. A simpler alternative that I think will be easier and more likely to work (although not necessarily guaranteed) would be to create an envelope of your signal, then create a histogram from that and see how the histogram compares to existing good and bad recordings. If we are talking about speech only, you could divide the signal into three frequency bands (with a time-domain filter, not an FFT) to give you an idea of how much is noise (the high and low bands) and how much is sound you care about (the center band).
Again, though, I would use an AGC during recording, and if the AGC finds it needs to set the input gain too high, it's probably a bad recording. | 0 | 3,067 | false | 1 | 1 | Determining sound quality from an audio recording? | 17,326,588
1 | 2 | 0 | 3 | 4 | 0 | 0.291313 | 0 | How can you change the font settings for the input boxes and message text in EasyGUI? I know you have to edit a file somewhere, but that's about it. Exactly how to do it and what to edit would be appreciated.
Thanks in advance. | 0 | python-3.x,easygui | 2013-06-26T16:14:00.000 | 0 | 17,325,299 | In addition to what @Benjooster answered previously:
Apparently sometimes the font settings are not in easygui.py, but rather in
Python27\Lib\site-packages\easygui\boxes\state.py | 0 | 6,064 | false | 0 | 1 | Python EasyGUI module: how to change the font | 32,529,223 |
1 | 1 | 0 | 3 | 3 | 0 | 1.2 | 0 | I have a Python code calling some C code (.so file).
Is there a way, from within the C code, to get the line number it has been called from on the Python side? | 0 | python,c,python-c-api | 2013-06-27T16:29:00.000 | 0 | 17,348,492 | I eventually found the PyFrame_GetLineNumber(PyFrameObject *f) C function, whose source is located in frameobject.c. | 0 | 208 | true | 0 | 1 | Python calling C: how could C send Python's line number it has been called from? | 17,471,190
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | I am trying to use C++ lib with python using SWIG,
my problem is that the main class symbol is missing,
$ ldd -r -d _rf24.so 2>&1|grep RF24
undefined symbol: _ZN4RF24C1Ehh (./_rf24.so)
$ objdump -t librf24-bcm.so.1.0 |grep RF24
.
.
.
000032cc g F .text 00000044 _ZN4RF24C1Ehhj
000032cc g F .text 00000044 _ZN4RF24C2Ehhj
.
.
.
python exception:
ImportError: ./_rf24.so: undefined symbol: _ZN4RF24C1Ehh
I tried using the lib objs from the original Makefile or tried to compile them with some flags but the result is the same
build lines:
$ gcc -c RF24_wrap.cxx -I/usr/include/python2.7
$ gcc -lstdc++ -shared bcm2835.o RF24.o RF24_wrap.o -o _rf24.so
RF24.i (the SWIG file):
%module rf24
%{
#include "RF24.h"
%}
%include "RF24.h"
//%include "bcm2835.h"
%include "carrays.i"
%array_class(char, byteArray);
RF24.h (relevant part of the class header file):
.
.
.
// bla bla bla enums...
class RF24
{
private:
// bla bla bla
protected:
// bla bla bla
public:
RF24(uint8_t _cepin, uint8_t _cspin);
RF24(uint8_t _cepin, uint8_t _cspin, uint32_t spispeed);
//bla bla bla | 0 | c++,python,swig,porting,symbols | 2013-06-28T10:44:00.000 | 0 | 17,362,909 | Problem solved! After running the symbols through c++filt, I found out that one of the constructors declared in the header wasn't defined in the library; after deleting it, the problem was solved. | 0 | 215 | false | 0 | 1 | Missing / wrong signature when converting c++ library to python using SWIG | 37,594,931
1 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 1 | I'd like to scrape contact info from about 1000-2000 different restaurant websites. Almost all of them have contact information either on the homepage or on some kind of "contact" page, but no two websites are exactly alike (i.e., there's no common pattern to exploit). How can I reliably scrape email/phone # info from sites like these without specifically pointing the Python script to a particular element on the page (i.e., the script needs to be structure agnostic, since each site has a unique HTML structure, they don't all have, e.g., their contact info in a "contact" div).
I know there's no way to write a program that will be 100% effective, I'd just like to maximize my hit rate.
Any guidance on this—where to start, what to read—would be much appreciated.
Thanks. | 0 | python,web-scraping,beautifulsoup,screen-scraping | 2013-06-28T14:03:00.000 | 0 | 17,366,528 | In most countries the telephone number follows one of a very few well defined patterns that can be matched with a simple regexp - likewise email addresses have an internationally recognised format - simply scrape the homepage, contacts or contact us page and then parse with regular expressions - you should easily achieve better than 90% accuracy.
Alternatively of course you simply submit the restaurant name and town to the local equivalent of the Yellow Pages web site. | 0 | 2,766 | false | 1 | 1 | Scraping Contact Information from Several Unique Sites with Python | 17,366,729 |
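A minimal sketch of the regexp approach; both patterns are deliberately loose, and the phone pattern is a naive illustration rather than a full international matcher:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
# Naive phone pattern: optional country code, then groups of digits.
PHONE_RE = re.compile(r"(?:\+?\d{1,3}[\s.-]?)?(?:\(?\d{2,4}\)?[\s.-]?){2,4}\d{2,4}")

def extract_contacts(html):
    """Return (emails, phones) found anywhere in the page text."""
    return (sorted(set(EMAIL_RE.findall(html))),
            sorted(set(PHONE_RE.findall(html))))

emails, phones = extract_contacts("Call (555) 123-4567 or mail info@example.com")
print(emails, phones)
```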
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I am new at pyramid framework and I recently started to play with it. However, I'm a bit confused about how a tarball created with 'sdist' gets installed in a production virtual environment. My scenario is as follows:
After finishing a project I created in pyramid called 'myapp', I run: python setup.py sdist in order to create the distribution tarball.
The tarball gets created under 'dist' folder and it contains all my project sources as well as the .ini files (development and production).
I then create a new production virtual environment by executing: virtualenv --no-site-packages envprod
To install the 'myapp' distribution tarball I execute: envprod/bin/easy_install src/myapp/dist/myapp0-0.tar.gz.
It then starts to download and install all the requirements for the project and it also installs the sources of my application under envprod/lib/python2.7/site-packages/myapp
The problem is that neither development.ini nor production.ini are installed in the new prod environment so I have no way to execute 'pserve' since it needs the .ini file.
Am I doing something wrong? Or is there a way to start serving 'myapp' without the .ini files?
Thanks! | 0 | python,pyramid | 2013-06-28T21:53:00.000 | 0 | 17,374,249 | As stated by Mikhail, code and configuration are not the same.
You may want to deploy your package many times without overwriting already-installed configuration and data.
Please note that the db, if present and on file system (sqlite), is not distributed inside the package as well. I guess it's done to allow you to update the code easily.
If your intent is to deploy the package in a production environment, all you need to do is copy both the ini you want to use and the database (if sqlite), or run the initialize_db script (which is installed in bin), before starting the app.
Note that it's always a good idea to test the production ini in a non production environment to be sure that settings are good for you, in particular about logging, because you'll have no console logging.
Though it's good enough for dev/prod environment, it may be a problem for distribution to 3rd party.
I'm just trying to address similar problems and I think that the main point is to properly configure setup.py and MANIFEST.in, to include what you need in the egg and properly extract them when installing.
The problem seems to be that easy_install skips all files outside your app folder (such as the ini files, which are one directory back).
A workaround for that is to skip easy_install, just untar your tarball, and then enter your project folder and use:
pip install -e . --pre
(the --pre is only required if you included pre-release packages in your project, maybe because they are a dependency of formalchemy, as I did).
This seems the easiest way to distribute to other people.
You may want to create the database somehow, anyway, to have it work, unless you include it in the distribution explicitly by adding it to the MANIFEST file. | 0 | 637 | false | 0 | 1 | Deploy a pyramid application with sdist | 27,862,172
1 | 1 | 0 | 2 | 5 | 0 | 0.379949 | 0 | I made a small application that prints unicode special characters(i.e. superscript, subscript...). When it runs locally there are no problems but when it runs in a ssh session I always get a UnicodeEncodeError.
Specifically: UnicodeEncodeError 'ascii' can't encode characters in position 0-1: ordinal not in range(128)
I tried different ssh clients and computers and double-checked the session's encoding, but the result is the same.
This is really weird. Why does this happen? Is this really related to ssh? | 0 | python,unicode,encoding,python-3.x,ssh | 2013-06-28T22:22:00.000 | 0 | 17,374,526 | The problem might not be your Python code; check your ssh environment. LANG should be en_US.UTF-8 (i.e. containing UTF-8), not ASCII. | 0 | 1,171 | false | 0 | 1 | UnicodeEncodeError when using python from ssh | 17,374,821
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | I've written some python modules that I'd like to be able to import anytime on Mac OS X. I've done some googling and I've gotten some mixed responses so I'd like to know what the "best" practice is for storing those files safely.
I'm running Python2.7 and I want to make sure I don't mess with the Mac install of Python or anything like that. Thanks for the help | 0 | python,macos,python-2.7 | 2013-07-01T14:46:00.000 | 1 | 17,407,276 | The standard directory which is already searched by python depends on the version of python.
For the Apple installed python 2.7 it is /Library/Python/2.7/site-packages
the README in that directory says
This directory exists so that 3rd party packages can be installed
here. Read the source for site.py for more details. | 0 | 498 | true | 0 | 1 | Good location to store .py files on Mac | 17,407,429 |
1 | 1 | 0 | 2 | 2 | 0 | 1.2 | 0 | I'm using Eclipse / PyDev and PyUnit on OSX for development. It was recommended to me that I use Nose to execute our suite of tests.
When I configure Nose as the test runner, however, output from the interactive console (either standalone or during debugging) disappears. I can type commands but do not see any output.
Is this normal, or am I missing some configuration? | 0 | python,eclipse,nose,python-unittest | 2013-07-01T16:18:00.000 | 1 | 17,409,127 | I eventually found in the Preferences > PyDev > PyUnit menu that adding -s to the Parameters for test running stopped this. The parameter prevents the capture of stdout that nose does by default.
The alternate --nocapture parameter should work too. | 0 | 969 | true | 0 | 1 | Where does console output go when Eclipse PyUnit Test Runner configured to use Nose | 19,227,424 |
1 | 3 | 0 | 0 | 3 | 0 | 0 | 0 | I'm using Libsvm in a 5x2 cross validation to classify a very huge amount of data, that is, I have 47k samples for training and 47k samples for testing in 10 different configurations.
I usually use the Libsvm script easy.py to classify the data, but it's taking so long: I've been waiting for results for more than 3 hours and nothing, and I still have to repeat this procedure 9 more times!
Does anybody know how to use libsvm faster with a very huge amount of data? Do the C++ Libsvm functions work faster than the Python functions? | 0 | python,c++,svm,libsvm | 2013-07-03T20:24:00.000 | 0 | 17,457,460 | easy.py is a script for training and evaluating a classifier. It does a meta-training of the SVM parameters with grid.py. In grid.py there is a parameter "nr_local_worker" which defines the number of threads. You might wish to increase it (check processor load). | 1 | 3,432 | false | 0 | 1 | Large training and testing data in libsvm | 18,509,671
1 | 1 | 0 | 1 | 1 | 1 | 0.197375 | 0 | The other day I was coding when suddenly I discovered myself struggling with a simple problem but confuse solution (at least in a pythonic way to go).
The code was supposed to just download some files, for that, it would call some DownloadController passing it a callback so to received events such as init, progress, error and success.
However, my code didn't need at all these events. With this came to my mind some solutions
Change DownloadController to have a default callback=None and check for it so to ignore sending events in this case
Have NullCallbackImpl which adheres to callback interface but do nothing (just pass on each event)
I didn't like the first approach because the code would be kind of messy and not well designed.
So, I'm sticking with the second approach... Questions:
How good (or maybe 'how bad') would it be to have a null_callback = mock.Mock()? (using the Python mock library from Michael Foord)
Is there any library that does this?
Or should I stick with creating a NullCallbackImpl implementing each method with a simple pass? | 0 | python,design-patterns,mocking | 2013-07-05T00:10:00.000 | 0 | 17,479,480 | You discovered a new use case for DownloadController - "Let the user customize the callback". It sounds like you have control over the DownloadController source. It could define a DownloadCallback class that exposes the events as methods but does nothing with them. The controller would accept None (do nothing) or anything that implements the DownloadCallback interface.
I think using mock for real code is more than a bit odd... it creates yet another dependency that needs to be met for users of your module. | 0 | 67 | false | 0 | 1 | Which is the recommended way to use the Null Pattern in Python? | 17,479,627 |
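A minimal sketch of the DownloadCallback idea from the answer above (method names are illustrative):

class DownloadCallback(object):
    # Null Object: exposes every event hook, does nothing with them
    def on_init(self, *args): pass
    def on_progress(self, *args): pass
    def on_error(self, *args): pass
    def on_success(self, *args): pass

class DownloadController(object):
    def __init__(self, callback=None):
        # fall back to the do-nothing callback when the caller passes None
        self.callback = callback or DownloadCallback()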
2 | 3 | 0 | 2 | 1 | 1 | 0.132549 | 0 | In Python 3, when I opened a text file with mode string 'rb', and then did f.read(), I was taken aback to find the file contents enclosed in single quotes after the character 'b'.
In Python 2 I just get the file contents.
I'm sure this is well known, but I can't find anything about it in the doco. Could someone point me to it? | 0 | python,file,python-3.x | 2013-07-05T09:49:00.000 | 0 | 17,485,920 | First of all, the Python 2 str type has been renamed to bytes in Python 3, and byte literals use the b'' prefix. The Python 2 unicode type is the new Python 3 str type.
To get the Python 3 file behaviour in Python 2, you'd use io.open() or codecs.open(); Python 3 decodes text files to Unicode by default.
What you see is that for binary files, Python 3 gives you the exact same thing as in Python 2, namely byte strings. What changed then, is that the repr() of a byte string is prefixed with b and the print() function will use the repr() representation of any object passed to it except for unicode values.
To print your binary data as Unicode text with the print() function, decode it to unicode first. But then you could perhaps have opened the file as a text file instead anyway.
The bytes type has some other improvements to reflect that you are dealing with binary data, not text. Indexing individual bytes or iterating over a bytes value gives you int values (between 0 and 255) and not characters, for example. | 0 | 2,045 | false | 0 | 1 | Python 3 file input change in binary mode | 17,486,121 |
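A quick demonstration of the behaviour described above (assuming an existing example.txt):

with open('example.txt', 'rb') as f:   # binary mode returns bytes in both Python 2 and 3
    data = f.read()
print(repr(data))                      # Python 3 shows b'...'; Python 2 shows '...'
print(data.decode('utf-8'))            # decoding first prints the text without the b'' wrapper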
2 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | In Python 3, when I opened a text file with mode string 'rb', and then did f.read(), I was taken aback to find the file contents enclosed in single quotes after the character 'b'.
In Python 2 I just get the file contents.
I'm sure this is well known, but I can't find anything about it in the doco. Could someone point me to it? | 0 | python,file,python-3.x | 2013-07-05T09:49:00.000 | 0 | 17,485,920 | Sometimes we need (needed?) to know whether a text file had single-character newlines (0A) or double character newlines (0D0A).
We used to avoid confusion by opening the text file in binary mode, recognising 0D and 0A, and treating other bytes as regular text characters.
One could port such code by finding all binary-mode reads and replacing them with a new function oldread() that stripped off the added material, but it’s a bit painful.
I suppose the Python theologians thought of keeping ‘rb’ as it was, and adding a new ‘rx’ or something for the new behaviour. It seems a bit high-handed just to abolish something.
But, there it is, the question is certainly answered by a search for ‘rb’ in Lennert’s document. | 0 | 2,045 | false | 0 | 1 | Python 3 file input change in binary mode | 17,494,245 |
1 | 1 | 0 | 4 | 2 | 1 | 1.2 | 0 | Is PEP8 simply a style guide, or does it actually help the interpreter make optimizations so your code runs faster? I'm simply curious, since I really like PEP8 and wanted to know of any benefits other than more readable code. | 0 | python | 2013-07-06T03:17:00.000 | 0 | 17,499,343 | There is a single item in PEP8 that clearly has potential performance consequences:
Code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such).
That is, PEP8 recommends that code be written such that it performs well across a variety of Python implementations. This is a bit hand-wavy, of course (do you have to try all the available implementations?).
Other than that, nothing in PEP8 stands out as likely to impact performance or anything measurable apart from the storage space required for the code itself (e.g. four-space indentation). | 0 | 247 | true | 0 | 1 | Does following the PEP8 guidelines make your code run faster than if you did not follow it? | 17,499,406 |
1 | 2 | 1 | 3 | 8 | 0 | 1.2 | 0 | I know that many large-scale applications such as video games are created using multiple languages. For example, it's likely the game/physics engines are written in C++ while gameplay logic and the GUI are written in something like Python or Lua.
I understand why this division of roles is done; use lower-level languages for tasks that require extreme optimization, tweaking, efficiency and speed, while using higher-level languages to speed up production time, reduce nasty bugs, etc.
Recently, I've decided to undertake a larger personal project and would like to divvy up parts of the project similarly to the above. At this point in time, I'm really confused about how this interoperability between languages (especially compiled vs. interpreted) works.
I'm quite familiar with the details of going from ASCII source text to a loaded executable when working in something like C/C++. I'm very curious about how something like a video game, built from many different languages, works. This is a large/broad question, but specifically I'm interested in:
How does the code-level logic work? I.e. how can I call Python code from a C++ program? Especially since they don't support the same built-in types?
What does the program image look like? From what I can tell, a video game is running in a single process, so what does the runtime image look like when running a C/C++ program that calls a Python function?
When calling code from an interpreted language from a compiled program, what is the sequence of events that occurs? I.e. if I'm inside my compiled executable and for some reason have a call to an interpreted language inside a loop, do I have to wait for the interpreter every iteration?
I'm actually having a hard time finding information on what happens at the machine level, so any help would be appreciated. Although I'm curious in general about the interoperation of software, I'm specifically interested in C++ and Python interaction.
Thank you very much for any insight, even if it's just pointing me to where I can find more information. | 0 | c++,python,game-engine,language-interoperability | 2013-07-06T20:49:00.000 | 0 | 17,507,004 | In the specific case of python, you have basically three options (and this generally applies across the board):
Host Python in C++: from the perspective of the C++ program, the Python interpreter is a C library. On the Python side, you may or may not need to use something like ctypes to expose the C(++) API.
Python uses C++ code as DLLs/SOs - the C++ code likely knows nothing of Python; Python definitely has to use a foreign function interface.
Interprocess communication - basically, two separate processes run, and they talk over a socket. These days you'd likely use some kind of web services architecture to accomplish this. | 0 | 1,114 | true | 0 | 1 | How interoperability works | 17,507,117 |
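As a tiny illustration of option 2, Python can load a C++ shared library through ctypes (the library name and function are made up for the example):

import ctypes

# the C++ side would expose: extern "C" double apply_physics(double dt);
engine = ctypes.CDLL('./libengine.so')
engine.apply_physics.argtypes = [ctypes.c_double]
engine.apply_physics.restype = ctypes.c_double
print(engine.apply_physics(0.016))     # the interpreter calls straight into compiled code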
1 | 1 | 0 | 1 | 1 | 1 | 0.197375 | 0 | I wonder how I can create a PyObject in C++ and then return it to Python.
Sadly the documentation is not very explicit about it.
There is no PyObject_Create so I wonder whether allocating sizeof(PyObject) via PyObject_Malloc and initializing the struct is sufficient.
For now I only need an object with functions attached. | 0 | python,python-3.x,pyobject | 2013-07-07T10:26:00.000 | 0 | 17,511,310 | Do you really want a (1) PyObject, as in what Python calls object, or (2) an object of some subtype? That you "need an object with functions attached" seems to indicate you want either methods or attributes. That needs (2) in any case. I'm no expert on the C API, but generally you'd define your own PyTypeObject, then create an instance of that via PyObject_New (refcount and type field are initialized, other fields you might add are not). | 0 | 360 | false | 0 | 1 | Create a PyObject with attached functions and return to Python | 17,511,486 |
1 | 2 | 0 | 1 | 0 | 1 | 1.2 | 0 | I have the source code to a python package that is typically installed using pip or easy_install. How do I locally install the code after I've made changes? I'd like to be able to run commands on the terminal as if I've installed it with pip, and then reinstall/have it detect code changes to try it again. | 0 | python | 2013-07-08T03:00:00.000 | 1 | 17,518,614 | You can use pip install -e <path-to-package> to install a package in editable mode. Then, you can make changes to the source code and not have to install it again.
This is best done, as always, in a virtualenv, so it is isolated from the rest of your system. | 0 | 78 | true | 0 | 1 | Testing Python Packages | 17,518,734 |
1 | 2 | 0 | 5 | 8 | 1 | 1.2 | 0 | As a biology undergrad I'm often writing Python software in order to do some data analysis. The general structure is always:
There is some data to load, perform analysis on (statistics, clustering...) and then visualize the results.
Sometimes, for the same experiment, the data can come in different formats; there can be different ways to analyse them, and different visualizations are possible which may or may not depend on the analysis performed.
I'm struggling to find a generic, "Pythonic" and object-oriented way to make it clear and easily extensible. It should be easy to add a new type of action or to do slight variations of existing ones, so I'm quite convinced that I should do that with OOP.
I've already made a Data object with methods to load the experimental data. I plan to create subclasses if I have multiple data sources, in order to override the load function.
After that... I'm not sure. Should I make an abstract Analysis class with a child class for each type of analysis (and use their attributes to store the results) and do the same for Visualization, with a general Experiment object holding the Data instance and the multiple Analysis and Visualization instances? Or should the visualizations be functions that take an Analysis and/or Data object(s) as parameter(s) in order to construct the plots? Is there a more efficient way? Am I missing something? | 0 | python,oop,scientific-computing | 2013-07-08T08:53:00.000 | 0 | 17,522,492 | Your general idea would work; here are some more details that will hopefully help you proceed:
Create an abstract Data class, with some generic methods like load, save, print etc.
Create concrete subclasses for each specific form of data you are interested in. This might be task-specific (e.g. data for natural language processing) or form-specific (data given as a matrix, where each row corresponds to a different observation)
As you said, create an abstract Analysis class.
Create concrete subclasses for each form of analysis. Each concrete subclass should override a method process which accepts a specific form of Data and returns a new instance of Data with the results (if you think the form of the results would be different from that of the input data, use a different class Result)
Create a Visualization class hierarchy. Each concrete subclass should override a method visualize which accepts a specific instance of Data (or Result if you use a different class) and returns a graph of some form.
I do have a warning: Python is abstract, powerful and high-level enough that you don't generally need to create your own OO design -- it is always possible to do what you want with minimal code using numpy, scipy, and matplotlib, so before you start doing the extra coding, be sure you need it :) | 0 | 2,039 | true | 0 | 1 | Object-oriented scientific data processing, how to cleverly fit data, analysis and visualization in objects? | 17,598,977
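A bare-bones sketch of the hierarchy described in the answer (class names are illustrative):

class Data(object):
    def load(self, source):
        raise NotImplementedError

class MatrixData(Data):
    def load(self, source):
        pass    # e.g. each row of the file is one observation

class Analysis(object):
    def process(self, data):
        # accepts a Data instance, returns a new Data (or Result) instance
        raise NotImplementedError

class ClusteringAnalysis(Analysis):
    def process(self, data):
        pass    # run the clustering and wrap the output

class Visualization(object):
    def visualize(self, result):
        raise NotImplementedError    # concrete subclasses build the actual plot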
2 | 5 | 1 | 2 | 15 | 0 | 0.07983 | 0 | Could you please tell me what is the closest data type in C++ to python list? If there is nothing similar, how would you build it in C++? | 0 | c++,python,list | 2013-07-08T14:03:00.000 | 0 | 17,528,657 | There is no real equivalent, and it would be extremely difficult to provide one. Python and C++ are radically different languages, and providing one really wouldn't make much sense in the context of C++. The most important differences are that everything in Python is dynamically allocated and is an "object", and that Python uses duck typing.
FWIW: one very early library (before templates) in C++ did offer containers of Object*, with derived classes to box int, double, etc. Actual experience showed very quickly that it wasn't a good idea. (And I'm curious: does anyone else remember it? And particularly, exactly what it was called---something with NHS in it, but I can't remember more.) | 0 | 21,803 | false | 0 | 1 | Python list equivalent in C++? | 17,529,258
2 | 5 | 1 | 5 | 15 | 0 | 0.197375 | 0 | Could you please tell me what is the closest data type in C++ to python list? If there is nothing similar, how would you build it in C++? | 0 | c++,python,list | 2013-07-08T14:03:00.000 | 0 | 17,528,657 | Actually no C++ container is equivalent to Python's list, which is partially a result of the very different object models of C++ and Python. In particular, the suggested and upvoted std::list is IMHO not even close to Python's list type, and I'd rather suggest std::vector or maybe std::deque. That said, it isn't clear what exactly it is that you want, and how to "build it" strongly depends on what exactly "it" is, i.e. what you expect from the container.
I'd suggest you take a look at the C++ containers std::vector, std::deque and std::list to get an overview. Then look at things like Boost.Any and Boost.Variant that you can combine with them, maybe also one of the smart pointers and Boost.Optional. Finally, check out Boost.Container and Boost.Intrusive. If the unlikely case that none of these provide a suitable approximation, you need to provide a better explanation of what your actual goals are. | 0 | 21,803 | false | 0 | 1 | Python list equivalent in C++? | 17,529,154 |
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I'm trying to read the arguments passed to a Python script called with Ti.Process.createProcess.
When I run the following code:
import sys
sys.argv
I get the error:
File "", line 2, in <module>
AttributeError: 'module' object has no attribute 'argv'
It looks like the sys object doesn't have an argv attribute.
Am I doing something wrong? Any suggestions? | 0 | python,argv,tidesdk,sys | 2013-07-08T22:05:00.000 | 0 | 17,536,779 | Looks like you have another sys.py in your Python path. | 0 | 283 | false | 0 | 1 | TideSDK Python get arguments with sys.argv | 17,544,560
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I'm trying to read the arguments passed to a Python script called with Ti.Process.createProcess.
When I run the following code:
import sys
sys.argv
I get the error:
File "", line 2, in <module>
AttributeError: 'module' object has no attribute 'argv'
It looks like the sys object doesn't have an argv attribute.
Am I doing something wrong? any suggestions? | 0 | python,argv,tidesdk,sys | 2013-07-08T22:05:00.000 | 0 | 17,536,779 | I found the error.
I was testing the code using this code:
<script type="text/python" src="script.py"></script>
and
<script type="text/python">
import sys
print sys.argv
</script>
And I received the error:
File "", line 2, in <module>
AttributeError: 'module' object has no attribute 'argv'
But when I run:
var path = Ti.API.getApplication().getResourcesPath();
var p = Ti.Process.createProcess(['python', path + '\\search_client.py', param1, param2]); // the backslash must be escaped in a JS string literal
p.setOnReadLine(function(data){doStuff(data)});
p.launch();
I get the correct result.
So, in TideSDK, a python script has access to the sys.argv element only when it is executed as a "Process", but not when it's executed as a "<script>". | 0 | 283 | false | 0 | 1 | TideSDK Python get arguments with sys.argv | 17,558,670
1 | 3 | 0 | 34 | 26 | 0 | 1.2 | 0 | This is not really a technical question. However, I cannot locate my HTML report that is supposed to be generated using:
py.test --cov-report html pytest/01_smoke.py
I thought for sure it would place it in the parent location, or in the test script's location. It does neither, and I have not been able to locate it. So I am thinking it is not being generated at all? | 0 | python,reporting,pytest | 2013-07-09T20:41:00.000 | 0 | 17,557,813 | I think you also need to specify the directory/file you want coverage for, like py.test --cov=MYPKG --cov-report=html, after which a html/index.html is generated. | 0 | 41,617 | true | 1 | 1 | Py.Test : Reporting and HTML output | 17,595,687
2 | 3 | 0 | 1 | 4 | 1 | 0.066568 | 0 | I'm having trouble understanding just why I would want to use bitwise operators in a high-level language like Python. From what I have learned of high- vs. low-level languages, high-level ones are typically designed so that you don't have to worry too much about the machine code going into a computer. I don't see the point of manipulating a program bit by bit in a language that, to my knowledge, was designed to avoid it. | 0 | python,bit-manipulation,low-level,high-level | 2013-07-10T07:05:00.000 | 0 | 17,564,385 | Almost anything using transmission protocols will end up using bitwise operations at some point. A set of true/false flags will normally be packed into a single byte (up to 8 flags) to save bandwidth, and short of keeping a dictionary of 256 values to decode it (don't laugh, I have seen that done in more than one language, in one case with a very large number of if/else statements), a few bitwise operations are all that's needed. | 0 | 1,812 | false | 0 | 1 | What are the advantages to using bitwise operations over boolean operations in Python? | 17,564,913
2 | 3 | 0 | 4 | 4 | 1 | 0.26052 | 0 | I'm having trouble understanding just why I would want to use bitwise operators in a high-level language like Python. From what I have learned of high- vs. low-level languages, high-level ones are typically designed so that you don't have to worry too much about the machine code going into a computer. I don't see the point of manipulating a program bit by bit in a language that, to my knowledge, was designed to avoid it. | 0 | python,bit-manipulation,low-level,high-level | 2013-07-10T07:05:00.000 | 0 | 17,564,385 | There's definitely a use for bitwise operations in Python. Aside from or-ing flags, like mgilson mentions, I used them myself for composing packet headers for CAN messages. Very often, the headers of a lower-level message protocol are composed of fields whose lengths are not multiples of 8 bits, so you need bitwise operators if you want to change one field only.
Python being a higher-level language does not mean you cannot do low-level stuff with it! | 0 | 1,812 | false | 0 | 1 | What are the advantages to using bitwise operations over boolean operations in Python? | 17,564,596 |
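A small illustration of the flag and field packing described above (flag values are made up):

FLAG_READY, FLAG_ERROR, FLAG_DONE = 0x01, 0x02, 0x04

status = FLAG_READY | FLAG_DONE    # pack several booleans into one byte
if status & FLAG_DONE:             # test a single flag without unpacking the rest
    print('done')

header = 0xB6                      # pretend protocol header byte
field = (header >> 4) & 0x7        # extract a 3-bit field starting at bit 4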
2 | 11 | 0 | 5 | 71 | 1 | 0.090659 | 0 | I am using pytest. I have two files in a directory. In one of the files there is a long running test case that generates some output. In the other file there is a test case that reads that output. How can I ensure the proper execution order of the two test cases? Is there any alternative other than putting the test cases in the same file in the proper order? | 0 | python,pytest | 2013-07-10T13:06:00.000 | 0 | 17,571,438 | It's important to keep in mind, while trying to fix the pytest ordering "issue", that running tests in the order in which they are specified is pytest's default behavior.
It turns out that my tests were out of order because of one of these packages: pytest-dependency, pytest-depends, pytest-order. Once I uninstalled them all with pip uninstall package_name, the problem was gone. It looks like they have side effects. | 0 | 68,944 | false | 0 | 1 | Test case execution order in pytest | 66,893,962
2 | 11 | 0 | -7 | 71 | 1 | -1 | 0 | I am using pytest. I have two files in a directory. In one of the files there is a long running test case that generates some output. In the other file there is a test case that reads that output. How can I ensure the proper execution order of the two test cases? Is there any alternative other than putting the test cases in the same file in the proper order? | 0 | python,pytest | 2013-07-10T13:06:00.000 | 0 | 17,571,438 | Make sure you have installed the pytest-ordering package.
To confirm, go to PyCharm Settings >> Project Interpreter and look for pytest-ordering:
If it is not available, install it.
PyCharm Settings >> Project Interpreter >> click the + button, search for pytest-ordering and install it.
Voila!! It will definitely work. | 0 | 68,944 | false | 0 | 1 | Test case execution order in pytest | 55,065,859 |
1 | 1 | 0 | 2 | 1 | 0 | 0.379949 | 0 | I have a working conjugate gradient method implementation in PyCUDA that I want to optimize. It uses a self-written matrix-vector multiplication and the PyCUDA-native gpuarray.dot and gpuarray.mul_add functions.
Profiling the program with kernprof.py/line_profiler showed that most of the time (>60%) until convergence is spent in one gpuarray.dot() call (about 0.2 seconds).
All following calls of gpuarray.dot() take about 7 microseconds. All calls have the same type of input vectors (size: 400 doubles).
Is there any reason why? I mean in the end it's just a constant, but it is making the profiling difficult.
I wanted to ask the question on the PyCUDA mailing list; however, I wasn't able to subscribe with an @gmail.com address. If anyone has an explanation either for the strange .dot() behavior or for my inability to subscribe to that mailing list, please give me a hint ;) | 0 | python,cuda,pycuda,mailing-list | 2013-07-10T15:19:00.000 | 0 | 17,574,547 | One reason would be that PyCUDA compiles the kernel before uploading it. As far as I remember, though, that should happen only the very first time it executes.
One solution could be to "warm up" the kernel by executing it once and then start the profiling procedure. | 1 | 748 | false | 0 | 1 | pycuda.gpuarray.dot() very slow at first call | 17,581,063 |
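A sketch of the warm-up suggestion (assuming pycuda.gpuarray; the vector size matches the question):

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray

a = gpuarray.to_gpu(np.random.randn(400))
gpuarray.dot(a, a)     # first call pays the one-off compilation cost
# start timing/profiling here; subsequent dot() calls should take microseconds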
1 | 2 | 0 | 4 | 5 | 1 | 0.379949 | 0 | One of my clients is a large media organization that does a lot of Python development for its own in-house business process management. We need to weigh the pros and cons of switching the entire code base from Python 2.7 to Python 3, and also of doing any new development in Python 3.
My question is: How would you sell Python 3? What are some tangible benefits that we could get out of using it?
A quick Google search didn't turn up many concrete benefits, other than the occasional rather vague "it might speed up your code in some cases". Perhaps I'm not looking where I should be, so I would also appreciate pointers to resources where this is discussed. | 0 | python,python-3.x | 2013-07-11T12:49:00.000 | 0 | 17,593,840 | Python 3 is gaining popularity, but changing a code base is always a hassle.
Python 3 advantages:
the GIL has been improved a lot, so it locks up much less
built-ins such as map, filter and zip return lazy iterators instead of lists
Python 3 disadvantages:
some libraries have yet to be ported to Python 3
I like Python 3, but the fear of finding a cool Python 2-only library is what keeps my boss from daring to change to Python 3...
If you were starting from scratch it might make sense, as a long-term investment, to code in Python 3, but I think it is too early to switch, as Python 2 has many years of support left and will probably have better library support for the next 3 years as well. | 0 | 5,518 | false | 0 | 1 | What are the benefits / advantages of using Python 3? | 17,594,117
1 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | For a task I want to identify the type of a file, but the files have no extension. The files may be txt, jpeg, mp3, pdf etc. Using C, C++ or Python, how can I check whether it is a jpeg, pdf or mp3 file? | 0 | c++,python,c | 2013-07-11T15:44:00.000 | 0 | 17,597,842 | Some file formats, such as .exe, .jpg and .mp3, contain a header (the first few bytes of the file). You can inspect the header and infer the file type from that.
Of course, some files, such as raw text, depending on their encoding, may have no header at all. | 0 | 250 | false | 0 | 1 | how to identify the type of files having no extension? | 17,597,883 |
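A minimal Python sketch of such header sniffing (the signatures shown are the common ones; real detection should cover more variants):

def guess_type(path):
    with open(path, 'rb') as f:
        head = f.read(8)
    if head.startswith(b'\xff\xd8\xff'):    # JPEG magic number
        return 'jpeg'
    if head.startswith(b'%PDF'):            # PDF magic number
        return 'pdf'
    if head.startswith(b'ID3') or head[:2] == b'\xff\xfb':   # MP3: ID3 tag or frame sync
        return 'mp3'
    return 'unknown (possibly plain text)'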
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | Has anyone successfully installed TurboGears or CherryPy on BlueHost? There are listings on the web, but none of them are viable or the links to the scripts are broken.
However, Bluehost Tech support claims that some folks are running TurboGears successfully on their shared hosting.
If anyone has a working setup, or knows how to install TurboGears or CherryPy on Bluehost, it would be very much appreciated if they could share their know-how.
Alternatively, if anyone knows another Pythonic option that can be installed on Bluehost, you are welcome to share it with me.
Many thanks,
DK | 0 | python,apache,cherrypy,turbogears | 2013-07-12T09:50:00.000 | 0 | 17,612,117 | From what I can see from their website, bluehost supports using FastCGI.
In that case you can deploy your applications using flup.
flup.server.fcgi.WSGIServer lets you mount any WSGI application (like a TurboGears app) and serve it over FastCGI. | 0 | 72 | true | 0 | 1 | Turbogears on bluehost | 17,742,500
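A minimal sketch of the flup approach (assuming your TurboGears project exposes a WSGI callable named application; the module path is hypothetical):

from flup.server.fcgi import WSGIServer
from myproject.wsgi import application   # hypothetical module exposing the WSGI app

if __name__ == '__main__':
    WSGIServer(application).run()        # the web server talks to this over FastCGI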
2 | 3 | 0 | 5 | 3 | 1 | 0.321513 | 0 | I'm working by myself right now, but am looking at ways to scale my operation.
I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add /usr/local/lib/python2.7/site-packages/ (or whatever) to an svn repo? This doesn't solve the problems with PATHs, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out.
How have other people solved this? | 0 | python | 2013-07-12T19:53:00.000 | 1 | 17,622,992 | You want to use virtualenv. It lets you create an application-specific directory for installed packages. You can also use pip freeze to generate a requirements.txt and pip install -r requirements.txt to rebuild the same set of packages later. | 0 | 98 | false | 0 | 1 | Is there a way to "version" my python distribution? | 17,623,026
2 | 3 | 0 | 0 | 3 | 1 | 0 | 0 | I'm working by myself right now, but am looking at ways to scale my operation.
I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add /usr/local/lib/python2.7/site-packages/ (or whatever) to an svn repo? This doesn't solve the problems with PATHs, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out.
How have other people solved this? | 0 | python | 2013-07-12T19:53:00.000 | 1 | 17,622,992 | For the same goal, i.e. having the exact same Python distribution as my colleagues, I tried to create a virtual environment on a network drive, so that all of us would be able to use it without anybody making a local copy.
The idea was to share the same packages installed in a shared folder.
Outcome: Python ran so unbearably slowly that it could not be used. Installing a package was also very sluggish.
So it looks like there is no way around using virtualenv and a requirements file. (Even if, unfortunately, it does not always work smoothly on Windows and requires manual installation of some packages and dependencies, at least at the time of writing.) | 0 | 98 | false | 0 | 1 | Is there a way to "version" my python distribution? | 42,163,489
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 1 | I need to append metadata to each message when publishing to the queue. The question is: which method is more efficient?
Add custom fields to every message body
Add custom headers to every message
Just in case:
Publisher is on AWS m1.small
Message rate is less than 500 msgs/s
Rabbit library: pika (python) | 0 | python,rabbitmq,pika | 2013-07-12T20:22:00.000 | 0 | 17,623,431 | In terms of raw speed there is probably no single answer to your question, since efficient parsing methods are available to extract the metadata from your messages after they leave RabbitMQ.
But if you intend to use the metadata to filter your messages, it is more efficient to put it in headers, since you can do that filtering inside RabbitMQ by using a headers exchange. | 0 | 377 | true | 0 | 1 | What is more efficient: add fields to the message or create a custom header? RabbitMQ | 17,647,896
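A short pika sketch of attaching the metadata as headers (queue name and header values are illustrative):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
props = pika.BasicProperties(headers={'source': 'sensor-42', 'version': 1})
ch.basic_publish(exchange='', routing_key='events',
                 body='payload bytes', properties=props)   # message body stays untouched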
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | I was wondering if anyone knew whether it is possible to enable programmatic billing for Amazon AWS through the API. I have not found anything on this, and I even went broader and looked for billing preferences or account settings through the API, and still had no luck. I assume the API does not have this functionality, but I figured I would ask. | 0 | python,amazon-web-services,boto | 2013-07-13T05:51:00.000 | 0 | 17,627,389 | Currently, there is no API for doing this. You have to log into your billing preferences page and set it up there. I agree that an API would be a great feature to add. | 0 | 356 | true | 1 | 1 | Enable programmatic billing for Amazon AWS through API (python) | 17,630,560
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | Recently, I've been attempting to figure out how I can find out what an unlabeled POST is, and send to it using Python.
The heart of the matter is that I'm attempting to make a chat bot entirely in Python in order to increase my knowledge of the language. For said bot, I'm attempting to use a chat box that runs entirely on jQuery. The issue with this is that it has no discoverable POST or GET requests associated with the chat-box submissions.
How can I figure out what POST and GET requests are being sent when a message is submitted, and somehow use that to my advantage to send custom POST or GET requests from a chat bot?
Any help is appreciated, thanks. | 0 | python,post,get,chat | 2013-07-15T03:16:00.000 | 0 | 17,646,259 | You need a server in order to be able to receive any GET and POST requests. One of the easier ways to get that is to set up a Django project (ready in minutes) and then add custom views to handle the requests you want properly. | 0 | 60 | false | 0 | 1 | Python Send to and figure out POST | 17,646,320
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I am attempting to deploy a Flask app on Heroku and it always errors at the same place. GCC fails to install and compile the Bcrypt module, so I removed it from my requirements.txt (it is not used in the app). When I view the requirements.txt file, there is no mention of Bcrypt, but when I push to Heroku, it still tries to install it. I have committed the most recent version of requirements.txt to Git. Any help would be greatly appreciated. | 0 | python,flask | 2013-07-15T16:32:00.000 | 0 | 17,659,176 | Add cffi or cryptography to requirements.txt. That solved the problem in my case. | 0 | 773 | false | 1 | 1 | Heroku - Flask - Bcrypt error | 32,994,904
2 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | I am attempting to deploy a Flask app on Heroku and it always errors at the same place. GCC fails to install and compile the Bcrypt module, so I removed it from my requirements.txt (it is not used in the app). When I view the requirements.txt file, there is no mention of Bcrypt, but when I push to Heroku, it still tries to install it. I have committed the most recent version of requirements.txt to Git. Any help would be greatly appreciated. | 0 | python,flask | 2013-07-15T16:32:00.000 | 0 | 17,659,176 | I was able to get around it, kind of, by successfully installing the following: "Successfully installed z3c.bcrypt python-bcrypt py-bcrypt-w32". Installing one of these (likely the second one) probably pulled in the main bcrypt library that I guess needed to be compiled? I'm not 100% sure... I noticed this post is from July; I was able to download all those libraries using pip. | 0 | 773 | false | 1 | 1 | Heroku - Flask - Bcrypt error | 20,009,155
1 | 1 | 0 | 5 | 8 | 1 | 0.761594 | 0 | So, I've released a small library on pypi, more as an exercise (to "see how it's done") than anything else.
I've uploaded the documentation on readthedocs, and I have a test suite in my git repo.
Since I figure anyone who might be interested in running the test will probably just clone the repo, and the doc is already available online, I decided not to include the doc and test directories in the released package, and I was just wondering if that was the "right" thing to do.
I know answers to this question will be rather subjective, but I felt it was a good place to ask in order to get a sense of what the community considers to be the best practice. | 0 | python,packaging | 2013-07-15T23:01:00.000 | 0 | 17,665,330 | It is not required but recommended to include documentation as well as unit tests into the package.
Regarding documentation:
Old-fashioned, or better to say old-school, source releases of open source software contain documentation; this is a (de facto?) standard (have a look at GNU software, for example). Documentation is part of the code and should be part of the release, simply because once you download the source release you are independent. Ever been in the situation where you've been on a train somewhere and needed to have a quick look into the documentation of module X, but didn't have internet access? And then you realized with relief that the docs were already there, locally.
Another important point in this regard is that the documentation that you bundle together with the code for sure applies to the code version. Code and docs are in sync.
One more thing especially regarding Python: you can write your docs using Sphinx and then build beautiful HTML output based on the documentation source in the process of installing the package. I have seen various Python packages doing exactly this.
Regarding tests:
Imagine the tests are bundled in the source release and are easy for the user to run (you should document how to do this). Then, if the user observes a problem with your code which is not so easy to track down, he can simply run the unit tests in his environment and see if at least those are passing. If not, you've probably made a wrong assumption when specifying the behavior of your code, which is good to know about. What I want to say is: it can be very good for you as a developer if you make it very simple for the user to execute the unit tests. | 0 | 384 | false | 0 | 1 | Releasing a python package - should you include doc and tests? | 17,722,381
1 | 1 | 0 | 2 | 2 | 0 | 0.379949 | 0 | Is it possible, on a linux box, to import dstat and use it as an api to collect OS metrics and then compute stats on them?
I have downloaded the source and tried to collect some metrics, but the program seems to be optimized for command line usage.
Any suggestions as to how to get my desired functionality, either using Dstat or any other library? | 0 | python,linux,operating-system,profiling | 2013-07-16T04:11:00.000 | 1 | 17,667,871 | The Dstat source code includes a few sample programs using Dstat as a library. | 0 | 468 | false | 0 | 1 | DSTAT as a Python API ? | 17,770,525
1 | 1 | 0 | 5 | 2 | 0 | 1.2 | 1 | I'm trying to get Python to send the EOF signal (Ctrl+D) via Popen(). Unfortunately, I can't find any kind of reference for Popen() signals on *nix-like systems. Does anyone here know how to send an EOF signal like this? Also, is there any reference of acceptable signals to be sent? | 0 | python-2.7,posix,popen,eof | 2013-07-16T14:00:00.000 | 1 | 17,678,620 | EOF isn't really a signal that you can raise, it's a per-channel exceptional condition. (Pressing Ctrl+D to signal end of interactive input is actually a function of the terminal driver. When you press this key combination at the beginning of a new line, the terminal driver tells the OS kernel that there's no further input available on the input stream.)
Generally, the correct way to signal EOF on a pipe is to close the write channel. Assuming that you created the Popen object with stdin=PIPE, it looks like you should be able to do this. | 0 | 3,348 | true | 0 | 1 | Trying to send an EOF signal (Ctrl+D) signal using Python via Popen() | 17,712,430 |
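A short sketch of delivering EOF by closing the child's stdin (the child command is illustrative):

from subprocess import Popen, PIPE

proc = Popen(['cat'], stdin=PIPE, stdout=PIPE)
proc.stdin.write(b'hello\n')
proc.stdin.close()            # the child now sees EOF on its stdin
print(proc.stdout.read())
proc.wait()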
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 1 | I need to create an application that will do the following:
Accept request via messaging system ( Done )
Process request and determine what script and what type of instance is required for the job ( Done )
Launch an EC2 instance
Upload custom script's (probably from github or may be S3 bucket)
Launch a script with a given argument.
The question is: what is the most efficient way to do steps 3, 4 and 5? Don't get me wrong; right now I'm doing the same thing with a script that does all of this:
launches an instance,
uses user_data to download the necessary dependencies,
then SSHes into the instance and launches a script.
My question really is: is that the only option for handling this type of work, or maybe there is an easier way to do this?
I was looking at OpsWorks, and I'm not sure if this is the right thing for me. I know I can do steps 3 and 4 with it, but how about the rest?
Launch a script with a given argument
Trigger OpsWorks to launch an instance when a request comes in
By the way, I'm using Python and boto to communicate with AWS services. | 0 | python,amazon,chef-infra,boto,aws-opsworks | 2013-07-16T21:04:00.000 | 0 | 17,686,939 | You can use knife bootstrap; this can be one way to do it. You can use the AWS SDK to do most of it:
Launch an instance
Add a public IP (if it's not in a VPC)
Wait for the instance to come back online
Use knife bootstrap to supply your script, set up chef-client, and update the system
Then use a Chef cookbook to set up your machine | 0 | 218 | false | 1 | 1 | Do I need to SSH into EC2 instance in order to start custom script with arguments, or there are some service that I don't know | 17,691,484
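For the launch step itself, a minimal boto (2.x) sketch could look like this (AMI ID, key name and instance type are placeholders):

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
reservation = conn.run_instances('ami-12345678',            # hypothetical AMI
                                 key_name='mykey',
                                 instance_type='t1.micro',
                                 user_data='#!/bin/bash\napt-get update -y')
instance = reservation.instances[0]    # poll instance.update() until it is 'running'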
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I'm trying to:
Log into a server using SSH (with Paramiko)
Use that connection like a proxy and route network traffic through it and out to the internet, so that I could set it as my proxy in urllib2, Mechanize, Firefox, etc.
Is the second part possible or will I have to have some sort of proxy server running on the server to get this to work? | 0 | python,ssh,proxy,paramiko,tunnel | 2013-07-17T01:51:00.000 | 0 | 17,689,822 | You could implement a SOCKS proxy in the paramiko client that routes connections across the SSH tunnel via paramiko's open_channel method. Unfortunately, I don't know of any out-of-the-box solution that does this, so you'd have to roll your own. Alternatively, run a SOCKS server on the server, and just forward that single port via paramiko. | 0 | 1,036 | false | 0 | 1 | Python Proxy Through SSH | 17,690,326 |
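For reference, the paramiko primitive the answer refers to, opening a single forwarded TCP connection over the SSH transport (hosts and credentials are illustrative):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('gateway.example.com', username='user', password='secret')
transport = client.get_transport()
# tunnel one connection to example.org:80 through the SSH server
chan = transport.open_channel('direct-tcpip', ('example.org', 80), ('127.0.0.1', 0))
chan.send(b'GET / HTTP/1.0\r\nHost: example.org\r\n\r\n')
# a SOCKS proxy would accept local connections and open such a channel per request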
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | The problem that I have right now is that I require a specific version of python in order for the source code that I have to work. To make this source code more accessible to everyone, I don't want people to have to go through the hassle to downloading the right python version. Instead is there a way to incorporate the right python version right into my program or any way to localize Python? | 0 | python,python-2.7 | 2013-07-17T16:22:00.000 | 0 | 17,705,204 | Not sure how this would work out, but the only thing I can think of is creating a virtual environment with the required python version, and then sharing that with people. Not the ideal solution, and I'm sure others can suggest something better. | 0 | 53 | false | 0 | 1 | Is there anyway to make a localized python version? | 17,705,340 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | Is it bad practice to execute a script with execfile('XX.py') rather than import XX as a module? The reason I'm interested is that executing the file puts the functions directly into __main__, and then globals are available without needing to explicitly pass them. But I'm not sure if this creates trouble... Thanks! | 0 | python,module | 2013-07-17T18:00:00.000 | 0 | 17,706,953 | Yes, it's bad practice, for the very reason that everything ends up in __main__. If you have two modules which have any variable with the same name, one will overwrite the other. | 0 | 37 | false | 0 | 1 | Importing Modules vs. Executing Scripts for Global Variables | 17,706,983
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I would like to have a class written in C++ that acts as a remote procedure call server.
I have a large (over a gigabyte) file that I parse, reading in parameters to instantiate objects which I then store in a std::map. I would like the RPC server to listen for calls from a client, take the parameters passed from the client, look up an appropriate value in the map, do a calculation, and return the calculated value back to the client, and I want it to serve concurrent requests -- so I'd like to have multiple threads listening. BTW, after the map is populated, it does not change. The requests will only read from it.
I'd like to write the client in Python. Could the server just be an HTTP server that listens for POST requests, and the client can use urllib to send them?
I'm new to C++ so I have no idea how to write the server. Can anyone point me to some examples? | 0 | c++,python,map,concurrency,rpc | 2013-07-17T21:53:00.000 | 0 | 17,710,943 | Many factors can affect the choice here.
One solution is to use FastCGI:
The client sends an HTTP request to an HTTP server that has FastCGI enabled.
The HTTP server dispatches the request to your RPC server via the FastCGI mechanism.
The RPC server processes the request, generates a response, and sends it back to the HTTP server.
The HTTP server sends the response back to your client. | 0 | 674 | false | 0 | 1 | Concurrent Remote Procedure Calls to C++ Object | 17,711,039
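On the Python client side, a plain POST with urllib2 would look roughly like this (URL and payload are made up):

import urllib2

payload = 'key=42&op=lookup'
response = urllib2.urlopen('http://rpc.example.com/lookup', payload)  # passing data makes it a POST
print(response.read())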
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 1 | Currently I am using a web application (LAMP stack) with a REST API to communicate with clients (Python desktop applications).
Clients can register with the server and send state to the server through the REST API.
Now I need to push notifications to selected clients from the web application (server).
My question is: how can I send push notifications from the server (PHP) and read them from the clients (Python)? | 0 | php,python,rest | 2013-07-18T09:16:00.000 | 0 | 17,719,300 | So basically you can poll your server from the client at some interval (interval ~ 0 == realtime) and ask if it has some news.
Normally Apache can't deal with long-waiting connections because of its thread/fork request-processing model.
You can try switching to nginx, because it uses socket multiplexing (select/epoll/kqueue), so it can deal with many concurrent long-waiting connections.
Or you can think about node.js and replace your PHP app with it; node.js is made exactly for these purposes.
A nice solution is also some web framework/language + Redis pub/sub functionality + node.js. You make normal requests to your web application, but also keep an open connection to the node.js server, and the node.js server notifies your client when needed. If you want to tell node.js to inform some clients, you can do it from your web app through Redis pub/sub. | 0 | 1,317 | false | 1 | 1 | Push Notifications server to selected clients | 17,719,764
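A bare-bones polling client on the Python side (URL and interval are illustrative):

import time
import urllib2

while True:
    news = urllib2.urlopen('http://server.example.com/api/notifications').read()
    if news:
        print(news)       # handle the pushed payload
    time.sleep(2)         # a smaller interval approaches realtime at the cost of server load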
1 | 3 | 0 | 0 | 3 | 1 | 0 | 0 | I created an executable of my Python software via py2exe, which creates two new directories and multiple files in them. I also created a new Python file for doing this, called setup.py.
Whenever I open up Git GUI it shows that the only uncommitted changes are in my .idea\workspace.xml file (this comes up with every commit) and setup.py. The other directories and files that I created do not show up. I've triple-checked that the files are in the correct directory (../Documents/GitHub/..). Has anyone seen this happen before, or does anyone know of a solution?
EDIT: When trying to add the files, I get the error:
fatal: 'C:\Users\me\Documents\GitHub\Project\SubDir\build' is outside repository
EDIT: Fixed the problem. I wasn't able to add the directories on Friday, but today it let me, for whatever reason. | 0 | python,git,github,git-commit | 2013-06-06T15:26:00.000 | 0 | 17,754,093 | I'm going to go out on a limb here and say that if the new files aren't showing up in git, then they are in fact not in the right directory. Make sure the directory your files are being created in has a .git directory.
If that is already the case, you want to look at the output of git status on your local repo to see the current status of things.
If the files are showing up in the results of git status but still not in your bizarre GUI tool, try a git add . on your repository directory.
If that still doesn't work then you need to sit down and question why you're using a GUI for git in the first place. | 0 | 148 | false | 0 | 1 | Git not commiting files | 17,754,166 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I would like to draw a great circle arc on maps.
The function drawgreatcircle() in Basemap is useful as long as you know the latitude and longitude of both the starting point and the destination point on the map.
My problem is that I have the starting point and a bearing in degrees referenced to the North pole.
Applications: radio direction finding by triangulation. | 0 | python,matplotlib,matplotlib-basemap | 2013-07-21T12:43:00.000 | 0 | 17,772,477 | I think there's no equivalent function for your needs. You could, however, derive the lon/lat values of the end point mathematically (knowing the north-referenced bearing) and pass these to the drawgreatcircle() function. | 0 | 1,784 | false | 0 | 1 | drawing great circle arc with Basemap knowing starting point and a bearing | 18,180,819
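A possible sketch of that spherical "direct" computation (inputs in degrees, distance in km; these are the standard great-circle formulas):

import math

def destination(lat1, lon1, bearing, distance_km, R=6371.0):
    lat1, lon1, b = map(math.radians, (lat1, lon1, bearing))
    d = distance_km / R    # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(b))
    lon2 = lon1 + math.atan2(math.sin(b) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# then feed the endpoint into m.drawgreatcircle(lon1, lat1, lon2, lat2)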
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | During our build, we call Ant from our Python. For code checkout, we have two options to check out code from Git:
Pull the code directly from the Python script
Make an Ant target to pull the code, and call that target from Python.
Can anyone please outline the pros and cons of both approaches? I am new to all three technologies.
Thanks | 0 | python,git,ant,build | 2013-07-22T10:16:00.000 | 0 | 17,785,000 | I would do it with Ant, the standard tool for the Java platform; you could even generate the Ant build from Python or another tool. The drawback of this approach is that Ant is not as flexible as Python.
The drawback of a Python-based solution is greater deployment complexity in the future: on Linux good practice is to build a package, while on Windows you have to install Python and the libraries manually. | 0 | 106 | false | 1 | 1 | Git check-out from ant target or python | 17,787,947
2 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | I'm a designer who writes mostly Sass, Less (CSS pre-processors), HTML, Javascript and usually starts off with static site generators such as Jekyll, Yeoman, etc. While working with developers who code in Python, Ruby, Clojure, I help with the templates. In my free time, I design wordpress themes and write plugins in PHP. I run grunt regularly and bower helps me with components that I need for my designs.
This means my system is littered with Ruby Gems, Python libraries, Node Modules. They are either installed via gem installations, pip, brew or npm. Now you realize that my system is a mess even though it works. I really want to do stuffs in a sane manner, the right way.
So, what are the best practices for installation and management of all the libraries, core tools, etc. for a developer on Mac OS X. Point me to resources that I can read, ponder and practice.
Here is the scenario. You're a seasoned developer and I'm your friend who just got a new Mac OS X system. I'm a designer who will work with Python (mostly with Django), Ruby (with Rails), Clojure, PHP, Sass, Less, Compass, CoffeeScript, Git, NodeJS, Grunt, Bower, Jekyll, Yeoman and alike. As a friend, you know that I'm not a 'programmer' but a developer-friendly 'designer'. How can you help me setup my Mac? And I don't want to come back again when I get a new Mac in future, I should be able to just transition smoothly from my old setup.
Thanking you in anticipation. | 0 | python,ruby,macos | 2013-07-23T09:09:00.000 | 0 | 17,805,952 | If all that you are worried about is quickly setting up a new machine, use backup software to set up the new machine. You can also try a custom Time Machine setup with just the folders that you are interested in. | 0 | 673 | false | 1 | 1 | Best practices for management of a developer system on Mac OS X for non-developers | 17,810,311
2 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | I'm a designer who writes mostly Sass, Less (CSS pre-processors), HTML, Javascript and usually starts off with static site generators such as Jekyll, Yeoman, etc. While working with developers who code in Python, Ruby, Clojure, I help with the templates. In my free time, I design wordpress themes and write plugins in PHP. I run grunt regularly and bower helps me with components that I need for my designs.
This means my system is littered with Ruby Gems, Python libraries, Node Modules. They are either installed via gem installations, pip, brew or npm. Now you realize that my system is a mess even though it works. I really want to do stuffs in a sane manner, the right way.
So, what are the best practices for installation and management of all the libraries, core tools, etc. for a developer on Mac OS X. Point me to resources that I can read, ponder and practice.
Here is the scenario. You're a seasoned developer and I'm your friend who just got a new Mac OS X system. I'm a designer who will work with Python (mostly with Django), Ruby (with Rails), Clojure, PHP, Sass, Less, Compass, CoffeeScript, Git, NodeJS, Grunt, Bower, Jekyll, Yeoman and alike. As a friend, you know that I'm not a 'programmer' but a developer-friendly 'designer'. How can you help me setup my Mac? And I don't want to come back again when I get a new Mac in future, I should be able to just transition smoothly from my old setup.
Thanking you in anticipation. | 0 | python,ruby,macos | 2013-07-23T09:09:00.000 | 0 | 17,805,952 | I am not sure what you meant by "How can you help me setup my Mac?". It seems that you are quite comfortable installing all the dependencies (gems and all) for your projects. If you want to automate all this environment setup, you can go ahead and write a generic shell script that installs Ruby, Python and the other tools, and reuse it whenever you have a new machine :) It has nothing to do with Mac OS X or any other OS; you just need to put the correct packages/versions to fetch and install/compile in the script.
It would be great if you could post a specific question here in case you face a technical problem installing any of the above packages. | 0 | 673 | false | 1 | 1 | Best practices for management of a developer system on Mac OS X for non-developers | 17,808,568
2 | 2 | 0 | 3 | 0 | 1 | 1.2 | 0 | I'm planning to use python to build a couple of programs to be used as services, run from PHP code later on.
In terms of performance, which is faster, to compile the python code into a binary file using cx_freeze or to run the python interpreter each time I run the program?
Deployment environment:
OS: Arch Linux ARM
Hardware: Raspberry Pi [700MHz ARMv6 CPU, 256MB RAM, SD Card filesystem]
Python Interpreter: Python2.7
App calls frequency: High | 0 | php,python | 2013-07-23T19:34:00.000 | 0 | 17,819,464 | You need to test it, because there's no single right answer. All cx_freeze does is wrap up the bytecode into an executable, vs. the interpreter reading from its cached .pyc on disk.
In theory the packaged executable could be quicker because it's reading fewer files, but on the other hand the interpreter could be quicker because it could already be in the disk cache.
There's likely to be little to choose, and whatever the difference is, it's not down to "compiled" vs. "interpreted". | 0 | 139 | true | 0 | 1 | Compiled binaries VS interpreted code in python | 17,819,545 |
2 | 2 | 0 | 3 | 0 | 1 | 0.291313 | 0 | I'm planning to use python to build a couple of programs to be used as services, run from PHP code later on.
In terms of performance, which is faster, to compile the python code into a binary file using cx_freeze or to run the python interpreter each time I run the program?
Deployment environment:
OS: Arch Linux ARM
Hardware: Raspberry Pi [700MHz ARMv6 CPU, 256MB RAM, SD Card filesystem]
Python Interpreter: Python2.7
App calls frequency: High | 0 | php,python | 2013-07-23T19:34:00.000 | 0 | 17,819,464 | cx_freeze (and the various competitors for "compiling" Python code to executables) don't actually compile anything. They're just a convenient way of packaging the app in such a way that it can be run directly. In other words, there's no performance difference.
Depending on what you need to do in your Python script, you could consider using PyPy to improve your performance. | 0 | 139 | false | 0 | 1 | Compiled binaries VS interpreted code in python | 17,819,536
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I'm using ubuntu 12.04 and I wanted to enable ubuntu one at first. So I ran ubuntuone-control-panel-qt. Then it said ImportError: No module named pkg_resources. The solutions I found on Internet said it could be fixed by reinstalling the python distribute package. So I used curl -O http://python-distribute.org/distribute_setup.yp and then python distribute_setup.pyBut an annoying message appeared, saying that IOError: CRC check failed 0x77057d99 != 0xec0a9eeLI was almost driven crazy. How can I solve the problem? | 0 | python,ubuntu,crc | 2013-07-25T13:33:00.000 | 0 | 17,859,553 | The downloaded file is most likely corrupted, try re-downloading it again. (Also note the distribute_setup.yp extension typo). | 0 | 785 | true | 0 | 1 | Manually install a python distribute package occuring IOError: CRC check failed | 17,859,633 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I am running a simulation and transmitting data through XML-RPC to a remote client. I'm using a thread to run the XML-RPC part.
But for some reason, the program runs really slowly until I make a request from any of the clients that connect. After the very first request, the program runs fine.
I have a class that inherits from Threading, and that I use in order to start the XML-RPC stuff
I cannot really show you the code, but do you have any suggestions as to why this is happening?
Thanks, and I hope my question is clear enough | 0 | python,multithreading,performance,request,xml-rpc | 2013-07-25T22:15:00.000 | 0 | 17,869,727 | In Python, due to the GIL, threads doesn't really execute in parallel. If the RPC part is waiting in an active way (loop poling for connection instead of waiting), you most probably will have the behavior you are describing. However, without seeing any code, this is just wild guess. | 0 | 171 | false | 0 | 1 | XML-RPC Python Slow before first request | 17,869,783 |
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I am currently using Hudson for continuous integration with python and sonar plugin for code analysis. Since I prefer pyflakes to pylint, in the build option, I've written a pyflakes command to generate a txt file. In the report violations section, I've redirected the pylint option to this txt (in the XML filename pattern). So the Hudson status is successfully showing the correct number of pyflakes-based violations in its report. But sonar is conducting its own analysis through pylint and showing pylint-based analysis. How do I redirect the pyflakes txt file to Sonar so that it doesn't use pylint and instead just analyse whatever has been mentioned in the pyflakes txt file? Which configurations or files would I have to tweak to make it possible? | 0 | python,hudson,sonarqube,pylint | 2013-07-26T09:01:00.000 | 0 | 17,877,139 | There is no "reuse report" feature on the SonarQube python plugin so for now you can't prevent SonarQube to start a new pylint analysis.
I suggest you ask for the creation of a JIRA feature request on the SonarQube user mailing list.
In the meantime you can try to use the sonar.python.pylint parameter to make SonarQube run pyflakes instead of pylint, as it seems the output reports are compatible (at least for Hudson). But I can't guarantee it will work. | 0 | 740 | true | 0 | 1 | How to configure Hudson with sonar plugin for python so that the sonar report shows pyflakes-based analysis instead of pylint-based analysis? | 18,736,027
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I have a python script "cost_and_lead_time.py" in C:\Desktop\A. This script imports 3 other scripts and a "cost_model.json" file, all of which are in folder A.
Now, I have a simulation result in, say, C:\Desktop\Model\Results. I have a one-line batch file in this folder containing "call C:\Desktop\A\cost_and_lead_time.py", but it returns an error when it tries to open the cost_model.json file. It doesn't appear to have an issue importing the 3 other scripts, as those are imported before the json is opened.
My question is, is there any way to keep this cost_model.json file in that directory and run the script through the batch file without copy/pasting the json file into the results folder? The only way I can think of is to hard code the full path of the file in the python script, but that isn't ideal for me. I'm looking for code to add to the batch file, not python script.
Thanks | 0 | python,batch-file | 2013-07-26T17:10:00.000 | 0 | 17,887,193 | Ultimately there needs to be some way to tell your script what you are needing to process. There's a variety of ways for that, but it really just needs to happen. I think the most obvious thing to do in the batch file is to copy the target file in place before running your Python script.
However, a cleaner solution might be to pass a command-line argument to the Python script (via sys.argv) which tells the script which file it needs to process. | 0 | 1,303 | false | 0 | 1 | bat file running py script in different directory | 17,887,277
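If editing the Python script turns out to be acceptable after all, the usual fix is to resolve the data file relative to the script's own location rather than the working directory. A minimal sketch reusing the filenames from the question:

    import json
    import os

    # __file__ is the script's own path, so this works no matter which
    # directory the batch file is run from
    script_dir = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(script_dir, 'cost_model.json')) as f:
        cost_model = json.load(f)

A batch-only alternative is to pushd into C:\Desktop\A before the call and popd afterwards, which makes the script's relative open() succeed.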
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm developing a task where I need to have a few pieces of information specific to the environment.
I set up the ~/.fabricrc file but when I run the task via the command line, the data is not in the env variable
I don't really want to add the -c config to simplify the deployment.
in the task, I'm calling
env.cb_account
and I have in ~/.fabricrc
cb_account=foobar
it throws AttributeError
Has anybody else run into this problem?
I found the information when I view env outside of my function/task. So now the question is how do I get that information into my task? I already have 6 parameters so I don't think it would be wise to add more especially when those parameters wouldn't change. | 0 | python,fabric | 2013-07-26T18:15:00.000 | 1 | 17,888,244 | Overrode the "env" variable via parameter in the function. Dumb mistake. | 0 | 675 | false | 0 | 1 | Python Fabric config file (~/.fabricrc) is not used | 17,929,568 |
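For anyone hitting the same AttributeError: the "dumb mistake" above is easy to reproduce — a task parameter named env shadows Fabric's global env. A guessed illustration (cb_account is the key from the question):

    from fabric.api import env, task

    @task
    def deploy(env):             # BUG: parameter shadows fabric.api.env,
        print(env.cb_account)    # so the ~/.fabricrc values are unreachable

    @task
    def deploy_fixed(stage):     # rename the parameter...
        print(env.cb_account)    # ...and the fabricrc-loaded global works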
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a Flask application that I would like to operate differently in production, unit testing, functional testing, and performance testing. Flask's single debug option doesn't cover what I want to do, so I was wondering if there is any way to pass parameters to Flask's __init__.py.
I have several different scripts which build my app and create my data structures.
I know I can do this using environment variables but I was hoping for a better solution. | 0 | python-2.7,configuration,flask | 2013-07-27T00:39:00.000 | 0 | 17,893,099 | The solution hybrid solution between my inital plan and Seans suggestion. I use multiple config files and set a environment variable before each kind of app instance. This means that you need to use
from os import environ
environ["APP_SETTINGS"] = "config.py"
before every import of the app. The best approach to this problem is to use Flask-Script, as Sean suggests, and to have a python manage.py <request> command, where <request> could range from
run_unit_tests to run_server,
and have that manage script set the environment variable (as well as build the database, set up a profiler, or anything else you need). | 0 | 236 | true | 0 | 1 | Passing parameters to flask __init__.py | 17,984,932
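A sketch of the manage.py described above, assuming Flask-Script and a hypothetical myapp package — the key point is that APP_SETTINGS must be set before the app module is imported:

    # manage.py
    from os import environ
    environ["APP_SETTINGS"] = "config.py"    # before the app import

    from flask.ext.script import Manager    # Flask-Script (2013-era import)
    from myapp import app                   # hypothetical package

    manager = Manager(app)

    @manager.command
    def run_unit_tests():
        import unittest
        unittest.main(module="tests")       # hypothetical test module

    if __name__ == "__main__":
        manager.run()    # e.g. python manage.py runserver

Flask-Script provides runserver out of the box, and each @manager.command becomes an additional python manage.py <request> entry point.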
3 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 0 | I heard that this was possible using the new modules feature of Google App Engine, but this will require two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and web application.
Is there any way to achieve this? | 0 | php,python,google-app-engine,runtime | 2013-07-28T15:15:00.000 | 1 | 17,909,688 | Quite simply, no. You'll have to use separate modules, or pick one language and use it for both of the things you describe. | 0 | 431 | false | 1 | 1 | Run both php and python at the same time on google app engine | 17,911,235 |
3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I heard that this was possible using the new modules feature of Google App Engine, but this will require two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and web application.
Is there any way to achieve this? | 0 | php,python,google-app-engine,runtime | 2013-07-28T15:15:00.000 | 1 | 17,909,688 | Segregate your applications in different modules and communicate between the two using the GAE Data Store or Memcache.
Your applications can signal each other using a GET request with the name of the Memcache key or the url safe data store key. | 0 | 431 | false | 1 | 1 | Run both php and python at the same time on google app engine | 17,911,325 |
3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I heard that this was possible using the new modules feature of Google App Engine, but this will require two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and web application.
Is there any way to achieve this? | 0 | php,python,google-app-engine,runtime | 2013-07-28T15:15:00.000 | 1 | 17,909,688 | You can achieve the proxy pattern by simply making http requests from one module to the other, using the URLFetch service. | 0 | 431 | false | 1 | 1 | Run both php and python at the same time on google app engine | 17,914,860 |
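To make the URLFetch suggestion concrete, a sketch of the Python module calling the PHP module over HTTP — the hostname, path, and payload are hypothetical:

    from google.appengine.api import urlfetch

    results_blob = '{"scores": [1, 2, 3]}'   # hypothetical Python-side output

    # hand the raw results to the PHP module for parsing
    result = urlfetch.fetch(
        url='http://php-module.your-app.appspot.com/parse',  # hypothetical
        payload=results_blob,
        method=urlfetch.POST)
    if result.status_code == 200:
        parsed = result.content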
1 | 4 | 0 | 1 | 1 | 0 | 0.049958 | 1 | I'm trying to make a simple script in python that will scan a tweet for a link and then visit that link.
I'm having trouble determining which direction to go from here. From what I've researched, it seems that I can use Selenium or Mechanize, which can be used for browser automation. Would using these be considered web scraping?
Or
I can learn one of the Twitter APIs, the Requests library, and Pyjamas (converts Python code to JavaScript) so I can make a simple script and load it into Google Chrome's/Firefox's extensions.
Which would be the better option to take? | 0 | python,selenium-webdriver,browser-automation,pyjamas | 2013-07-30T01:39:00.000 | 0 | 17,937,010 | I am not expect in web scraping. But I had some experience with both Mechanize and Selenium. I think in your case, either Mechanize or Selenium will suit your needs well, but also spend some time look into these Python libraries Beautiful Soup, urllib and urlib2.
From my humble opinion, I will recommend you use Mechanize over Selenium in your case. Because, Selenium is not as light weighted compare to Mechanize. Selenium is used for emulating a real web browser, so you can actually perform 'click action'.
There are some draw back from Mechanize. You will find Mechanize give you a hard time when you try to click a type button input. Also Mechanize doesn't understand java-scripts, so many times I have to mimic what java-scripts are doing in my own python code.
Last advise, if you somehow decided to pick Selenium over Mechanize in future. Use a headless browser like PhantomJS, rather than Chrome or Firefox to reduce Selenium's computation time. Hope this helps and good luck. | 0 | 1,466 | false | 0 | 1 | Can anyone clarify some options for Python Web automation | 17,937,214 |
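To make the Mechanize recommendation concrete, a minimal sketch of the tweet-link flow — the page URL and the link filter are assumptions, not a working Twitter client:

    import mechanize

    br = mechanize.Browser()
    br.set_handle_robots(False)    # ignore robots.txt (be polite in real use)
    br.open('https://example.com/some-tweet-page')   # hypothetical page

    # follow the first shortened link found on the page
    for link in br.links():
        if 't.co' in link.url:     # hypothetical filter
            response = br.follow_link(link)
            print(response.geturl())
            break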
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I have been in this problem for long time and i want to know how its done in real / big companies project.
Suppose i have the project to build a website. now i divide the project into sub tasks and do it.
But u know that suppose i have task1 in hand like export the page to pdf. Now i spend 3 days to do that , came accross various problems , many stack overflow questions and in the end i solve it.
Now 4 months after someone told me that there is some error in the code.
Now by that i comepletely forgot about(60%) how i did it and why i do this way. I document the code but i can't write the whole story of that in the code.
Then i have to spend much time on code to find what was the problem so that i added this line etc.
I want to know that is there any way that i can log steps in completeing the project.
So that i can see how i end up with code , what erros i got , what questions i asked on So and etc.
How people do it in real time. Which software to use.
I know in our project management softaware called JIRA we have tasks but that does not cover what steps i took to solve that tasks.
what is the besy way so that when i look backt at my 2 year old project , i know how i solve particular task | 0 | python,coding-style,project-management,jira,issue-tracking | 2013-08-01T03:42:00.000 | 0 | 17,984,890 | Every time you revisit code, make a list of the information you are not finding. Then the next time you create code, make sure that information is present. It can be in comments, Wiki, bugs or even text notes in a separate file. Make the notes useful for other people, so private notebooks aren't a good idea except for personal notes. | 0 | 278 | false | 1 | 1 | What is the best way to track / record the current programming project u work on | 18,004,681 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | The short version
I want to, in python, subprocess.call(['php', '/path/somescript.php']), the first line of the php script is basically "echo 'Here!';". But the subprocess.call returns an error code of -11, and the php script does not get to execute its first line and echo anything to the output. This is all happening on an Ubuntu Server 12.04.2 and a Ubuntu Desktop 12.04.2.
Can anybody point me in the direction of what the -11 return code might mean? (Is it coming from Python, the system, or the php command?)
A couple of times, I've seen it run deep into the php script and then fail by printing "zend_mm_heap corrupted" and returning 1.
The more descriptive version of the question:
I have a python script that, after running some phpunit tests using subprocess.call(['phpunit', ...]), wants to run another php script to collect the code coverage data gathered while running the tests, by doing subprocess.call(['php', '/path/coverage_collector.php']).
For months, the script worked fine, but today, after adding a couple more files & tests, it started failing (not 100% of the time; it still works about 5-10% of the time).
When it fails, subprocess.call returns -11, and the first line of coverage_collector.php has not managed to echo its message to stdout. A couple of times it ran deeper into the php script, and failed with error code 1 and printed "zend_mm_heap corrupted".
I have a directory structure where each folder may contain subfolders, each folder gets its unit tests executed, and then coverage data is collected for that folder + its subfolders.
The script works fine on all the folders and their subfolders (executing all the tests & collecting all of the coverage), and used to work fine on the root-level folder too (and is currently working fine for a lot of smaller projects with the same exact structure and scripts) - until today, when it started failing after an innocent-enough code check-in that added some files and tests to one of the php projects using the script.
The weird thing is that it's failing in this weird spot - while trying to call a php command, without even getting to execute the first line of the php script, and this happens just seconds after the same php script has been executed for a number of other folders and worked fine.
I'm suspecting it might be due to the fact that the root-level script simply has more data to process - combining its own coverage with that of all of the subfolders (which might explain the Zend heap corruption, when that occurs) - but that still does not explain why the call fails with -11 the majority of the time, and does not even let the php script start working on collecting the coverage data.
Any ideas? | 0 | php,python,call,command-line-interface | 2013-08-01T19:25:00.000 | 1 | 18,002,824 | Seems to have been caused by too much coverage data exhausting PHP_CodeCoverage_Report_HTML. No idea why the php scripts output was suppressed making me believe the script never got running.
After asking for more memory using ini_set('memory_limit', '2048M'); in the start of the php script, the success rate went up dramatically (5/6 successful builds so far).
I guess I'll need to play around with memory management in php/zend to properly handle this. | 0 | 287 | false | 0 | 1 | What does this php return code (-11) mean (subprocess.call'ed from python)? | 18,003,558 |
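Two details worth capturing in the calling script: a negative subprocess.call return value is the number of the signal that killed the child (11 being SIGSEGV), and the memory limit can also be raised from the command line with PHP's -d flag instead of editing the script. A sketch:

    import subprocess

    rc = subprocess.call(['php', '-d', 'memory_limit=2048M',
                          '/path/coverage_collector.php'])
    if rc < 0:
        # the child was killed by signal -rc; -11 here meant SIGSEGV
        print('php died on signal %d' % -rc)
    elif rc != 0:
        print('php exited with code %d' % rc)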
2 | 3 | 0 | 2 | 11 | 0 | 0.132549 | 0 | I am developing a Python based application (HTTP -- REST or jsonrpc interface) that will be used in a production automated testing environment. This will connect to a Java client that runs all the test scripts. I.e., no need for human access (except for testing the app itself).
We hope to deploy this on Raspberry Pi's, so I want it to be relatively fast and have a small footprint. It probably won't get an enormous number of requests (at max load, maybe a few per second), but it should be able to run and remain stable over a long time period.
I've settled on Bottle as a framework due to its simplicity (one file). This was a tossup vs Flask. Anybody who thinks Flask might be better, let me know why.
I have been a bit unsure about the stability of Bottle's built-in HTTP server, so I'm evaluating these three options:
Use Bottle only -- As http server + App
Use Bottle on top of uwsgi -- Use uwsgi as the HTTP server
Use Bottle with nginx/uwsgi
Questions:
If I am not doing anything but Python/uwsgi, is there any reason to add nginx to the mix?
Would the uwsgi/bottle (or Flask) combination be considered production-ready?
Is it likely that I will gain anything by using a separate HTTP server from Bottle's built-in one? | 0 | python,nginx,uwsgi,bottle | 2013-08-01T22:53:00.000 | 1 | 18,006,014 | I also suggest you look at running bottle via gevent.pywsgi server. It's awesome, super simple to setup, asynchronous, and very fast.
Plus bottle has an adapter built for it already, so even easier.
I love bottle, and this concept that it is not meant for large projects is ridiculous. It's one of the most efficient and well written frameworks, and can be easily molded without a lot of hand wringing. | 0 | 7,565 | false | 1 | 1 | Python bottle vs uwsgi/bottle vs nginx/uwsgi/bottle | 49,163,067 |
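For reference, wiring Bottle to the gevent adapter mentioned above takes little more than the server='gevent' argument — a minimal sketch with a hypothetical route:

    from gevent import monkey; monkey.patch_all()   # must run before other imports
    import bottle

    app = bottle.Bottle()

    @app.route('/ping')
    def ping():
        return {'ok': True}    # Bottle serializes dicts to JSON

    bottle.run(app, server='gevent', host='0.0.0.0', port=8080)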
2 | 3 | 0 | 15 | 11 | 0 | 1.2 | 0 | I am developing a Python based application (HTTP -- REST or jsonrpc interface) that will be used in a production automated testing environment. This will connect to a Java client that runs all the test scripts. I.e., no need for human access (except for testing the app itself).
We hope to deploy this on Raspberry Pi's, so I want it to be relatively fast and have a small footprint. It probably won't get an enormous number of requests (at max load, maybe a few per second), but it should be able to run and remain stable over a long time period.
I've settled on Bottle as a framework due to its simplicity (one file). This was a tossup vs Flask. Anybody who thinks Flask might be better, let me know why.
I have been a bit unsure about the stability of Bottle's built-in HTTP server, so I'm evaluating these three options:
Use Bottle only -- As http server + App
Use Bottle on top of uwsgi -- Use uwsgi as the HTTP server
Use Bottle with nginx/uwsgi
Questions:
If I am not doing anything but Python/uwsgi, is there any reason to add nginx to the mix?
Would the uwsgi/bottle (or Flask) combination be considered production-ready?
Is it likely that I will gain anything by using a separate HTTP server from Bottle's built-in one? | 0 | python,nginx,uwsgi,bottle | 2013-08-01T22:53:00.000 | 1 | 18,006,014 | Flask vs Bottle comes down to a couple of things for me.
How simple is the app? If it is very simple, then Bottle is my choice. If not, then I go with Flask. The fact that Bottle is a single file makes it incredibly simple to deploy by just including the file in our source. But the fact that Bottle is a single file should be a pretty good indication that it does not implement the full WSGI spec and all of its edge cases.
What does the app do? If it is going to have to render anything other than Python->JSON, then I go with Flask for its built-in support of Jinja2. If I need to do authentication and/or authorization, then Flask has some pretty good extensions already for handling those requirements. If I need to do caching, again, Flask-Cache exists and does a pretty good job with minimal setup. I am not entirely sure what is available for Bottle extension-wise, so that may still be worth a look.
The problem with using Bottle's built-in server is that it will be single-process / single-thread, which means you can only handle one request at a time.
To deal with that limitation you can do any of the following in no particular order.
Eventlet's wsgi wrapping the bottle.app (single threaded, non-blocking I/O, single process)
uwsgi or gunicorn (the latter being simpler), which is most often set up as single-threaded, multi-process (workers)
nginx in front of uwsgi.
3 is most important if you have static assets you want to serve up as you can serve those with nginx directly.
2 is really easy to get going (esp. gunicorn) - though I use uwsgi most of the time because it has more configurability to handle some things that I want.
1 is really simple and performs well... plus there is no external configuration or command line flags to remember. | 0 | 7,565 | true | 1 | 1 | Python bottle vs uwsgi/bottle vs nginx/uwsgi/bottle | 18,006,120 |
1 | 1 | 0 | 3 | 3 | 0 | 1.2 | 0 | How can I get python interpreter path in uwsgi process (if I started it with -h parameter)? I tryed to use VIRTUAL_ENV and UWSGI_PYHOME environment variables, but they are empty, I do not know why. Also i tryed to use sys.executable, but it points to uwsgi process path. | 0 | python,path,environment-variables,interpreter,uwsgi | 2013-08-02T10:00:00.000 | 1 | 18,014,122 | uWSGI is not a python application (it only calls libpython functions) so the effective executable is the uwsgi binary. If you use virtualenvs you can assume the binary is in venv/bin/python | 0 | 1,200 | true | 0 | 1 | How to get python interpreter path in uwsgi process | 18,021,303 |