| Column | Dtype | Min | Max |
|---|---|---|---|
| Available Count | int64 | 1 | 31 |
| AnswerCount | int64 | 1 | 35 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Users Score | int64 | -17 | 588 |
| Q_Score | int64 | 0 | 6.79k |
| Python Basics and Environment | int64 | 0 | 1 |
| Score | float64 | -1 | 1.2 |
| Networking and APIs | int64 | 0 | 1 |
| Question | stringlengths | 15 | 7.24k |
| Database and SQL | int64 | 0 | 1 |
| Tags | stringlengths | 6 | 76 |
| CreationDate | stringlengths | 23 | 23 |
| System Administration and DevOps | int64 | 0 | 1 |
| Q_Id | int64 | 469 | 38.2M |
| Answer | stringlengths | 15 | 7k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| ViewCount | int64 | 13 | 1.88M |
| is_accepted | bool | 2 classes | - |
| Web Development | int64 | 0 | 1 |
| Other | int64 | 1 | 1 |
| Title | stringlengths | 15 | 142 |
| A_Id | int64 | 518 | 72.2M |
4 | 7 | 0 | 0 | 17 | 1 | 0 | 0 | We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) that communicate with one another across a network.
All the components in the system operate with the same business concepts and also communicate with one another in terms of these concepts.
As a result, we struggle heavily with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on frameworks like ProtoBufs or Thrift. These frameworks have an interface definition language in which the business concepts are defined, and the representation of these concepts in C++, C# and Python (together with the serialization logic) is then auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code by our classes / interfaces, but this returns us back to the drawbacks of the naive solution.
Can anyone recommend a solution that gets around the described problems? | 0 | c#,c++,python,serialization,cross-language | 2012-08-03T20:01:00.000 | 0 | 11,802,505 | You could model these data structures with a UML modeling tool (Enterprise Architect comes to mind, since it can generate code for all three languages) and generate the code for each language directly from the model.
Though I would look closely at a previous comment about using XSD. | 0 | 1,432 | false | 0 | 1 | How to share business concepts across different programming languages? | 11,804,524 |
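If the ProtoBufs/Thrift route is taken, the abstraction leak the asker describes can be contained with thin hand-written wrappers on each side. A minimal Python sketch of that idea follows; `business_pb2` and its `Order` message are hypothetical stand-ins for whatever `protoc` would generate from your .proto file.

```python
# Hypothetical sketch: hide a protobuf-generated class behind a plain
# domain object so callers never touch the serialization machinery.
import business_pb2  # assumed to be generated by protoc from business.proto


class Order(object):
    """Plain domain object; serialization lives in just two methods."""

    def __init__(self, order_id, amount):
        self.order_id = order_id
        self.amount = amount

    def to_bytes(self):
        msg = business_pb2.Order()
        msg.order_id = self.order_id
        msg.amount = self.amount
        return msg.SerializeToString()

    @classmethod
    def from_bytes(cls, data):
        msg = business_pb2.Order()
        msg.ParseFromString(data)
        return cls(msg.order_id, msg.amount)
```

The trade-off is exactly the one the asker notes: the wrappers must be written (and kept in sync) once per language, but they stay mechanical and small.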
1 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | I am working on a project in Python, using Git for version control, and I've decided it's time to add a couple of unit tests. However, I'm not sure about the best way to go about this.
I have two main questions: which framework should I use and how should I arrange my tests? For the first, I'm planning to use unittest since it is built into Python, but if there is a compelling reason to prefer something else I'm open to suggestions. The second is a tougher question, because my code is already somewhat disorganized, with lots of submodules and relative imports. I'm not sure where to fit the testing code. Also, I'd prefer to keep the testing code separate from everything else if possible. Lastly, I want the tests to be easy to run, preferably with a single command-line command and minimal path setup.
How do large Python projects handle testing? I understand that there is typically an automated system to run tests on all new check-ins. How do they do it? What are the best practices for setting up a testing system? | 0 | python,unit-testing | 2012-08-06T05:05:00.000 | 0 | 11,822,790 | The Python unittest module is fine, but it may be difficult to add unit testing to a large project. The reason is that unit testing means testing the functionality of the smallest building blocks.
Unit testing means using a lot of small tests that are separated from each other. They should be independent of everything except the tested part of the code.
When unit tests are added to existing code, they are usually added only to test the isolated cases that were proven to cause errors. The added unit test should be written against the uncorrected functionality so that it exposes the error. Then the error should be fixed so that the unit test passes. This is the first extreme -- adding unit tests only to the code that fails. This is a must: you should always add a unit test for code that fails, and you should do it before you fix the error.
Now, the question is how to add unit tests to a large project that did not use them. The quantity of unit-test code may be comparable to the size of the project itself, so the other extreme would be to add unit tests to everything. However, that is too much work, and you usually have to reverse-engineer your own code to find the building blocks to be tested.
I suggest finding the most important parts of the code and adding unit tests to them. | 0 | 489 | false | 0 | 1 | How to arrange and set up unit testing in Python | 11,823,239
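To make the "add a failing test before fixing the error" step concrete, here is a minimal unittest sketch; `shop.parsing.parse_price` is a hypothetical function reported to break on inputs with a currency symbol.

```python
import unittest

# Hypothetical module under test -- the name is illustrative only.
from shop.parsing import parse_price


class ParsePriceRegressionTest(unittest.TestCase):
    def test_price_with_currency_symbol(self):
        # Written *before* the fix: this fails until parse_price
        # learns to strip the leading currency symbol.
        self.assertEqual(parse_price("$19.99"), 19.99)


if __name__ == "__main__":
    unittest.main()
```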
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I am working on a web application which allows users to create their own webapps in turn. For each new webapp created by my application, I assign a new subdomain.
e.g. subdomain1.xyzdomain.com, subdomain2.xyzdomain.com etc.
All these Webapps are stored in Database and are served by a python script (say
default_script.py) kept in /var/www/.
Until now, I have blocked search engine indexing for the directory (/var/www/) using robots.txt, which essentially blocks indexing of all my scripts, including default_script.py, as well as the content served for multiple webapps by that default_script.py script.
But now I want that some of those subdomains should be indexed.
After searching for a while, I was able to figure out a way to block indexing of my scripts by explicitly specifying them in robots.txt.
But I am still doubtful about the following:
Will blocking my default_script.py from indexing also block indexing of all the content that is served from default_script.py? And if I instead let it be indexed, will default_script.py itself start showing up in search results?
How can I selectively allow indexing of some of the subdomains?
Ex: Index subdomain1.xyzdomain.com but NOT subdomain2.xyzdomain.com | 0 | python,seo,indexing,robots.txt,googlebot | 2012-08-06T13:34:00.000 | 1 | 11,829,360 | No. The search engine should not care what script generates the pages. Just so long as the pages generated by the webapps are indexed you should be fine.
Second question:
You should create a separate robots.txt per subdomain. That is, when robots.txt is fetched from a particular subdomain, return a robots.txt file that pertains to that subdomain only. So if you want the subdomain indexed, have that robots file allow everything; if you don't want it indexed, have the robots file deny everything. | 0 | 134 | true | 0 | 1 | Selectively indexing subdomains | 11,829,421
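Since all subdomains are served by a single script, the per-subdomain robots.txt can come from that same script. A minimal WSGI sketch of the idea, with the question's placeholder subdomain names:

```python
# Sketch: serve a different robots.txt depending on the requesting subdomain.
INDEXABLE = {"subdomain1.xyzdomain.com"}  # subdomains allowed to be indexed

ALLOW_ALL = b"User-agent: *\nDisallow:\n"
DENY_ALL = b"User-agent: *\nDisallow: /\n"


def app(environ, start_response):
    host = environ.get("HTTP_HOST", "").split(":")[0]
    if environ.get("PATH_INFO") == "/robots.txt":
        body = ALLOW_ALL if host in INDEXABLE else DENY_ALL
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not found"]
```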
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | I'm running a python file every minute using a cron job. It queries a site and gathers information, but it has to load through 4-5 pages before it gets to the data I need.
The execution time is around 5-10s per query.
I'm wondering if there's a difference in server load if the file is run concurrently multiple times versus having 3 different files assigned to load different sections.
Example:
test1.py loads information between A-H
test2.py loads information between I-Q
test3.py loads information between R-Z
If someone requests information about a "B", "M", and "S" topic, each file would run and return its results, versus one file test.py running a loop to return all three results.
P.S. I'm asking because I'm expecting in the future that people will request information about 2-6 topics, and that's just one person. So I don't want one file running for 60 seconds straight. I'm wondering if it'll alleviate load to spread it across multiple files.
P.P.S. Also I'm wondering the implications of using python vs php. | 0 | php,python,load,cron | 2012-08-08T07:51:00.000 | 0 | 11,860,036 | Using multiple files to fetch smaller parts probably won't make a difference in server load (well, in fact it'd make the load x times bigger for an x times shorter period of time, but the overall result is the same), but it should fetch the data faster (thanks to multithreading and parallelizing the requests), therefore reducing your response times. | 0 | 57 | true | 0 | 1 | Server load comparison | 11,860,120
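As a sketch of that parallel-request idea, one script can fetch several topics concurrently with the standard library alone; the URL scheme below is a placeholder (Python 3 shown).

```python
# Fetch all requested topics in parallel from a single script instead of
# splitting the work across three files; example.com is a placeholder.
from concurrent.futures import ThreadPoolExecutor
import urllib.request


def fetch_topic(topic):
    with urllib.request.urlopen("http://example.com/info/%s" % topic) as resp:
        return topic, resp.read()


topics = ["B", "M", "S"]
with ThreadPoolExecutor(max_workers=len(topics)) as pool:
    for topic, data in pool.map(fetch_topic, topics):
        print(topic, len(data))
```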
1 | 4 | 0 | 0 | 1 | 1 | 0 | 0 | I am not a native English speaker. When I code with Python, I often make spelling mistakes and get 'NameError' exceptions. Unit tests can solve some problems, but not all, because one can hardly construct test cases which cover all the logic. So I think a tool that detects such errors would help me a lot, but I searched Google and cannot find one. | 0 | python,nameerror | 2012-08-09T07:21:00.000 | 0 | 11,878,554 | You could get an IDE which helps a bit with autocompletion of names, though not in all situations. PyDev is one such IDE with autocompletion; PyCharm is another (not free).
Using autocomplete is probably your best bet to solve your problem in the long term. Even if you find a tool which attempts to correct such spelling errors, that will not solve the initial problem and will probably just cause new ones. | 0 | 130 | false | 0 | 1 | Does there a tool exists which can help programmer avoid Python NameError? | 11,878,657 |
2 | 5 | 0 | 0 | 13 | 0 | 0 | 0 | I'm guessing my question is pretty basic, but after 15-20 minutes on Google and YouTube, I am still a little fuzzy. I am relatively new to both Linux and Python, so I am having some difficulty comprehending the file system tree (coming from Windows).
From what I've found digging around the directories in Ubuntu (which is version 12.04, I believe, which I am running in VBox), I have ID'd the following two directories related to Python:
/usr/local/lib/python2.7 which contains these two subdirectories:
dist-packages
site-packages
both of which do not show anything when I type "ls" to get a list of the files therein, but show ". .." when I type "ls -a".
/usr/lib/python2.7 which has no site-packages directory but does have a dist-packages directory that contains many files and subdirectories.
So if I want to install a 3rd party Python module, like, say, Mechanize, in which one of the above directories (and which subdirectory), am I supposed to install it in?
Furthermore, I am unclear on the steps to take even after I know where to install it; so far, I have the following planned:
Download the tar.gz (or whatever kind of file the module comes in) from whatever site or server has it
Direct the file to be unzipped in the appropriate subdirectory (one of the 2 listed above)
Test to make sure it works via import mechanize in interactive mode.
Lastly, if I want to replace step number 1 above with a terminal command (something like sudo apt-get), what command would that be, i.e., what command via the terminal would equate to clicking on a download link from a browser to download the desired file? | 0 | python,module | 2012-08-09T23:11:00.000 | 1 | 11,893,311 | To install any Python package on Ubuntu, first run
sudo apt-get update
Then type "sudo apt-get install python-" and press Tab twice.
If bash asks whether to display all the possibilities, press y; it will then list all the packages available for Python. Then type
sudo apt-get install python-package
It will install the package from the internet. | 0 | 63,583 | false | 0 | 1 | Installing 3rd party Python modules on an Ubuntu Linux machine? | 31,068,954 |
2 | 5 | 0 | 11 | 13 | 0 | 1 | 0 | I'm guessing my question is pretty basic, but after 15-20 minutes on Google and YouTube, I am still a little fuzzy. I am relatively new to both Linux and Python, so I am having some difficulty comprehending the file system tree (coming from Windows).
From what I've found digging around the directories in Ubuntu (which is version 12.04, I believe, which I am running in VBox), I have ID'd the following two directories related to Python:
/usr/local/lib/python2.7 which contains these two subdirectories:
dist-packages
site-packages
both of which do not show anything when I type "ls" to get a list of the files therein, but show ". .." when I type "ls -a".
/usr/lib/python2.7 which has no site-packages directory but does have a dist-packages directory that contains many files and subdirectories.
So if I want to install a 3rd party Python module, like, say, Mechanize, in which one of the above directories (and which subdirectory), am I supposed to install it in?
Furthermore, I am unclear on the steps to take even after I know where to install it; so far, I have the following planned:
Download the tar.gz (or whatever kind of file the module comes in) from whatever site or server has it
Direct the file to be unzipped in the appropriate subdirectory (one of the 2 listed above)
Test to make sure it works via import mechanize in interactive mode.
Lastly, if I want to replace step number 1 above with a terminal command (something like sudo apt-get), what command would that be, i.e., what command via the terminal would equate to clicking on a download link from a browser to download the desired file? | 0 | python,module | 2012-08-09T23:11:00.000 | 1 | 11,893,311 | You aren't supposed to manually install anything.
There are three ways to install Python libraries:
Use apt-get, aptitude or similar utilities.
Use easy_install or pip (install pip first; it's not available by default)
If you download some .tar.gz file, unzip it and then type sudo python setup.py install
Manually messing with paths and moving files around is the first step to headaches later. Do not do it.
For completeness I should mention the portable, isolated way; that is to create your own virtual environment for Python.
Run sudo apt-get install python-virtualenv
virtualenv myenv (this creates a new virtual environment; you can freely install packages in here without polluting your system-wide Python libraries)
source myenv/bin/activate (this activates your environment, making sure your shell is pointing to the right place for Python; it will add (myenv) to your prompt)
pip install _____ (replace __ with whatever you want to install)
Once you are done type deactivate to reset your shell and environment to the default system Python. | 0 | 63,583 | false | 0 | 1 | Installing 3rd party Python modules on an Ubuntu Linux machine? | 11,893,356 |
1 | 1 | 0 | 3 | 3 | 0 | 1.2 | 0 | So as I near the production phase of my web project, I've been wondering how exactly to deploy a Pyramid app. In the docs, it says to use ../bin/python setup.py develop to put the app in development mode. Is there another mode that is designed for production? Or do I just use ../bin/python setup.py install? | 0 | python,pyramid,production | 2012-08-10T01:18:00.000 | 0 | 11,894,210 | Well, the big difference between python setup.py develop and python setup.py install is that install will install the package in your site-packages directory, while develop will install an egg-link that points to the development directory.
So yeah, you can technically use both methods. But depending on how you built your project, installing in site-packages might be a bad idea.
Why? Consider file uploads or anything else your app might generate dynamically. If your app doesn't use config files to determine where to save such files, installing and running it may end up trying to write files into your site-packages directory.
In other words, you have to make sure that all files and directories that may be generated, etc., can be located using config files.
If all dynamic directories are specified in the configs, then installing is fine.
All you'll have to do is create a folder with a production.ini file and run pserve production.ini.
Code can be saved anywhere on your computer that way, and you can also use uWSGI or any other WSGI server you like.
I think installing the code isn't a bad thing, and having data apart from the application is a good thing.
It has some advantages for deployment, I guess. | 0 | 2,476 | true | 0 | 1 | Preparing a pyramid app for production | 11,898,284
1 | 3 | 0 | 1 | 1 | 1 | 0.066568 | 0 | I'm creating an app in several different Python web frameworks to see which has the better balance of programming comfort and performance. Is there a way of reporting the memory usage of a particular app that is being run in a virtualenv?
If not, how can I find the average, maximum and minimum memory usage of my web framework apps? | 0 | python,memory,virtualenv,web-frameworks | 2012-08-10T01:37:00.000 | 0 | 11,894,333 | It depends on how you're going to run the application in your environment. There are many different ways to run Python web apps. Recently popular methods seem to be Gunicorn and uWSGI. So you'd be best off running the application as you would in your environment, and you could simply use a process monitor to see how much memory and CPU is being used by the process running your application. | 0 | 1,578 | false | 1 | 1 | Testing memory usage of python frameworks in Virtualenv | 12,218,779
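One concrete way to do the process monitoring mentioned above is the third-party psutil package (pip install psutil, not part of the standard library); the PID and sampling schedule below are placeholders.

```python
# Sample the resident memory (RSS) of a running app's process.
import time
import psutil

proc = psutil.Process(1234)  # replace 1234 with your server's PID
samples = []
for _ in range(10):
    samples.append(proc.memory_info().rss)
    time.sleep(1)  # one sample per second

print("min: %d  max: %d  avg: %d bytes"
      % (min(samples), max(samples), sum(samples) // len(samples)))
```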
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 0 | I am working on a task to back up (copy) about 100 Gb of data (including a thousand files and subfolders in a directory) to another server. Normally, for the smaller scale, I can use scp or rsync instead. However, as the other server is not on the same LAN, it could easily take hours, even days, to complete the task. I can't just leave my computer there with the terminal running. I don't think that's the best choice, and again, I have another good reason to use Python :)
Is there any library, or best practice, for me to start with? As it's just an in-house project, we don't need any fancy features, just some fundamental things such as logging, error tolerance, etc. | 0 | python,networking,file-transfer | 2012-08-10T04:00:00.000 | 1 | 11,895,298 | I think your best bet is to use scp or rsync from within screen. That way you can detach the screen session and log out, and the transfer will keep going.
man screen | 0 | 723 | true | 0 | 1 | How can we transfer large amounts of data over a network, using Python? | 11,895,345 |
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 0 | I have an array of pixels which I wish to save to an image file. Python appears to have a few libraries which can do this for me, so I'm going to use one of them, passing in my pixel array and using functions I didn't write to write the image headers and data to disk.
How do I do unit testing for this situation?
I can:
Test that the pixel array I'm passing to the external library is what I expect it to be.
Test that the external library functions I call give me the expected return values.
Manually verify that the image looks like I'm expecting (by opening the image and eyeballing it).
I can't:
Test that the image file is correct. To do that I'd have to either generate an image to compare to (but how do I generate that 'trustworthy' image?), or write a unit-testable image-writing module (so I wouldn't need to bother with the external library at all).
Is this enough to provide coverage for my code? Is testing the interface between my code and the external library sufficient, leaving me to trust that the output of the external library (the image file) is correct through manual eyeballing?
How do you write unit tests to ensure that the external libraries you use do what you expect them to? | 0 | python,unit-testing,tdd | 2012-08-10T14:02:00.000 | 0 | 11,903,310 | I'm a bit rusty on Python.
But this is how I would approach it.
Generate the image once via a manual test and verify it by eye, then compute a checksum (MD5, perhaps) of that trusted file. The automated tests then compute the MD5 of the newly generated file and compare it with the trusted checksum.
Hope this helps. | 0 | 641 | true | 0 | 1 | Unittest binary file output | 11,903,386 |
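A sketch of that checksum approach with the standard hashlib module; `write_image` is a hypothetical stand-in for the code under test, and the digest constant is whatever `md5_of` returned for the manually eyeballed file.

```python
import hashlib
import unittest

# Digest of the image that was generated once and verified by eye;
# the value shown here is a placeholder.
TRUSTED_MD5 = "d41d8cd98f00b204e9800998ecf8427e"


def md5_of(path):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


class ImageOutputTest(unittest.TestCase):
    def test_image_matches_trusted_output(self):
        write_image("out.png")  # hypothetical: your pixel-array-to-file code
        self.assertEqual(md5_of("out.png"), TRUSTED_MD5)
```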
1 | 3 | 1 | 3 | 3 | 0 | 0.197375 | 0 | I started a project where you can "log in" on a terminal (basically a Raspberry Pi with a touchscreen attached) with a wireless token (for time tracking).
What would be the best and fastest way to display the status (basically a background picture and 2-3 text items changing depending on the status of the token) on the screen (fullscreen)? I tried a web-based approach with Chromium, which is -very- slow...
It has to make HTTP requests and encode/decode JSON easily - and please, no C/C++.
Maybe python + wxwidgets? | 0 | python,user-interface,raspberry-pi | 2012-08-10T18:09:00.000 | 0 | 11,907,027 | You could use Python for this easily with just the standard library (python 2.7.3).
For the GUI you can use Tkinter or Pygame (not standard library), which both support images and text placement (and full screen). Note that Tkinter is not thread-safe, so that may be a problem if you're planning on threading this program.
For the HTTP requests you can use httplib.
For the JSON-related things you can use the json library. | 0 | 3,021 | false | 0 | 1 | fast gui on raspberry | 11,907,139
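A minimal sketch combining those pieces into a fullscreen status display; the URL and JSON fields are invented, and it is written for Python 3 (substitute Tkinter/urllib2/httplib on the Python 2.7.3 mentioned above).

```python
import json
import tkinter as tk
import urllib.request

URL = "http://example.com/status"  # placeholder status endpoint

root = tk.Tk()
root.attributes("-fullscreen", True)
label = tk.Label(root, text="...", font=("Helvetica", 48))
label.pack(expand=True)


def refresh():
    try:
        status = json.load(urllib.request.urlopen(URL))
        label.config(text=status.get("message", "?"))
    except (OSError, ValueError):
        label.config(text="offline")
    root.after(2000, refresh)  # poll again in two seconds


refresh()
root.mainloop()
```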
2 | 4 | 0 | 5 | 4 | 1 | 0.244919 | 0 | It seems to me that languages that are quite simple to use (e.g. Python) often have slower execution times than languages that are deemed more complex to learn (e.g. C++ or Java). Why? I understand that part of the problem arises from the fact that Python is interpreted rather than compiled, but what prevents Python (or another high-level language) from being compiled efficiently? Is there any programming language that you feel does not have this trade-off? | 0 | c++,python,compilation,interpreter,execution | 2012-08-10T20:52:00.000 | 0 | 11,909,078 | The problem with efficiency in high-level languages (or, at least, the dynamic ones) stems from the fact that it's usually not known what operations need to be performed until the actual types of objects are resolved at runtime. As a consequence, these languages don't compile to straightforward machine code and have to do all the heavy lifting behind the covers.
2 | 4 | 0 | 5 | 4 | 1 | 1.2 | 0 | It seems to me that languages that are quite simple to use (e.g. Python) often have slower execution times than languages that are deemed more complex to learn (e.g. C++ or Java). Why? I understand that part of the problem arises from the fact that Python is interpreted rather than compiled, but what prevents Python (or another high-level language) from being compiled efficiently? Is there any programming language that you feel does not have this trade-off? | 0 | c++,python,compilation,interpreter,execution | 2012-08-10T20:52:00.000 | 0 | 11,909,078 | Let's compare C and Python. By most accounts C is more "complex" to program in than, say, Python. This is because Python automates a lot of work which C doesn't. For example, garbage collection is automated in Python, but is the programmer's responsibility in C.
The price of this automation is that these "high-level features" need to be generic enough to "fit" the needs of every program. For example, the Python garbage collector has a predefined schedule/garbage collection algorithm, which may not be optimal for every application. On the other hand, C gives the programmer complete flexibility to define the GC schedule and algorithm as she wants it to be.
So there you have it, ease versus performance. | 0 | 375 | true | 0 | 1 | Why does there seem to be tension between the simplicity of a language and execution time? | 11,909,183 |
1 | 1 | 1 | 1 | 3 | 0 | 0.197375 | 0 | I am writing an application in Qt that I want to extend with plugins.
My application also has a library that the plugins will use. So, I need a 2 way communication. Basically, the plugins can call the library, and my application which loads the plugins will call them.
Right now, I have my library written in C++, so it has some classes. The plugins can include the header files, link to it and use it. I also have a header file with my interface, which is abstract base class that the plugins must have implemented. They should also export a function that will return a pointer to that class, and uses C linkage.
Up to this point I believe that everything is clear, a standard plugin interface. However, there are 3 main problems, or subtasks:
How to use the library from other languages?
I tried this with Python only. I used SIP to generate a Python component that I successfully imported in a test.py file, and called functions from a class in the library. I haven't tried with any other language.
How to generate the appropriate declaration, or stub, for my abstract class in other languages? Since the plugins must implement this class, I should be able to somehow generate an equivalent to a header in the other languages, like .py files for Python, .class files for Java, etc.
I didn't try this yet, but I suppose there are generators for other languages.
How am I going to make instances of the objects in the plugins? If I got to this point the class would be implemented in the plugins. Now I will need to call the function that returns the instance of the implemented abstract class, and get a pointer to it.
Based on my research, in order to make this work I will have to get a handle to the Python interpreter, JVM, etc., and then communicate with the plugin from there.
It doesn't look too complex, but when I started my research, even the simplest case took a good amount of work, and I successfully got only to the first point, and only in Python. That made me wonder whether I am taking the right approach. What are your thoughts on this? Maybe I should not have used Qt in my library and the abstract base class, but only pure C++; that could probably make things a bit easier. Or maybe I should have used only C in my library and made the plugins return a C struct instead of a class. That, I believe, would make things much easier, since calling the library would be trivial. And I believe implementing a C struct would be much easier than implementing a C++ class, and even easier than implementing a C++ class that uses Qt objects.
Please point me in the right direction, and share your expertise on this. Also, if you know of any book on the subject, I'd be more than happy to purchase it. Or some links that deal with this would do. | 0 | java,c++,python,qt,plugins | 2012-08-11T12:15:00.000 | 0 | 11,914,614 | C++ mangles its symbols, and has special magic to define classes, which is sort of hacked on top of standard (C) object files. You don't want your files from other languages to understand that magic. So I would certainly follow your own suggestion, to do everything in pure C.
However, that doesn't mean you can't use C++. Only the interface has to be C, not the implementation. Or more strictly speaking, the object file that is produced must not use special features that other languages don't use.
While it is possible for a plugin to link to your program and thus use functions from it, I personally find it more readable (and thus maintainable) to call a plugin function after loading it, passing an array of function pointers which can be used by the plugin.
Every language has support for opening shared object (SO or DLL) files. Use that.
Your interface will consist of functions which have several arguments and return types, which probably have special needs in how they are passed in or retrieved. There probably are automated systems for this, but personally I would just write the interface file by hand. The most important is that you properly document the interface, so people can use any language they want, as long as they know how to load object files from their language.
Different languages have very different ways of storing objects. I would recommend making the creator of the data also the owner of the memory. So if your program has a class with a constructor (which is wrapped in C functions for the plugin interface), the class is the one creating the data, and your program, not the plugin, should own it. This means that the plugin will need to notify your program when it's done with it, and at that point your program can destroy it (unless it is still needed, of course). In languages which support it, such as Python and C++, this can be done automatically when their interface object is destroyed. (I'm assuming here that the plugin will create an object for the purpose of communicating with the actual object; this object behaves like the real object, but in the target language instead of C.)
Keep any libraries (such as Qt) out of the interface. You can allow functions like "Put resource #x at this position on the screen", but not "Put this Qt object at this position on the screen". The reason is that when you require the plugin to pass Qt objects around, they will need to understand Qt, which makes it a lot harder to write a plugin.
If plugins are completely trusted, you can allow them to pass (opaque) pointers to those objects, but for the interface that isn't any different from using other number types. Just don't require them to do things with the objects, other than calling functions in your program. | 0 | 308 | false | 0 | 1 | How to write Qt plugin system with bindings in other languages? | 13,563,225 |
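From the Python side, consuming such a pure-C interface needs no generated bindings at all: the standard ctypes module can load the shared object directly. The library name and function signatures below are hypothetical.

```python
# Sketch of calling a pure-C plugin/library interface from Python.
import ctypes

lib = ctypes.CDLL("./libplugin.so")          # hypothetical shared object
lib.plugin_create.restype = ctypes.c_void_p  # returns an opaque handle
lib.plugin_run.argtypes = [ctypes.c_void_p]
lib.plugin_destroy.argtypes = [ctypes.c_void_p]

handle = lib.plugin_create()
try:
    lib.plugin_run(handle)
finally:
    lib.plugin_destroy(handle)  # the creator owns the memory, as argued above
```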
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 0 | I am successfully running a cron job every hour on Google Appengine. However I would like it to start when I launch the app. Now it does the first cron job 1 hour after the start.
I am using Python. | 0 | python,google-app-engine,cron | 2012-08-11T21:37:00.000 | 1 | 11,917,869 | There is no "launch" the app in production as such. You deploy the app for the first time and crontab is now present and crontab scheduling is started. So I assume you really mean you would like to run the cron job every time you deploy a new version of your application in addition to the cron schedule.
The cron handler is callable by you, so why not just wrap appcfg in a script that calls the cron handler after you do the deploy, using wget, curl, etc. | 0 | 211 | false | 1 | 1 | Cron job on Appengine - first time on start? | 11,918,536
I was working on a project using Windows in Aptana. I changed my OS and installed Ubuntu on unpartitioned space. I downloaded Aptana for Ubuntu and ran it, specifying the same workspace I was using on Windows, since the partition with that project is still there.
The problem I am having is that I am unable to use Aptana's code intelligence. Should I change some paths, etc., or is there a way to remove the Aptana metadata from the workspace and recreate the project so that it picks up the new environment? I looked for that metadata but could not find it in the workspace or project directory.
Please tell me what should be done in this situation. Thanks in advance. | aptana,ubuntu-10.04,pythonpath | 2012-08-12T03:11:00.000 | 1 | 11,919,437 | I just changed the workspace and it works fine now. After doing so, it asked for the interpreter paths; I provided them and everything has worked since. | 0 | 124 | true | 0 | 1 | Aptana getting path of Windows python interpreter instead of linux | 12,043,683
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | First of all, this question has no malicious purpose. I asked the same question yesterday on Stack Overflow but it was removed. I would like to know whether I have to log into an account when sending emails with attachments using Python's smtplib module. The reason I don't want to log in to an account is that there is no account I can use in my company. I could ask my company's IT department to set one up, but until then I want to write the program code and test it. Please don't remove this question.
Best Regards | 0 | python,email,anonymous,smtplib | 2012-08-12T07:00:00.000 | 0 | 11,920,330 | You don't have to have an account (ie. authenticate to your SMTP server) if your company's server is configured to accept mail from certain trusted networks.
Typically SMTP servers consider the internal network as trusted and may accept mail from it
without authentication. | 0 | 95 | true | 0 | 1 | Do I have to log into an email account when sending emails using python smtplib? | 11,920,368 |
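A minimal smtplib sketch of that unauthenticated case, with placeholder host and addresses; note there is simply no login() call before sending.

```python
import smtplib
from email.mime.text import MIMEText

msg = MIMEText("test body")
msg["Subject"] = "test"
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"

server = smtplib.SMTP("mail.example.com")  # trusted internal relay, no login
server.sendmail("me@example.com", ["you@example.com"], msg.as_string())
server.quit()
```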
1 | 3 | 0 | 1 | 4 | 0 | 0.066568 | 1 | I'm looking for a way to take gads of inbound SMTP messages and drop them onto an AMQP broker for further routing and processing. The messages won't actually end up in a mailbox, but instead SMTP is used as a message gateway.
I've written a Postfix After-Queue Content Filter in Python that drops the inbound SMTP message onto a RabbitMQ broker. That works well - I get the raw message over a queue and it gets picked up nicely by a consumer. The issue is that the AMQP connection is created and torn down with each message... the Content Filter script gets re-executed from scratch each time. I imagine that will end up being a performance issue.
If I could leverage something re-entrant I could reuse the connection. Or maybe I'm just approaching the whole thing incorrectly... | 0 | python,smtp,rabbitmq,postfix-mta,amqp | 2012-08-13T02:04:00.000 | 0 | 11,927,409 | Making an AMQP connection over plain TCP is pretty quick. Perhaps if you're using SSL then it's another story, but are you sure that enqueueing the raw message onto the AMQP exchange is going to be the bottleneck? My guess would be that actually delivering the message via SMTP is going to be much slower, so how fast you can queue things up isn't going to affect the throughput of the system.
If this piece does turn out to be a bottleneck, I rather like creating little web servers using Sinatra or Rack, but it sounds like you might prefer a Python-based solution. Have the Postfix content filter perform an HTTP POST using curl to a web server, which maintains a persistent connection to the AMQP server.
Of course now you have an extra moving part for which you need to think about monitoring, error handling and security. | 0 | 1,843 | false | 0 | 1 | Sending raw SMTP messages to an AMQP broker | 11,927,486 |
1 | 3 | 0 | 2 | 0 | 1 | 0.132549 | 0 | If someone has just studied the basics of Python, what should he do after that? Are there specific books he must read? Or what, exactly?
In other words, what is the pathway to mastering Python?
Thanks | 0 | python | 2012-08-15T15:46:00.000 | 0 | 11,972,592 | Get a project you are interested in and start hacking (i.e. extend it, fix small bugs you encounter). There are a lot of open-source projects out there you can check out.
You need experience, and experience comes from failing, and failing is a result of trying. That's the way to go.
If you get stuck somewhere, always check back on SO or Google - that will help you fix 99.9% of your issues. | 0 | 6,353 | false | 0 | 1 | What do I do after studying the basics of Python? | 11,972,623
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I have a series of tests in Django that are categorised into various "types", such as "unit", "functional", "slow", "performance", ...
Currently I'm annotating them with a decorator that is used to only run tests of a certain type (similar to @skipIf(...)), but this doesn't seem like an optimal approach.
I'm wondering if there is a better way to do separation of tests into types? I'm open to using different test runners, extending the existing django testing framework, building suites or even using another test framework if that doesn't sacrifice other benefits.
The underlying reason for wanting to do this is to run an efficient build pipeline, and as such my priorities are to:
ensure that my continuous integration runs check the unit tests first,
possibly parallelise some test runs
skip some classes of test altogether | 0 | python,django,unit-testing,testing,django-testing | 2012-08-16T07:37:00.000 | 0 | 11,982,638 | The way my company organises tests is to split them into two broad categories: unit and functional. The unit tests live inside the Django test discovery; manage.py test will run them. The functional tests live outside of that directory. They are run either manually or by the CI (Buildbot in this case), still with the unittest text runner. We also have a subcategory of functional tests called stress tests. These are tests that can't be run in parallel because they do rough things to the servers, like switching off the database and seeing what happens.
The CI can then run each test type as a different step. Tests can be decorated with skipif.
It's not a perfect solution but it is quite clear and easy to understand. | 0 | 545 | false | 1 | 1 | How to separate test types using Django | 11,982,939 |
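One lightweight way to implement the "decorated with skipif" idea without a special runner is a small decorator built on unittest.skipUnless, gated by an environment variable; this is a hypothetical sketch, not a built-in Django feature.

```python
import os
import unittest

# Run with e.g. TEST_TYPES=unit,slow python manage.py test to select types.
def test_type(name):
    selected = os.environ.get("TEST_TYPES", "unit").split(",")
    return unittest.skipUnless(name in selected,
                               "%s tests not selected" % name)


class SlowMapAnalysisTest(unittest.TestCase):
    @test_type("slow")
    def test_big_import(self):
        pass  # the actual slow test body would go here
```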
1 | 1 | 0 | 3 | 2 | 0 | 1.2 | 0 | I run parallel write requests on my ZODB, which contains multiple BTree instances. When the server accesses the same objects inside such a BTree, I get a ConflictError for the IOBucket class. For all my Django-based classes I have _p_resolveConflict set up, but I can't implement it for IOBucket because it's a C-based class.
I did a deeper analysis, but still don't understand why it complains about the IOBucket class and what it writes into it. Additionally, what would be the right strategy to resolve it?
A thousand thanks for any help! | 0 | python,django,zodb | 2012-08-16T15:56:00.000 | 0 | 11,991,114 | IOBucket is part of the persistence structure of a BTree; it exists to try and reduce conflict errors, and it does try and resolve conflicts where possible.
That said, conflicts are not always avoidable, and you should restart your transaction. In Zope, for example, the whole request is re-run up to 5 times if a ConflictError is raised. Conflicts are ZODB's way of handling the (hopefully rare) occasion where two different requests tried to change the exact same data structure.
Restarting your transaction means calling transaction.begin() and applying the same changes again. The .begin() will fetch any changes made by the other process and your commit will be based on the fresh data. | 0 | 565 | true | 1 | 1 | Conflict resolution in ZODB | 11,996,422 |
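A sketch of that retry loop using the transaction package that ships with ZODB; `apply_changes` and `root` stand in for your own write logic and your connection's database root.

```python
# Retry-on-conflict pattern for concurrent ZODB writers.
import transaction
from ZODB.POSException import ConflictError

# `root` would come from your ZODB connection, e.g. db.open().root()
for attempt in range(5):
    try:
        transaction.begin()      # start from fresh data
        apply_changes(root)      # hypothetical: your writes to the BTrees
        transaction.commit()
        break
    except ConflictError:
        transaction.abort()      # another writer won; try again
else:
    raise RuntimeError("gave up after repeated conflicts")
```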
1 | 2 | 0 | 1 | 10 | 0 | 0.099668 | 0 | I need to generate a PDF with dynamic text and I'm using ReportLab. Since the text is dynamic, is there anyway to have it resized to fit within a specific area of the PDF? | 0 | python,pdf,reportlab | 2012-08-17T23:56:00.000 | 0 | 12,014,573 | Yes. Take a look at the ReportLab manual. Based on your (short) description of what you want to do it sounds like you need to look at using Frames within your page layout (assuming you use Platypus, which I would highly recommend). | 0 | 6,481 | false | 0 | 1 | ReportLab: How to auto resize text to fit block | 12,021,221 |
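For reference, a minimal Platypus sketch that pours dynamic text into a fixed region with a Frame; note that a Frame constrains and clips text rather than shrinking it, so fitting oversized text would still need a loop that retries with smaller font sizes. Paths and coordinates below are placeholders.

```python
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.pdfgen.canvas import Canvas
from reportlab.platypus import Frame, Paragraph

canvas = Canvas("out.pdf", pagesize=letter)
frame = Frame(72, 500, 300, 150)  # x, y, width, height in points
style = getSampleStyleSheet()["Normal"]
frame.addFromList([Paragraph("dynamic text goes here ...", style)], canvas)
canvas.save()
```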
1 | 2 | 0 | 0 | 1 | 0 | 0 | 1 | I want to find a link by its text but it's written in non-English characters (Hebrew to be precise, if that matters). The "find_element_by_link_text('link_text')" method would have otherwise suited my needs, but here it fails. Any idea how I can do that? Thanks. | 0 | python,selenium,hyperlink | 2012-08-19T00:55:00.000 | 0 | 12,023,402 | In the future you need to pastebin a representative snippet of your code, and certainly a traceback. I'm going to assume that when you say "the code does not compile" that you mean that you get an exception telling you you haven't declared an encoding.
You need a line at the top of your file that looks like # -*- coding: utf-8 -*- or whatever encoding the literals you've put in your file are in. | 0 | 258 | false | 0 | 1 | Selenium in Python: how to click non-English link? | 12,023,574 |
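Putting those pieces together with the Selenium Python bindings of that era (hence the u'' literal and the old method name); the Hebrew link text is just an example.

```python
# -*- coding: utf-8 -*-
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com")          # placeholder page
link = driver.find_element_by_link_text(u"עברית")  # non-English link text
link.click()
```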
1 | 4 | 0 | 0 | 2 | 0 | 0 | 0 | Occasionally, I have come across programming techniques that involve creating application frameworks or websites in Java, PHP or Python, but when complex algorithms are needed, writing those out in C or C++ and running them as API-like function calls within your Java/PHP/Python code.
I have been googling and searching around the net for this, and unless I don't know the name of the practice, I can't seem to find anything on it.
To put simply, how can I:
Create functions or classes in C or C++
Compile them into a DLL/binary/some form
Run the functions from -
Java
PHP
Python
I suspect JSON/XML-like output and input must be created between the Java/PHP/Python code and the C/C++ functions so the data can be easily bridged, but that is okay.
I'm just not sure how to approach this technique, but it seems like a very smart way to take advantage of the great features of Java, PHP, and Python while at the same time utilizing the very fast programming languages for large, complex tasks.
The other thought going through my head is: if I am creating functions using only literals in Java/PHP/Python, will they run nearly as fast as C anyway?
The specific tasks I'm looking to work with C/C++ on are massive loops, pinging a database, and analyzing maps. No work has started yet; it's all theory now. | 0 | java,php,c++,python,c | 2012-08-19T18:27:00.000 | 0 | 12,028,908 | For Java, you can look into JNI (the Java Native Interface); there are a lot of guides explaining how to use it. | 0 | 419 | false | 1 | 1 | Running algorithms in compiled C/C++ code within a Java/PHP/Python framework? | 12,033,703
1 | 5 | 0 | 4 | 11 | 1 | 0.158649 | 0 | Is there any way to get the total amount of time that "unittest.TextTestRunner().run()" has taken to run a specific unit test.
I'm using a for loop to test modules against certain scenarios (some having to be used and some not, so they run a few times), and I would like to print the total time it has taken to run all the tests.
Any help would be greatly appreciated. | 0 | python,unit-testing,time | 2012-08-20T08:55:00.000 | 0 | 12,034,755 | You could record the start time in the setUp method and then print the elapsed time in a cleanup. | 0 | 8,385 | false | 0 | 1 | Get python unit test duration in seconds | 12,034,788
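A small sketch of that setUp/tearDown timing idea, accumulating a module-level total you can print after all the runner invocations; the names are illustrative.

```python
import time
import unittest

TOTAL_SECONDS = [0.0]  # module-level accumulator across all test cases


class TimedTestCase(unittest.TestCase):
    def setUp(self):
        self._start = time.time()

    def tearDown(self):
        TOTAL_SECONDS[0] += time.time() - self._start


class MyScenarioTest(TimedTestCase):
    def test_something(self):
        self.assertTrue(True)

# after the loop of unittest.TextTestRunner().run(...) calls:
# print("total: %.2f seconds" % TOTAL_SECONDS[0])
```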
1 | 4 | 0 | 0 | 8 | 0 | 0 | 0 | This question might sound weird, but how do I make a job fail?
I have a python script that compiles few files using scons, and which is running as a jenkins job. The script tests if the compiler can build x64 or x86 binaries, I want the job to fail if it fails to do one of these.
For instance: if I'm running my script on a 64-bit system and it fails to compile a 64-bit binary, is there something I can do in the script that might cause the job to fail? | 0 | python,jenkins | 2012-08-20T11:11:00.000 | 1 | 12,036,620 | I came across this as a noob and found that the accepted answer is missing something if you're running Python scripts through a Windows batch shell in Jenkins.
In this case, Jenkins will only fail if the very last command in the shell fails. So your python command may fail but if there is another line after it which changes directory or something then Jenkins will believe the shell was successful.
The solution is to check the error level after the python line:
if %ERRORLEVEL% NEQ 0 (exit)
This will cause the shell to exit immediately if the python line fails, causing Jenkins to be marked as a fail because the last line on the shell failed. | 0 | 15,709 | false | 0 | 1 | Making a job fail in jenkins | 70,767,921 |
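On the Python side of the original question, the same principle applies: the script itself must exit non-zero for Jenkins to mark the build failed. A sketch; the scons arguments are hypothetical and should be adjusted to your build.

```python
import subprocess
import sys

# Hypothetical scons invocations for each target architecture.
ok_64 = subprocess.call(["scons", "arch=x64"]) == 0
ok_32 = subprocess.call(["scons", "arch=x86"]) == 0

if not (ok_64 and ok_32):
    sys.exit(1)  # any non-zero exit code makes Jenkins fail the job
```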
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 1 | I've been using Paramiko today to work with a Python SSH connection, and it is useful.
However, one thing I'd really like to be able to do over the SSH is to utilise some Pythonic sugar. As far as I can tell I can only use the inbuilt Paramiko functions, and if I want to do anything using Python on the remote side I would need to use a script which I have placed there, and call it.
Is there a way I can send Python commands over the SSH connection rather than having to make do only with the limitations of the Paramiko SSH connection? Since I am running the SSH connection through Paramiko within a Python script, it would only seem right that I could, but I can't see a way to do so. | 0 | python,ssh,paramiko | 2012-08-20T19:57:00.000 | 0 | 12,044,262 | Well, that is what SSH was created for - to be a secure shell, with the commands executed on the remote machine (you can think of it as if you were sitting at the remote computer itself; even then you couldn't execute Python commands directly in a plain shell, even though you're physically interacting with the machine).
You can't send Python commands, simply because Python does not have commands; it executes Python scripts.
So everything you can do amounts to the following steps:
Wrap a piece of Python code into a file.
scp it to the remote machine.
Execute it there.
Remove the script (or cache it for further execution).
Basically, shell commands are programs on the remote machine themselves, so you can think of those scripts as shell extensions (e.g., Python programs with command-line parameters). | 0 | 705 | true | 0 | 1 | using python commands within paramiko | 12,044,350
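For short snippets, the wrap/scp/execute cycle can be collapsed into a single exec_command that feeds the code to the remote interpreter via python -c; the host and credentials below are placeholders.

```python
import paramiko

code = "import platform; print(platform.node())"  # the snippet to run remotely

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote.example.com", username="user", password="secret")
stdin, stdout, stderr = client.exec_command('python -c "%s"' % code)
print(stdout.read())
client.close()
```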
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | folks,
I am wondering if such a package exists? Or is there a good reference for implementing it? | 0 | python,optimization | 2012-08-21T03:46:00.000 | 0 | 12,048,181 | scipy, pyANN, and pyevolve are some packages that come to mind that may have some tools to help with this... I'm not entirely sure what multistart optimization is, but I have a rough idea ... | 0 | 461 | false | 0 | 1 | is there a package for multi-start optimization written in python? | 12,048,523
2 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | I really like the PY2EXE module, it really helps me share scripts with other co-workers that are super easy for them to use.
My question is: when the PY2EXE module compiles the code into an executable, does the resulting executable run faster?
Thanks for any replies! | 0 | python,exe,py2exe | 2012-08-21T14:08:00.000 | 0 | 12,056,702 | Partly, it bundles the python environment with the 'precompiled' pyc files. These are already
parsed into python byte code but they aren't native speed executables | 0 | 2,575 | false | 0 | 1 | Does PY2EXE Compile a Python Code to run Faster? | 12,056,785 |
2 | 2 | 0 | 6 | 2 | 1 | 1.2 | 0 | I really like the PY2EXE module, it really helps me share scripts with other co-workers that are super easy for them to use.
My question is: when the PY2EXE module compiles the code into an executable, does the resulting executable run faster?
Thanks for any replies! | 0 | python,exe,py2exe | 2012-08-21T14:08:00.000 | 0 | 12,056,702 | py2exe just bundles the Python interpreter and all the needed libraries into the executable and a few library files. When you run the executable, it uses the bundled interpreter to run your script.
Since it doesn't actually generate native code, the speed of execution should be about the same, possibly slower because of the overhead of everything being packaged up. | 0 | 2,575 | true | 0 | 1 | Does PY2EXE Compile a Python Code to run Faster? | 12,056,778 |
1 | 5 | 0 | 0 | 53 | 0 | 0 | 0 | The gunicorn documentation talks about editing the config files, but I have no idea where it is.
Probably a simple answer :) I'm on Amazon Linux AMI. | 0 | python,flask,gunicorn | 2012-08-21T21:39:00.000 | 0 | 12,063,463 | I did this after reading the docs:
When deploying my app through Gunicorn, there is usually a file called Procfile.
Open this file and add --timeout 600.
Finally, my Procfile would look like:
web: gunicorn app:app --timeout 600 | 0 | 65,764 | false | 1 | 1 | Where is the Gunicorn config file? | 54,821,323 |
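Beyond Procfile flags: Gunicorn has no fixed config file location; you create one yourself (conventionally gunicorn.conf.py, a plain Python file) and point Gunicorn at it with -c. The values below are examples.

```python
# gunicorn.conf.py -- settings are plain Python variables
bind = "0.0.0.0:8000"
workers = 2
timeout = 600
```

You would then start the server with something like `gunicorn -c gunicorn.conf.py app:app`.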
1 | 1 | 1 | 2 | 0 | 0 | 1.2 | 0 | Is there any add-on which will automatically activate while uploading files into the Plone site? It should compress the files and then store them. These can be image files like CAD drawings or any other types. Irrespective of the file type, beyond a specific size they should get compressed and stored, rather than being manually compressed and stored. I am using Plone 4.1. I am aware of the CSS and JavaScript files which get compressed, but not of uploaded files. I am also aware of the 'image handling' in the 'Site Setup'. | 0 | python,plone | 2012-08-22T05:42:00.000 | 0 | 12,066,923 | As Maulwurfn says, there is no such add-on, but this would be fairly straightforward for an experienced developer to implement using a custom content type. You will want to be pretty sure that the specific file types you're hoping to store will actually benefit from compression (many modern file formats already include some compression, and so simply zipping them won't shrink them much).
Also, unless you implement something complex like a client-side Flash uploader with built-in compression, Plone can only compress files after they've been uploaded, not before, so if you're hoping to make uploads quicker for users, rather than to minimize storage space, you're facing a somewhat more difficult challenge. | 0 | 258 | true | 1 | 1 | Is there an add-on to auto compress files while uploading into Plone? | 12,079,431 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I'm building an Android IM chat app for fun. I can develop the Android stuff well, but I'm not so good with the networking side, so I am looking at using XMPP on AWS servers to run the actual IM side. I've looked at OpenFire and ejabberd, which I could use. Does anyone have any experience with them or know a better one? I'm mostly looking for sending direct IM between friends and group IM with friends. | 0 | python,ruby,amazon-ec2,amazon-web-services,xmpp | 2012-08-23T15:48:00.000 | 0 | 12,095,507 | As an employee of ProcessOne, the makers of ejabberd, I can tell you we run a lot of services over AWS, including mobile chat apps. We have industrialized our procedures.
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 1 | I'm building an Android IM chat app for fun. I can develop the Android stuff well, but I'm not so good with the networking side, so I am looking at using XMPP on AWS servers to run the actual IM side. I've looked at OpenFire and ejabberd, which I could use. Does anyone have any experience with them or know a better one? I'm mostly looking for sending direct IM between friends and group IM with friends. | 0 | python,ruby,amazon-ec2,amazon-web-services,xmpp | 2012-08-23T15:48:00.000 | 0 | 12,095,507 | Try exploring Amazon SQS (Simple Queuing Service). It might come in handy for your requirement. | 0 | 365 | false | 1 | 1 | Running XMPP on Amazon for a chat app | 12,095,743
1 | 3 | 1 | 3 | 2 | 1 | 1.2 | 0 | My question may be a little bit stupid, but I decided to ask advanced programmers like some of you. I want to make a "dynamic" C++ program. My idea is to compile it, and after compilation (maybe with a scripting language like Python) to somehow change the code of the program. I know you will tell me that after compilation I cannot change the code, but is there a way of doing that? Thank you!
modified, modify the sources, invoke the compiler to regenerate the DLL,
and reload the DLL. It's very, very heavy weight, and it only works if
the compiler is present on the machines where the code is to be run.
(Usually the case under Unix, rarely the case with Windows.)
Interpreted languages like Python are considerably more dynamic; Python
has a built-in function to execute a string as Python code, for example.
If you need dynamically modifiable code, I'd suggest embedding Python in
your application, and using it for the dynamic parts. | 0 | 315 | true | 0 | 1 | Using scripting language in C++ | 12,105,989 |
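The built-in referred to above is exec (a statement in Python 2, a function in Python 3); a tiny illustration:

```python
# exec runs a string as Python code, which is what makes the
# "dynamic parts" easy to modify at runtime.
snippet = "greeting = 'hello from dynamically supplied code'\nprint(greeting)"
exec(snippet)
```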
1 | 3 | 0 | 4 | 5 | 1 | 0.26052 | 0 | I'm doing TDD using Python and the unittest module. In NUnit you can Assert.Inconclusive("This test hasn't been written yet").
So far I haven't been able to find anything similar in Python to indicate that "These tests are just placeholders, I need to come back and actually put the code in them."
Is there a Pythonic pattern for this? | 0 | python,unit-testing,tdd | 2012-08-24T13:41:00.000 | 0 | 12,110,610 | I would not let them pass or show OK, because you will not easily find them again.
Maybe just let them fail with the reason ("not written yet"), which seems logical because you have a test that is not finished. | 0 | 984 | false | 0 | 1 | How should I indicate that a test hasn't been written yet in Python? | 12,110,670
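A sketch of that failing-placeholder pattern; self.skipTest(...) is the softer standard-library alternative if you would rather have these reported as skips than failures.

```python
import unittest


class TodoTests(unittest.TestCase):
    def test_new_feature(self):
        # Deliberately failing placeholder, per the answer above.
        self.fail("This test hasn't been written yet")
```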
2 | 4 | 0 | 2 | 8 | 0 | 0.099668 | 0 | I'm writing a linux application which uses PyQt4 for GUI and which will only be used during remote sessions (ssh -XY / vnc).
So sometimes it may occur that a user will forget to run ssh with X forwarding parameters, or X forwarding will be unavailable for some reason. In this case the application crashes badly (unfortunately, I am forced to use an old C++ library wrapped in Python, and it completely messes up the user's current session if the application crashes).
I cannot use something else so my idea is to check if X forwarding is available before loading that library. However I have no idea how to do that.
I usually use xclock to check if my session has X forwarding enabled, but using xclock sounds like a big workaround.
ADDED
If possible I would like to use another way than creating an empty PyQt window and catching an exception. | 0 | python,ssh,pyqt4,xserver | 2012-08-25T14:05:00.000 | 1 | 12,122,671 | Similar to your xclock solution, I like to run xdpyinfo and see if it returns an error. | 0 | 8,184 | false | 0 | 1 | How to determine from a python application if X server/X forwarding is running? | 12,123,998 |
2 | 4 | 0 | 8 | 8 | 0 | 1.2 | 0 | I'm writing a linux application which uses PyQt4 for GUI and which will only be used during remote sessions (ssh -XY / vnc).
So sometimes it may occur that a user will forget to run ssh with X forwarding parameters, or X forwarding will be unavailable for some reason. In this case the application crashes badly (unfortunately, I am forced to use an old C++ library wrapped in Python, and it completely messes up the user's current session if the application crashes).
I cannot use something else so my idea is to check if X forwarding is available before loading that library. However I have no idea how to do that.
I usually use xclock to check if my session has X forwarding enabled, but using xclock sounds like a big workaround.
ADDED
If possible I would like to use another way than creating an empty PyQt window and catching an exception. | 0 | python,ssh,pyqt4,xserver | 2012-08-25T14:05:00.000 | 1 | 12,122,671 | Check to see that the $DISPLAY environment variable is set - if they didn't use ssh -X, it will be empty (instead of containing something like localhost:10). | 0 | 8,184 | true | 0 | 1 | How to determine from a python application if X server/X forwarding is running? | 12,123,396 |
1 | 3 | 0 | 2 | 0 | 0 | 0.132549 | 0 | I have written a simple python script that runs as soon as a certain user on my linux system logs in. It asks for a password... however, the problem is they just exit out of the terminal or minimize it and continue using the computer. So basically it is a password authentication script. What I am curious about is how to make the python script stay up and not let them exit or do anything else until they have entered the correct password. Is there some module I need to import or some command that can pause the system functions until my python script is done?
Thanks
I am doing it just out of interest, and I know a lot could go wrong, but I think it would be a fun thing to do. It can even protect one specific system process. I am just curious how to pause the system and make the user complete the Python script before anything else. | 0 | python,linux,passwords | 2012-08-26T20:57:00.000 | 1 | 12,133,857 | You want the equivalent of a "modal" window, but this is not (directly) possible in a multiuser, multitasking environment.
The next best thing is to prevent the user from accessing the system. For example, if you create an invisible window as large as the display, that will intercept any mouse events, and whatever is "behind" will be unaccessible.
At that point you have the problem of preventing the user from using the keyboard to terminate the application, or to switch to another application, or to another virtual console (this last is maybe the most difficult). So you need to access and lock the keyboard, not only the "standard" keyboard but the low-level keys as well.
And to do this, your application needs to have administrative rights, and yet run in the user environment, which starts to look like a recipe for disaster unless you really know what you are doing.
What you want to do should be done through a Pluggable Authentication Module (PAM) that will integrate with your display manager. Maybe, you can find some PAM module that will "outsource" or "callback" some external program, i.e., your Python script. | 0 | 223 | false | 0 | 1 | pause system functionality until my python script is done | 12,134,029 |
2 | 3 | 0 | 3 | 0 | 0 | 0.197375 | 0 | I have written a simple python script that runs as soon as a certain user on my linux system logs in. It asks for a password... however the problem is they just exit out of the terminal or minimize it and continue using the computer. So basically it is a password authentication script. So what I am curious about is how to make the python script stay up and not let them exit or do anything else until they enter the correct password. Is there some module I need to import or some command that can pause the system functions until my python script is done?
Thanks
I am doing it just out of interest and I know a lot could go wrong but I think it would be a fun thing to do. It can even protect 1 specific system process. I am just curious how to pause the system and make the user do the python script before anything else. | 0 | python,linux,passwords | 2012-08-26T20:57:00.000 | 1 | 12,133,857 | There will always be a way for the user to get past your script.
Let's assume for a moment that you actually manage to block the X-server, without blocking input to your program (so the user can still enter the password). The user could just alt-f1 out of the X-server to a console and kill "your weird app". If you manage to block that too he could ssh to the box and kill your app.
There is most certainly no generic way to do something like this; this is what the login commands for the console and the session managers (like gdm) for the graphical display are for: they require a user to enter his password before giving him some form of interactive session. After that, why would you want yet another password to do the same thing? the system is designed to not let users use it without a password (or another form of authentication), but there is no API to let programs block the system whenever they feel like it. | 0 | 223 | false | 0 | 1 | pause system functionality until my python script is done | 12,134,000 |
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am running some shell test scripts from a python script under Windows. The shell scripts are testing the functionality of various modules.
The problem I faced is that some scripts can hang, so I added a timeout for each script. The timeout has a default value, but it can be changed by the bash script from a bash function (SetMaxTime), which I can modify.
When the default value is used I wait for that period of time in python and if the bash script is not done I will consider that test as failed due to timeout.
The problem is when the default value of the timeout is changed from bash. Is there a way to communicate with a bash script (run with mingw) from python?
NOTE: The scripts are run under Windows. | 0 | python,windows,bash,mingw | 2012-08-27T10:52:00.000 | 1 | 12,140,608 | Sure you can communicate between them, just read/write from a file or pair of files (one for Python to write to and the bash script to read from, and the other for the vice-versa situation). | 0 | 479 | true | 0 | 1 | Communicating with bash scripts from python | 12,141,142
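A rough sketch of the file-based exchange the answer suggests; the file name timeout.txt and the process_is_running() helper are hypothetical placeholders:

    import time

    DEFAULT_TIMEOUT = 60.0  # seconds

    def current_timeout(path='timeout.txt'):
        # Re-read the value on every poll so a change written by the
        # bash side (e.g. from SetMaxTime) is picked up immediately.
        try:
            with open(path) as f:
                return float(f.read().strip())
        except (IOError, ValueError):
            return DEFAULT_TIMEOUT

    start = time.time()
    while process_is_running():  # hypothetical check of the bash script
        if time.time() - start > current_timeout():
            print('test failed: timeout')
            break
        time.sleep(1)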
1 | 3 | 0 | 1 | 4 | 1 | 0.066568 | 0 | I am using the Pyramid framework for a large project and I find it messy to have all my tests in one tests.py file. So I have decided to create a directory that would contain files with my tests. The problem is, I have no idea how to tell Pyramid to run my tests from this directory.
I am running the tests using python setup.py test -q.
But this of course does not work after I have moved my tests into the tests directory. What should I do to make it work? | 0 | python,testing,pyramid | 2012-08-27T16:17:00.000 | 0 | 12,145,688 | I have finally found a way to do it. I just created a directory named tests, put my tests inside it, and created an empty __init__.py file. I needed to fix the relative imports, or it would make strange errors like:
AttributeError: 'module' object has no attribute 'tests'
I do not really understand what is going on, or what nosetest's role is here, but it works.
If someone is able to explain this in more depth, it would be nice. | 0 | 4,414 | false | 0 | 1 | Having tests in multiple files | 12,145,889
1 | 1 | 0 | 0 | 4 | 0 | 1.2 | 0 | I have a php application that executes Python scripts via exec() and cgi.
I have a number of pages that do this and while I know WSGI is the better way to go long-term, I'm wondering if for a small/medium amount of traffic this arrangement is acceptable.
I ask because a few posts mentioned that Apache has to spawn a new process for each instance of the Python interpreter which increases overhead, but I don't know how significant it is for a smaller project.
Thank you. | 0 | php,python,exec | 2012-08-27T22:18:00.000 | 1 | 12,150,405 | In case of CGI, the server starts a copy of the PHP interpreter every time it gets a request. PHP in turn starts a Python process, which is killed after exec(). There is a huge overhead in starting two processes and doing all the imports on every request.
In case of FastCGI or WSGI, the server keeps a couple of processes warmed up (the min and max number of running processes is configurable), so at the price of some memory you get rid of starting a new process every time. However, you still have to start/stop a Python process on every exec() call. If you can use a Python app without exec(), e.g. port the PHP part to Python, it would boost performance a lot.
But as you mentioned this is a small project, so the only important criterion is whether your current server can sustain the current load. | 0 | 266 | true | 0 | 1 | PHP Exec() and Python script scaleability | 12,151,910
3 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | I want to learn Python for Web Programming. As of now I work on PHP and want to try Python and its object-oriented features. I have basic knowledge of python and its syntax and data structures.
I want to start with making basic web pages - forms, file upload - and then move on to more dynamic websites using MySQL.
As of now I do not want to try Django or for that matter any other frameworks.
Is python with cgi and MySQLdb modules a good start?
Thanks | 0 | python,html,web,cgi,mysql-python | 2012-08-28T09:22:00.000 | 0 | 12,156,293 | If nothing else it will show you why you want to use a framework, should be a really valuable learning experience. I say go for it. | 0 | 350 | false | 1 | 1 | Python Web Programming - Not Using Django | 12,156,338 |
3 | 6 | 0 | 1 | 0 | 0 | 0.033321 | 0 | I want to learn Python for Web Programming. As of now I work on PHP and want to try Python and its object-oriented features. I have basic knowledge of python and its syntax and data structures.
I want to start with making basic web pages - forms, file upload - and then move on to more dynamic websites using MySQL.
As of now I do not want to try Django or for that matter any other frameworks.
Is python with cgi and MySQLdb modules a good start?
Thanks | 0 | python,html,web,cgi,mysql-python | 2012-08-28T09:22:00.000 | 0 | 12,156,293 | Having used both Flask and Django for a bit now, I must say that I much prefer Flask for most things. I would recommend giving it a try. Flask-Uploads and WTForms are two nice extensions for the Flask framework that make it easy to do the things you mentioned. Lots of other extensions available.
If you go on to work with a dynamic site attached to a database, Flask + SQLAlchemy make a very powerful combination. I much prefer the SQLAlchemy ORM to the Django model ORM. | 0 | 350 | false | 1 | 1 | Python Web Programming - Not Using Django | 12,160,352
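For reference, a minimal Flask app handling a form and a file upload might look roughly like this (paths and field names are made up):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route('/upload', methods=['GET', 'POST'])
    def upload():
        if request.method == 'POST':
            f = request.files['file']        # the uploaded file
            f.save('/tmp/' + f.filename)     # naive save; sanitize in real code
            return 'saved'
        return ('<form method="post" enctype="multipart/form-data">'
                '<input type="file" name="file"><input type="submit"></form>')

    if __name__ == '__main__':
        app.run(debug=True)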
3 | 6 | 0 | 2 | 0 | 0 | 0.066568 | 0 | I want to learn Python for Web Programming. As of now I work on PHP and want to try Python and its object-oriented features. I have basic knowledge of python and its syntax and data structures.
I want to start with making basic web pages - forms, file upload - and then move on to more dynamic websites using MySQL.
As of now I do not want to try Django or for that matter any other frameworks.
Is python with cgi and MySQLdb modules a good start?
Thanks | 0 | python,html,web,cgi,mysql-python | 2012-08-28T09:22:00.000 | 0 | 12,156,293 | I recommend Pyramid Framework! | 0 | 350 | false | 1 | 1 | Python Web Programming - Not Using Django | 12,268,540 |
2 | 2 | 0 | 2 | 0 | 0 | 1.2 | 0 | I'm looking for the fastest algorithm/package I could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python/C/C++/Java. Your help would be greatly appreciated! | 0 | c++,python,c,algorithm,matrix | 2012-08-28T14:12:00.000 | 0 | 12,161,182 | The manner to avoid trashing CPU caches greatly depends on how the matrix is stored/loaded/transmitted, a point that you did not address.
There are a few generic recommendations:
divide the problem into worker threads addressing contiguous rows per threads
increment pointers (in C) to traverse rows and keep the count on a per-thread basis
consolidate the per-thread results at the end of all worker threads.
If your matrix cells are made of bits (instead of bytes, ints, or arrays) then you can read words (either 4-byte or 8-byte on 32-bit/64-bit platforms) to speed up the count.
There are too many questions left unanswered in the problem description to give you any further guidance. | 1 | 766 | true | 0 | 1 | Computing the null space of a large matrix | 12,161,433 |
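A sketch of the per-thread row-counting scheme from the answer above; it assumes the matrix is a NumPy array, which the question does not state:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def count_nonzero_parallel(matrix, n_workers=4):
        # Contiguous row chunks per worker, with the per-chunk counts
        # consolidated at the end, as described in the answer.
        chunks = np.array_split(matrix, n_workers, axis=0)
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            return sum(pool.map(np.count_nonzero, chunks))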
2 | 2 | 0 | -1 | 0 | 0 | -0.099668 | 0 | I'm looking for the fastest algorithm/package I could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python/C/C++/Java. Your help would be greatly appreciated! | 0 | c++,python,c,algorithm,matrix | 2012-08-28T14:12:00.000 | 0 | 12,161,182 | In what kind of data structure is your matrix represented?
If you use an element list to represent the matrix, i.e. a "column, row, value" tuple for one matrix element, then the solution would be to just count the number of tuples (and subtract the matrix size) | 1 | 766 | false | 0 | 1 | Computing the null space of a large matrix | 12,161,500
1 | 1 | 0 | 20 | 21 | 0 | 1.2 | 0 | I have a gcc 4.3.3 toolchain for my embedded device but I have no python (and don't need it).
I am looking for a way to configure Boost.Build without Python (compilation and cross-compilation).
Is python mandatory ?
Must I compile every single part except boost-python? (I hope not.)
Thanks in advance.
What I did thanks to Jupiter
./bootstrap.sh --without-libraries=python
./b2
and I got
Component configuration:
- chrono : building
- context : building
- date_time : building
- exception : building
- filesystem : building
- graph : building
- graph_parallel : building
- iostreams : building
- locale : building
- math : building
- mpi : building
- program_options : building
- python : not building
- random : building
- regex : building
- serialization : building
- signals : building
- system : building
- test : building
- thread : building
- timer : building
- wave : building | 0 | python,boost,cross-compiling | 2012-08-28T15:39:00.000 | 0 | 12,162,793 | Look at --without-* bjam option e.g. --without-python | 0 | 7,027 | true | 0 | 1 | How to (cross-)compile boost WITHOUT python? | 12,168,033 |
1 | 2 | 0 | 4 | 1 | 0 | 1.2 | 0 | The problem that I am facing is that whenever I make changes to my Python code, like in the __init__.py or views.py files, they are not reflected on the server immediately. I am running the server using Apache+mod_wsgi, so the daemon process and virtual host are configured properly.
I find that I have to run setup.py each time for new changes to take place. Is this how Pyramid works, or am I missing something? | 0 | python,apache,mod-wsgi,pyramid | 2012-08-30T04:55:00.000 | 0 | 12,190,125 | It's usually a lot easier to use something other than mod_wsgi to develop your Python WSGI application (mod_wsgi captures stdout and stderr, which makes it tricky to use things like pdb).
The Pyramid scaffolding generates code that allows you to do something like "pserve development.ini" to start a server. If you use this instead of mod_wsgi to do your development, you can do "pserve development.ini --reload" and your changes to Python source will be reflected immediately.
This doesn't mean you can't use mod_wsgi to serve your application in production. After you get done developing, you can then put your application into mod_wsgi for its productiony goodness. | 0 | 1,229 | true | 1 | 1 | File changes not reflecting immediately | 12,203,642 |
1 | 1 | 0 | 3 | 0 | 0 | 1.2 | 0 | Let us have some simple page that allows logged users to edit articles. Imagine following situation:
User Bob is logged into the system and is editing a long article. As it takes really long to edit such an article, his authentication expires. After that, he clicks the submit button and, because of the expired authentication, he is redirected to the login page.
It is really desirable to finish the action (saving the article) after his successful login. So we shall restore the request that was made while Bob was unauthenticated and repeat it now, after successful login. How could this be done with Pyramid? | 0 | python,login,pyramid | 2012-08-30T12:04:00.000 | 0 | 12,196,442 | There are three parts you need:
The page that handles the authenticated form submission should check to see if the request is properly authenticated and perform the action; but if it isn't, store all of the data in a server-side session and redirect the user to a login page.
The login page should look for a "was trying to do X" sort of query param (eg, ...?fromurl=/post/a/comment). After the user successfully logs in, the login page should redirect the user to that page instead of the site's front page.
The url the user was redirected to should be the same form they used to originally fill out the unauthenticated request. In this case, though, the server should recognize that there are field values stored in the server side session for this user; and so it should populate all of the form fields with those values. The user could then hit submit immediately and complete the post. This could work in a similar way that fields are repopulated when a request contains some invalid form values.
It's important that step 3 should not perform the post directly; the original data and request came from a user who was not authenticated. (A sketch of steps 1 and 2 follows this answer.) | 0 | 227 | true | 1 | 1 | Request restoration after login in pyramid | 12,196,884
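A rough Pyramid-flavoured sketch of steps 1 and 2; the fromurl parameter comes from the answer, while the view names and session key are assumptions:

    from pyramid.httpexceptions import HTTPFound

    def save_article(request):
        if not request.authenticated_userid:      # step 1
            request.session['pending_post'] = dict(request.POST)
            return HTTPFound('/login?fromurl=%s' % request.path)
        # ... request is authenticated: perform the actual save ...

    def login(request):
        # ... after a successful login (step 2) ...
        return HTTPFound(request.params.get('fromurl', '/'))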
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I am running a web app on python2.7 with mod_wsgi/apache. Everything is fine but I can't find any .pyc files. Do they not get generated with mod_wsgi? | 0 | python,django,mod-wsgi,pyc | 2012-08-30T19:46:00.000 | 0 | 12,204,330 | By default Apache probably doesn't have any write access to your Django app directory, which is a good thing security-wise.
Now Python will byte-compile your code once on every Apache restart and then cache it in memory.
As it is a long-lived process, this is OK.
Note: if you really, really want to have those .pyc files, give your Apache user write access to the source directory.
Note 2: This can create a lot of confusion when you start a test instance with manage.py that is shared with Apache, as this will create those .pyc files as root and will keep them if you then run Apache, despite a source code change. | 0 | 1,616 | true | 1 | 1 | Cannot find .pyc files for django/apache/mod_wsgi | 12,204,524
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I want to run a java program on a remote machine and intercept its logs. I also want to be able to know if the program has completed execution, and whether execution was successful or was halted due to an error.
Is there any ready-made java library available for this purpose? Also, I would like to be able to use this program for obtaining logs/execution completion for remote programs in different languages, like Java/Ruby/Python, etc. | 0 | java,python,ruby,remote-debugging | 2012-08-30T23:10:00.000 | 1 | 12,206,879 | If you're only looking to determine when it has completed (and not looking to really capture all the output, as in your other question) you can simply check for the existence of the process id and, when you fail to find the process id, phone home. You really don't need the logs for that. | 0 | 142 | false | 1 | 1 | java- how to code a process to intercept the output streams of program running on remote machine/know when remote program has halted/completed | 12,206,913
2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I am working on C++ programming with perforce (a version control tool) on VMS.
I need to handle tens or even hundreds of C++ files (managed by perforce) on VMS.
I am familiar with Linux, python but not DCL (a script language) on VMS.
I need to find a way to make programming/debug/code-review as easy as possible.
I prefer using python and kscope (a kde based file search/code-review GUI tool that can generate call graph) or similar tools on VMS.
I do not have sys-adm authorization, so I prefer some code-review GUI tools that can be installed without the authorization.
Would you please give me some suggestions about how to do code review/debugging/programming/compiling/testing with Python on VMS, while using kscope or similar large-scale file-management tools for code review?
Any help will really be appreciated.
Thanks | 0 | c++,python,linux,perforce,vms | 2012-08-31T15:08:00.000 | 0 | 12,218,088 | Indeed, it's not clear from your question what sort of programming you want to do on VMS: C++ or python??
Assuming your first goal is to get familiar with the code-base, i.e. you want the ease of cross-ref'ing the sources:
If you have Perforce server running on VMS, then you may try to connect to it directly with Linux Perforce client. And do "review" locally on Linux.
If you've no Linux client, I'd try fetching the latest revisions off and importing the raw files into an external repository (svn, git, fossil etc.), then again using a Linux client and tools.
If your ultimate goal is to do all development off VMS, then it may not really be viable -- the code may use VMS specific includes, system/RMS calls, data structs. And sync'ing the changes back and forth to VMS will get messy.
From my experience, once you're familiar with the code-base, it's a lot more effective to make the code-changes directly on VMS using whatever editor is available (EDIT/TPU, EDT, LSE, emacs or vim ports etc.).
As for debugging - VMS native debugger supports X-GUI as well as command-line. Check your build system for debug build, or use /NOOPT /DEBUG compile and /DEBUG link qualifiers.
BTW, have a look into DECset, if installed on your VMS system. | 0 | 281 | false | 0 | 1 | How to do code-review/debug/coding/test/version-control for C++ on perforce and VMS | 13,205,270 |
2 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I am working on C++ programming with perforce (a version control tool) on VMS.
I need to handle tens or even hundreds of C++ files (managed by perforce) on VMS.
I am familiar with Linux, python but not DCL (a script language) on VMS.
I need to find a way to make programming/debug/code-review as easy as possible.
I prefer using python and kscope (a kde based file search/code-review GUI tool that can generate call graph) or similar tools on VMS.
I do not have sys-adm authorization, so I prefer some code-review GUI tools that can be installed without the authorization.
Would you please give me some suggestions about how to do code review/debugging/programming/compiling/testing with Python on VMS, while using kscope or similar large-scale file-management tools for code review?
Any help will really be appreciated.
Thanks | 0 | c++,python,linux,perforce,vms | 2012-08-31T15:08:00.000 | 0 | 12,218,088 | Your question is pretty broad so it's tough to give a specific answer.
It sounds like you have big goals in mind which is good, but since you are on VMS, you won't have a whole lot of tools at your disposal. It's unlikely that kscope works on VMS. Correct me if I'm wrong. I believe a semi-recent version of python is functional there.
I would recommend starting off with the basics. Get a basic build system working that lets you build in release and debug. Consider starting with either MMS (an HP provided make like tool) or GNU make. You should also spend some time making sure that your VMS based Perforce client is working too. There are some quirks that may or may not have been fixed by the nice folks at Perforce.
If you have more specific issues in setting up GNU make (on VMS) or dealing with the Perforce client on VMS, do ask, but I'd recommend creating separate questions for those. | 0 | 281 | false | 0 | 1 | How to do code-review/debug/coding/test/version-control for C++ on perforce and VMS | 12,220,702 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I'm kinda new to scripting for IDA - nevertheless, I've written a complex script I need to debug, as it is not working properly.
It is composed of a few different files containing a few different classes.
Writing line-by-line in the commandline is not effective for obvious reasons.
Running a whole script from the File doesn't allow debugging.
Is there a way of using the idc, idautils, idaapi not from within IDA?
I've written the script on PyDev for Eclipse, I'm hoping for a way to run the scripts from within it.
A similar question is, can the api classes I have mentioned work on idb files without IDA having them loaded?
Thanks. | 0 | python,debugging,reverse-engineering,ida | 2012-09-01T15:36:00.000 | 0 | 12,229,101 | We've just got a notice from one of our users that the latest version of WingIDE supports debugging of IDAPython scripts. I think there are a couple of other programs using the same approach (import a module to do RPC debugging) that might work. | 0 | 2,173 | false | 0 | 1 | Debugging IDAPython Scripts outside of IDAPro | 12,314,987 |
1 | 3 | 0 | 0 | 4 | 0 | 0 | 0 | I am using phpseclib to ssh to my server and run a python script. The python script is an infinite loop, so it runs until you stop it. When I execute python script.py via ssh with phpseclib, it works, but the page just loads forever. It does this because phpseclib does not think it is "done" running the line of code that runs the infinite loop script, so it hangs on that line. I have tried using exit and die after that line, but of course, it didn't work because it hangs on the line before, the one that executes the command. Does anyone have any ideas on how I can fix this without modifying the python file? Thanks. | 0 | php,python | 2012-09-01T21:06:00.000 | 0 | 12,231,412 | If you put an & on the end of any shell command it will run in the background and return immediately, that's all you really need. | 0 | 486 | false | 0 | 1 | PHP "Cancel" line of code | 12,231,539
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am a die-hard fan of artificial intelligence and machine learning. I don't know much about them but I am ready to learn. I am currently a web programmer in PHP, and I am learning python/django for a website.
Now, as this AI field is very wide and there are countless algorithms, I don't know where to start.
But eventually my main target is to use algorithms like genetic algorithms, neural networks, and optimization, which can be programmed in a web application to show some stuff.
For example: recommendation of items on amazon.com.
Now what I want is that on my personal site I have a demo of each algorithm where, if I click run, I can show someone what the algorithm can do.
So can anyone please advise which algorithms I should study for web-based applications?
I see a lot of examples in the scikit Python library, but they are very calculation- and graph-based.
I don't think I can use them from a web point of view.
Any ideas how I should proceed? | 0 | python,web,machine-learning,artificial-intelligence | 2012-09-03T04:37:00.000 | 0 | 12,242,054 | I assume you are mostly concerned with a general approach to implementing AI in a web context, and not in the details of the AI algorithms themselves. Any computable algorithm can be implemented in any Turing-complete language (i.e. all modern programming languages). There are no special limitations on what you can do on the web; it's just a matter of representation, and keeping track of session-specific data and shared data. Also, there is no need to shy away from "calculation" and "graph based" algorithms; most AI algorithms will be either one or the other (or indeed both) - and that's part of the fun.
For example, as an overall approach for a neural net, you could (a minimal sketch follows this answer):
Implement a standard neural network using python classes
Possibly train the set with historical data
Load the state of the net on each request (i.e. from a pickle)
Feed a part of the request string (i.e. a product-ID) to the net, and output the result (i.e. a weighted set of other products, like "users who clicked this, also clicked this")
Also, store the relevant part of the request (i.e. the product-ID) in a session variable (i.e. "previousProduct"). When a new request (i.e. for another product) comes in from the same user, strengthen/create the connection between the first product and the next.
Save the state of the net between each request (i.e. back to pickle)
That's just one, very general example. But keep in mind - there is nothing special about web-programming in this context, except keeping track of session-specific data, and shared data. | 1 | 1,093 | true | 1 | 1 | What algorithms i can use from machine learning or Artificial intelligence which i can show via web site | 12,243,670 |
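A minimal sketch of the pickle load/update/save cycle from the list above; NeuralNet and its methods are placeholders, not a real library:

    import os
    import pickle

    STATE_FILE = 'net_state.pickle'   # hypothetical location

    def load_net():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE, 'rb') as f:
                return pickle.load(f)
        return NeuralNet()            # placeholder: your untrained network

    def handle_request(product_id, session):
        net = load_net()
        result = net.feed(product_id)             # placeholder method
        previous = session.get('previousProduct')
        if previous:
            net.strengthen(previous, product_id)  # placeholder method
        session['previousProduct'] = product_id
        with open(STATE_FILE, 'wb') as f:         # save state between requests
            pickle.dump(net, f)
        return result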
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have a system that sends different types of messages (HTTP, SMTP, POP, IMAP, and regular TCP) to different systems, and I need to queue all of those messages in my system in case the other systems are unavailable.
I'm a bit new to the message queueing concept, so I don't know the best Python library to go for.
Is django-celery (and the underlying components - RabbitMQ, MySQL, Django, Apache) the best choice for me? Will this library cover all my needs? | 0 | python,rabbitmq,message-queue,django-celery | 2012-09-03T19:23:00.000 | 0 | 12,253,063 | Try the Pika client or the Kombu client. Celery is a whole framework for job queues, which you may not need - but it's worth taking a look if you want to understand a queue use case. | 0 | 169 | true | 1 | 1 | Queueing HTTP, emails, and TCP messages in Python | 12,258,879
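A minimal Pika publishing sketch for reference (host, queue name and message body are placeholders):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='outgoing', durable=True)  # survive broker restarts
    channel.basic_publish(exchange='',
                          routing_key='outgoing',
                          body='{"type": "smtp", "payload": "..."}')
    connection.close()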
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | I am pretty new to Python. Currently I am trying out PyCharm and I am encountering some weird behavior that I can't explain when I run tests.
The project I am currently working on is located in a folder called PythonPlayground. This folder contains some subdirectories. Every folder contains an __init__.py file. Some of the folders contain nosetest tests.
When I run the tests with the nosetest runner from the command line inside the project directory, I have to put "PythonPlayground" in front of all my local imports. E.g. when importing the module called "model" in the folder "ui" I have to import it like this:
from PythonPlayground.ui.model import *
But when I run the tests from inside Pycharm, I have to remove the leading "PythonPlayground" again, otherwise the tests don't work. Like this:
from ui.model import *
I am also trying out the mock framework, and for some reason this framework always needs the complete name of the module (including "PythonPlayground"). It doesn't matter whether I run the tests from command line or from inside PyCharm:
with patch('PythonPlayground.ui.models.User') as mock:
Could somebody explain the difference in behavior to me? And what is the correct behavior? | 0 | python,pycharm,nose | 2012-09-04T09:56:00.000 | 0 | 12,260,983 | I think it happens because PyCharm has its own "copy" of the interpreter, which has its own version of the sys path, where your project's root is set one level lower than the PythonPlayground dir.
You can find the interpreter preferences in PyCharm for your project and set the proper top level.
PS: I have the same problems, but in Eclipse + PyDev | 0 | 514 | true | 0 | 1 | Nosetest & import | 12,261,280
1 | 5 | 0 | 1 | 4 | 0 | 0.039979 | 0 | My problem is the following:
I have an undirected graph. Each edge has a cost (or weight). One of the nodes is labelled S. I want to start from S and visit every node at least once. Visiting a node multiple times is allowed. Travelling along an edge multiple times is allowed, although that would make the solution more expensive -- travelling an edge with cost of 3 twice will add 6 to the cost of the total path. The graph has some "dead-end" nodes, so sometimes we have to travel an edge more than once.
What is a fast algorithm to do this? Is this a well known problem?
What I'm looking for:
Reasonably fast -- The relative size of the graph we are talking about here is order of 40 - 50 nodes. So the algorithm hopefully won't take longer than 10 - 15 seconds.
Reasonably optimal -- I'm not looking for absolute optimality. Any interesting heuristic to guide the search so that the solution will be near optimal is good enough.
I will be writing this in python. So if you know of any python implementation of the algorithm, that's best.
Thanks for any help. | 0 | python,search,artificial-intelligence,computer-science,heuristics | 2012-09-05T16:27:00.000 | 0 | 12,285,858 | A simple approach is to build the minimum spanning tree for your graph and do a (depth-first) walk over it, skipping nodes already visited.
This is proven to be no more than twice as long as the optimal TSP path. You can definitely do better with heuristics, but it's a starter (and easy to implement too). | 0 | 4,388 | false | 0 | 1 | What's a fast algorithm that can find a short path to traverse each node of a weighted undirected graph at least once? | 12,287,399 |
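A sketch of the MST-plus-depth-first-walk idea using networkx; it assumes edge costs are stored in the 'weight' attribute:

    import networkx as nx

    def visiting_order(G, start):
        # Minimum spanning tree, then a depth-first preorder walk from the
        # start node; every node appears once, already-visited ones are skipped.
        T = nx.minimum_spanning_tree(G, weight='weight')
        return list(nx.dfs_preorder_nodes(T, source=start))

Note that consecutive nodes in the returned order are not always adjacent, so the actual route between them would still be filled in with shortest paths.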
1 | 5 | 0 | 2 | 12 | 0 | 0.07983 | 0 | I have some class-based unit tests running in python's unittest2 framework. We're using Selenium WebDriver, which has a convenient save_screenshot() method. I'd like to grab a screenshot in tearDown() for every test failure, to reduce the time spent debugging why a test failed.
However, I can't find any way to run code on test failures only. tearDown() is called regardless of whether the test succeeds, and I don't want to clutter our filesystem with hundreds of browser screenshots for tests that succeeded.
How would you approach this? | 0 | python,unit-testing,selenium-webdriver | 2012-09-05T21:53:00.000 | 0 | 12,290,336 | Override fail() to generate the screenshot and then call TestCase.fail(self)? | 0 | 1,958 | false | 1 | 1 | How to execute code only on test failures with python unittest2? | 12,290,574 |
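A sketch of that fail() override; note it relies on unittest's assert methods routing through fail(), so assertion failures trigger a screenshot but unexpected exceptions (errors) do not. self.driver is assumed to be the WebDriver created in setUp():

    import unittest

    class ScreenshotTestCase(unittest.TestCase):
        def fail(self, msg=None):
            self.driver.save_screenshot('%s.png' % self._testMethodName)
            super(ScreenshotTestCase, self).fail(msg)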
1 | 2 | 0 | 2 | 2 | 0 | 1.2 | 0 | how can I debug python code in to the eclipse.if it will be done then we face less effort and fast do our work.can any one tell me??? | 0 | python,openerp | 2012-09-06T11:12:00.000 | 0 | 12,298,811 | To debug your Openerp+python code in eclipse, start eclipse in debug perspective and follow the given steps:
1: Stop your running OpenERP server by pressing "Ctrl+C".
2: In Eclipse go to the menu "Run/Debug Configurations". In the configuration window, under "Python Run", create a new debug configuration (double click on 'Python Run').
3: After creating new debug configuration follow the given steps:
3.1: In "Main" tab under "Project", select the "server" project or folder (in which Openerp Server resides) from your workspace.
3.2: Write location of 'openerp-server' under "Main Module".
Ex: ${workspace_loc:server/openerp-server}.
3.3: In "Arguments" tab under "Program Arguments", click on button "Variables" and new window will appear.
3.4: Then create new "Variable" by clicking on "Edit Variables" button and new window will appear.
3.5: Press on "New" button and give your addons path as value.
Ex: --addons ../addons,../your_module_path
3.6: Press Ok in all the opened windows and then "Apply".
4: Now into "PyDev Package Explorer" view go to 6.1/server and right click on "openerp-server" file, Select 'Debug As --> Python Run'.
5: Now in "Console" you can see your server has been started.
6: Now open your .py file which you want to debug and set a break-point.
7: Now start your module's form from 'gtk' or 'web-client', and execution will stop when it reaches the break-point.
8: Now enjoy debugging your code by pressing "F5, F6, F7", and you can see the values of your variables. | 0 | 3,051 | true | 0 | 1 | how can i debug openerp code in to the eclipse | 12,298,831
2 | 4 | 0 | 0 | 10 | 0 | 0 | 0 | I want to communicate with the phone via serial port. After writing some command to phone, I used ser.read(ser.inWaiting()) to get its return value, but I always got total 1020 bytes of characters, and actually, the desired returns is supposed to be over 50KB.
I have tried to set ser.read(50000), but the interpreter will hang.
How would I expand the input buffer to get all of the response at once? | 0 | python,buffer,pyserial | 2012-09-06T14:16:00.000 | 0 | 12,302,155 | pySerial uses the native OS drivers for serial receiving. In the case of Windows, the size of the input buffer is based on the device driver.
You may be able to increase the size in your Device Manager settings if it is possible, but ultimately you just need to read the data in fast enough. | 0 | 21,578 | false | 1 | 1 | How to expand input buffer size of pyserial | 45,513,398 |
2 | 4 | 0 | 1 | 10 | 0 | 0.049958 | 0 | I want to communicate with the phone via serial port. After writing some command to phone, I used ser.read(ser.inWaiting()) to get its return value, but I always got total 1020 bytes of characters, and actually, the desired returns is supposed to be over 50KB.
I have tried to set ser.read(50000), but the interpreter will hang.
How would I expand the input buffer to get all of the response at once? | 0 | python,buffer,pyserial | 2012-09-06T14:16:00.000 | 0 | 12,302,155 | I'm guessing that you are reading 1020 bytes because that is all there is in the buffer, which is what ser.inWaiting() is returning. Depending on the baud rate, 50 KB may take a while to transfer, or the phone is expecting something different from you. Handshaking?
Inspect the value of ser.inWaiting, and then the contents of what you are receiving for hints. | 0 | 21,578 | false | 1 | 1 | How to expand input buffer size of pyserial | 12,920,183 |
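A sketch of a drain loop that keeps reading until the device goes quiet, instead of trusting a single inWaiting() snapshot (port, baud rate and command are placeholders):

    import serial

    ser = serial.Serial('COM1', 115200, timeout=1)
    ser.write(b'AT\r\n')              # whatever command the phone expects

    response = b''
    while True:
        chunk = ser.read(ser.inWaiting() or 1)  # blocks up to the timeout
        if not chunk:
            break                     # quiet for a full timeout -> done
        response += chunk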
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I am unsure which of the two I should go for: Flup or mod_wsgi.
Flup seems to have very little documentation and even fewer people adding to the code. mod_wsgi, on the other hand, seems to be widely supported.
I just want to start running my web.py environment so that I can utilize Python scripts online. But this thing stops me from moving ahead. Any suggestions? | 0 | python,python-2.7,mod-wsgi,web.py,flup | 2012-09-06T17:14:00.000 | 0 | 12,305,146 | I use nginx and uWSGI to deploy my own web.py apps; it seems faster and consumes less RAM than Apache + mod_wsgi, though the setup is not as easy. I have to run supervisord to ensure that all uWSGI processes stay up.
Don't use flup; I think it's considered a bit of an outdated way of deploying Python web apps. | 0 | 813 | false | 0 | 1 | Python 2.7 with Webpy - flup or modwsgi? | 12,314,167
1 | 3 | 0 | 1 | 3 | 0 | 0.066568 | 0 | Here's a data flow:
http <--> nginx <--> uWSGI <--> python webapp
I guess there's http2uwsgi transfer in nginx, and uwsgi2http in uWSGI.
What if I want to directly call uWSGI to test an API in a webapp?
Actually I'm using Pyramid. I just configure [uwsgi] in the .ini and run uWSGI, but I want to test whether uWSGI serves the webapp normally; the uWSGI socket is not directly reachable by HTTP. | 0 | python,nginx,uwsgi | 2012-09-07T08:13:00.000 | 0 | 12,314,245 | First, consider these questions:
On which port is uWSGI running?
Is uWSGI running on your machine or on a remote machine?
If it's running on a remote machine, is the port accessible from your computer? (iptables rules might forbid external access)
If you made sure you have access, you can just call http://hostname:port/path/to/uWSGI for direct API access. | 0 | 5,693 | false | 0 | 1 | Can I use the uwsgi protocol to call http? | 12,314,652 |
1 | 11 | 0 | 18 | 47 | 1 | 1.2 | 0 | Is there a good way to check if a string is encoded in base64 using Python? | 0 | python,base64 | 2012-09-07T09:30:00.000 | 0 | 12,315,398 | This isn't possible. The best you could do would be to verify that a string might be valid Base 64, although many strings consisting of only ASCII text can be decoded as if they were Base 64. | 0 | 50,919 | true | 0 | 1 | Check if a string is encoded in base64 using Python | 12,317,005 |
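A sketch of the "might be valid" check from the answer (Python 3; validate=True rejects characters outside the Base64 alphabet):

    import base64
    import binascii

    def might_be_base64(s):
        try:
            base64.b64decode(s, validate=True)
            return True
        except binascii.Error:
            return False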
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 1 | I have read this example from AutobahnPython: https://github.com/tavendo/AutobahnPython/tree/master/examples/websocket/broadcast
It looks pretty easy to understand and practice. But I want to add a little more: members who submit the correct secret string can send messages; anyone else can only view the information transmitted. Any idea?
Thanks! | 0 | python,websocket,broadcasting,autobahn | 2012-09-07T15:39:00.000 | 0 | 12,321,301 | Well, it is purely your logic in the code. When you receive the message you are simply broadcasting it; what you have to do is pass it on to a custom function and do a check there:
Create a temporary array that contains the list of active authenticated users. When a user logs on, it should send this special string; match it, and if it is OK, add this user to the active user list array; if not, don't add it. Later, call the broadcast function, but rather than taking all online users, use this custom array instead.
That is all that you have to do.
Make sure that when someone logs out, you remove him from this array. (A sketch follows this answer.) | 0 | 1,310 | false | 0 | 1 | Python - Broadcasting with WebSocket using AutobahnPython | 14,835,403
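A sketch of that check; the protocol/factory names mimic the linked broadcast example but should be treated as assumptions:

    SECRET = 's3cret'          # placeholder shared secret
    senders = set()            # peers that sent the correct secret

    class AuthBroadcastProtocol(BroadcastServerProtocol):  # from the example
        def onMessage(self, msg, binary):
            if msg == SECRET:
                senders.add(self.peerstr)        # authenticated: may send
            elif self.peerstr in senders:
                self.factory.broadcast(msg)      # only senders get broadcast
            # everyone else stays read-only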
1 | 2 | 0 | 2 | 1 | 0 | 1.2 | 0 | I'd like to somehow pass a user password to the rsync_project() function (which is a wrapper for the regular rsync command) from the Fabric library.
I've found the --password-file=FILE option of the rsync command, which requires the password to be stored in FILE. This could somehow work, but I am looking for a better solution, as I have (temporarily) passwords stored as plain text in a database.
Please provide any suggestions on how I should work with it. | 0 | python,rsync,fabric | 2012-09-08T22:53:00.000 | 1 | 12,335,114 | If rsync using ssh as a remote shell transport is an option and you can set up public-key authentication for the users, that would provide you a secure way of doing the rsync without requiring passwords to be entered. | 0 | 1,763 | true | 0 | 1 | Putting password for fabric rsync_project() function | 12,335,299
1 | 2 | 0 | 19 | 19 | 0 | 1 | 0 | Is there a command in Eclipse Pydev which allows me to only run a few selected (highlighted) lines of code within a larger script?
If not, is it possible to run multiple lines of code in the PyDev console at once? | 0 | python,eclipse,pydev | 2012-09-08T23:53:00.000 | 1 | 12,335,424 | press CTRL+ALT+ENTER to send the selected lines to the interactive console | 0 | 11,174 | false | 0 | 1 | Eclipse Pydev: Run selected lines of code | 12,774,197 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | When installing mod_wsgi I get stuck after doing ./config
Apparently I am missing the apxs2
Here is the result:
checking for apxs2... no
checking for apxs... /usr/sbin/apxs
checking Apache version... 2.2.22
checking for python... /usr/bin/python
configure: creating ./config.status
config.status: creating Makefile
What I am not sure of now is how I get apxs2 working and installed. Any solution anyone? This is so that I can later on install Django and finally get a Python/Django environment up and running on my VPS. | 0 | python,python-2.7,mod-wsgi | 2012-09-09T18:12:00.000 | 0 | 12,341,610 | You have Apache 2.2 core package installed, but possibly have the devel package for Apache 1.3 instead of that for 2.2 installed. This isn't certain though, as for some Apache distributions, such as when compiled from source code, 'apxs' is still called 'apxs'. It is only certain Linux distros that have changed the name of 'apxs' in Apache 2.2 distros to be 'apxs2'. This is why the mod_wsgi configure script checks for 'apxs2' as well as 'apxs'.
So, do the actual make and see if that fails before assuming you have the wrong apxs. | 0 | 314 | false | 1 | 1 | Error when installing mod_wsgi | 12,444,455 |
2 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | When installing mod_wsgi I get stuck after doing ./config
Apparently I am missing the apxs2
Here is the result:
checking for apxs2... no
checking for apxs... /usr/sbin/apxs
checking Apache version... 2.2.22
checking for python... /usr/bin/python
configure: creating ./config.status
config.status: creating Makefile
What I am not sure of now is how I get apxs2 working and installed. Any solution anyone? This is so that I can later on install Django and finally get a Python/Django environment up and running on my VPS. | 0 | python,python-2.7,mod-wsgi | 2012-09-09T18:12:00.000 | 0 | 12,341,610 | checking for apxs... /usr/sbin/apxs
...
config.status: creating Makefile
It succeeded. Go on to the next step. | 0 | 314 | false | 1 | 1 | Error when installing mod_wsgi | 12,341,622 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have installed the psutil module.
It works well if run by the Python interpreter, but when I try to import the module in a monkeyrunner script, it gives
No such module.
Is there any way through which I can use the psutil module in monkeyrunner?
Note: I am using monkeyrunner with the Android ICS-x86 version | 0 | python-module,monkeyrunner | 2012-09-10T06:26:00.000 | 0 | 12,346,337 | Try to name your script something.py. This way you have a Python script where you can import the modules. When you run the script with monkeyrunner, some Python modules are not recognized. monkeyrunner does not equal Python 100%; it does not have all the power and functionality. | 0 | 258 | false | 0 | 1 | Monkeyrunner doesnt find my module | 12,348,061
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I'd like to test my WSGI library with gevent's WSGI Servers to ensure that request parameters aren't leaked/overwritten with those from another request/greenlet - in my library request is "global", though it should be thread-safe... which is what I'd like to test using gevent.
What approaches can I use? Are there any open-source projects which already have unittests which achieve this that I could learn from? | 0 | python,wsgi,gevent | 2012-09-10T09:09:00.000 | 0 | 12,348,500 | If your library uses threading.local to provide a thread-isolated "global" request variable, then all you need to do is call gevent.monkey.patch_thread BEFORE you use threading.local. That should turn all threading.local objects into "greenlet.local" ones. | 0 | 563 | false | 1 | 1 | How can I unittest wsgi code which uses gevent? | 15,670,806
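A minimal sketch of the ordering the answer describes - patch first, then create the threading.local - plus a tiny leakage check with greenlets:

    from gevent import monkey
    monkey.patch_thread()            # must run before threading.local is used

    import threading
    import gevent

    ctx = threading.local()          # now effectively greenlet-local

    def handler(value):
        ctx.request = value
        gevent.sleep(0)              # yield so other greenlets interleave
        assert ctx.request == value  # no leakage between greenlets

    gevent.joinall([gevent.spawn(handler, i) for i in range(10)])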
2 | 2 | 0 | 0 | 4 | 1 | 0 | 0 | My python script does some heavy computation. To boost performance, it caches the computed data on the disk so that next time I'll run it, it doesn't waste time in computing the same thing. However, before extracting data from the cache, it needs to do some checking to make sure that the cache is not stale. This is the part where I am stuck.
My first idea was to compare the creation time of the cache and the modification time of the Python script, and if the latter is larger (i.e. more recent) than the former, I would consider the cache stale, else not. However, since the Linux kernel does not store creation times of files, I am stuck at this point.
Similar situation:
When the Python interpreter creates .pyc files from .py files, it does something similar --> it creates a new .pyc file if I modify my .py file after the .pyc file was created, else it does not. How does it do that? I wish to know the algorithm. Thank you. | 0 | python,algorithm,caching | 2012-09-10T10:39:00.000 | 0 | 12,349,970 | You can have a metadata file that will hold a list of all cached entities together with their creation times
2 | 2 | 0 | 2 | 4 | 1 | 1.2 | 0 | My python script does some heavy computation. To boost performance, it caches the computed data on the disk so that next time I'll run it, it doesn't waste time in computing the same thing. However, before extracting data from the cache, it needs to do some checking to make sure that the cache is not stale. This is the part where I am stuck.
My first idea was to compare the creation time of the cache and the modification time of the Python script, and if the latter is larger (i.e. more recent) than the former, I would consider the cache stale, else not. However, since the Linux kernel does not store creation times of files, I am stuck at this point.
Similar situation:
When the Python interpreter creates .pyc files from .py files, it does something similar --> it creates a new .pyc file if I modify my .py file after the .pyc file was created, else it does not. How does it do that? I wish to know the algorithm. Thank you. | 0 | python,algorithm,caching | 2012-09-10T10:39:00.000 | 0 | 12,349,970 | Just check the last-modified time of your cache file instead.
Even better, that's what you really want to check in any case, because when you update your cache to store the new computed value, you want to know when that was done last, not when that was done the first time. :-) | 0 | 591 | true | 0 | 1 | Algorithm to check if cache is stale | 12,356,214 |
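A minimal sketch of that modification-time comparison (both paths are placeholders):

    import os

    def cache_is_stale(cache_path, script_path):
        if not os.path.exists(cache_path):
            return True
        # mtime is available on Linux even though creation time is not
        return os.path.getmtime(script_path) > os.path.getmtime(cache_path)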
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am having a hierarchy of directories and inside every directory there is one 'test/' directory which has all the test files. nosetests is not able to collect these test files somehow.
I have followed the naming convention for filenames and class names as well. All the classes defined in those files are subclasses of unittest.TestCase. Still no luck. What could be the problem? | 0 | python,nose,nosetests | 2012-09-11T09:48:00.000 | 0 | 12,367,009 | I am only answering my own question. It is really very strange. I found that the test files were previously in executable mode, and as soon as I changed their modes, it started working like a charm. :-) chmod -x *_test.py worked for me. Can anybody explain this behaviour of nosetests? | 0 | 379 | false | 0 | 1 | 'nosetests' was unable to collect test files from directory | 12,367,042
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am having a hierarchy of directories and inside every directory there is one 'test/' directory which has all the test files. nosetests is not able to collect these test files somehow.
I have followed the naming convention for filenames and class names as well. All the classes defined in those files are subclasses of unittest.TestCase. Still no luck. What could be the problem? | 0 | python,nose,nosetests | 2012-09-11T09:48:00.000 | 0 | 12,367,009 | If you look carefully at the python nose usage, you will get it:
--exe Look for tests in python modules that are executable.
Normal behavior is to exclude executable modules,
Thanks. | 0 | 379 | true | 0 | 1 | 'nosetests' was unable to collect test files from directory | 12,418,833 |
1 | 2 | 0 | 0 | 2 | 1 | 0 | 0 | Is it possible to determine the type of a file-like object in Python?
For instance, if I were to read the contents of a file into a StringIO container and store it in a database, could I later work out the original file-/content-/mime-type from the data? E.g. are there any common headers I could search for?
If not, are there any ways to determine "common" files (images, office docs, etc)? | 0 | python,mime-types,content-type | 2012-09-11T10:40:00.000 | 0 | 12,367,891 | Yes, you should evaluate the hex signature. | 0 | 584 | false | 0 | 1 | Determine file-/content-/mime-type from file-like? | 55,849,444 |
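A sketch of such a signature check; only a handful of well-known magic numbers are listed:

    SIGNATURES = {
        b'\x89PNG\r\n\x1a\n': 'image/png',
        b'\xff\xd8\xff': 'image/jpeg',
        b'GIF8': 'image/gif',
        b'%PDF': 'application/pdf',
        b'PK\x03\x04': 'application/zip',  # also docx/xlsx/odt containers
    }

    def sniff(fileobj):
        head = fileobj.read(8)
        fileobj.seek(0)
        for magic, mime in SIGNATURES.items():
            if head.startswith(magic):
                return mime
        return None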
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 1 | I am trying to download emails using imaplib with Python. I have tested the script using my own email account, but I am having trouble doing it for my corporate gmail (I don't know how the corporate gmail works, but I go to gmail.companyname.com to sign in). When I try running the script with imaplib.IMAP4_SSL("imap.gmail.companyname.com", 993), I get an error gaierror name or service not known. Does anybody know how to connect to my company gmail with imaplib? | 0 | python,gmail,gmail-imap,imaplib | 2012-09-11T17:43:00.000 | 0 | 12,375,113 | IMAP server is still imap.gmail.com -- try with that? | 0 | 321 | false | 0 | 1 | IMAP in Corporate Gmail | 12,375,120
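A minimal sketch: connect to the stock Gmail endpoint but log in with the corporate address (the credentials are placeholders):

    import imaplib

    conn = imaplib.IMAP4_SSL('imap.gmail.com', 993)
    conn.login('you@companyname.com', 'password')
    conn.select('INBOX')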
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have installed rabbitmq, use pika in python and rabbitmq-c in C for testing.
I have done nothing to rabbitmq except that I modified the listener port to my own.
The producer works the whole night to put enough messages into rabbitmq, about 1000K durable messages.
The consumer is written in both C and Python, but its qps is just 80 per queue.
Articles on the internet say that their single queue can reach 15000 qps, so what's wrong with mine? Do I need to configure some essential things in rabbitmq?
Each message is about 100 bytes long, I use consumer acks, and the queue and messages are both durable. | 0 | python,c,rabbitmq | 2012-09-12T07:36:00.000 | 0 | 12,383,270 | To get a good throughput one should monitor:
Flow control: memory-based - ensure alert levels are set correctly to avoid connections blocking; connection-based - check that publisher and consumer rates are appropriate to avoid flow control.
Setting appropriate QoS values for consumers. (A sketch follows this answer.) | 0 | 319 | false | 0 | 1 | My rabbitmq's qps is only 80 in piki 1000 in rabbitmq-c, what's wrong with it? | 20,113,247
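A sketch of the consumer-side QoS setting with pika (the prefetch value is arbitrary):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    # Let the broker push up to 100 unacked messages to this consumer
    # instead of trickling them one ack at a time.
    channel.basic_qos(prefetch_count=100)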
1 | 2 | 0 | 1 | 3 | 0 | 1.2 | 1 | I have a consumer which listens for messages; if the flow of messages is more than the consumer can handle, I want to start another instance of this consumer.
But I also want to be able to poll for information from the consumer(s); my thought was that I could use RPC to request this information from the producers by using a fanout exchange, so all the producers get the RPC call.
My question is, first of all, is this possible, and secondly, is it reasonable? | 0 | python,rabbitmq,messaging,pika | 2012-09-13T13:31:00.000 | 0 | 12,407,485 | After some research it seems that this is not possible. If you look at the tutorial on RabbitMQ.com, you see that there is an id for the call which, as far as I understand, gets consumed.
I've chosen to go another way, which is reading the log files and aggregating the data. | 0 | 2,271 | true | 0 | 1 | RPC calls to multiple consumers | 12,478,098
1 | 1 | 0 | 0 | 3 | 0 | 1.2 | 0 | I have to deploy a heavily JS based project to a embedded device. Its disk size is no more than 16Mb. The problem is the size of my minified js file all-classes.js is about 3Mb. If I compress it using gzip I get a 560k file which saves about 2.4M. Now I want to store all-classes.js as all-classes.js.gz so I can save space and it can be uncompressed by browser very well. All I have to do is handle the headers.
Now the question is: how do I include the .gz file so the browser understands and decompresses it? Well, I am aware that a .gz file contains file-structure information, while the browser accepts only raw gzipped data. In that case I would like to store the raw gzipped data. It'd be some sort of caching! | 0 | javascript,python,extjs,embedded | 2012-09-14T05:50:00.000 | 0 | 12,418,822 | What you need to do, when the "all-classes.js" file is requested, is return the content of "all-classes.js.gz" with the additional "Content-Encoding: gzip" HTTP header.
But it's only possible if the request contained the "Accept-Encoding: gzip" HTTP header in the first place... | 0 | 332 | true | 1 | 1 | Use compressed JavaScript file (not run time compression) | 12,436,279 |
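A minimal WSGI-style sketch of that header handling (the file paths are placeholders):

    def serve_js(environ, start_response):
        headers = [('Content-Type', 'application/javascript')]
        if 'gzip' in environ.get('HTTP_ACCEPT_ENCODING', ''):
            body = open('all-classes.js.gz', 'rb').read()
            headers.append(('Content-Encoding', 'gzip'))
        else:
            body = open('all-classes.js', 'rb').read()
        start_response('200 OK', headers)
        return [body]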
1 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | I am running python on linux and am currently using vim for my single-file programs, and gedit for multi-file programs. I have seen development environments like eclipse and was basically wondering if there's a similar thing on ubuntu designed for python. | 0 | python,ide,text-editor | 2012-09-14T13:17:00.000 | 1 | 12,425,407 | Komodo is a good commercial IDE. And Eric is a free Python IDE which is written in Python. | 0 | 210 | false | 0 | 1 | development environment for python projects | 12,425,441
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | I am using PyDev via eclipse and have used easy_install to get jsonpickle. No matter what I do I can't seem to get the import to work.
What I have tried thus far:
I have removed it from easy_install.pth and deleted the egg and installed again.
Add my python lib, dll, etc folders to a PYTHONPATH system variable
Restarted eclipse
Other imports are working fine. Not sure what I am doing wrong?
EDIT:
Sorry should have included OS / Python version.
OS: Windows 7
Python: 2.7
Any suggestions greatly appreciated | 0 | python,jsonpickle | 2012-09-14T13:20:00.000 | 0 | 12,425,433 | OS and python version?
Please use pip. Always.
pydev seems to ignore your package. It should be in /usr/share/pythonX.Y/site-packages/jsonpickle, or, if on Windows, c:\pythonxx[...].
If using Linux, please try to find a distro package for jsonpickle. | 0 | 800 | true | 0 | 1 | Can't get import to be recognized - jsonpickle | 12,426,079 |
1 | 2 | 0 | 4 | 2 | 0 | 0.379949 | 0 | Developing in Python using mod_python/mod_wsgi on Apache 2.
All running fine, but if I make any change to my .py file, the changes are not propagated until I restart Apache (/etc/init.d/apache2 restart).
This is annoying since I can't SSH in and restart the Apache service every time during development.
Is there any way to disable Apache caching?
Thank you. | 0 | python,apache,caching,wsgi | 2012-09-14T21:18:00.000 | 0 | 12,432,130 | It's a very bad setting from a performance point of view, but what I do in my httpd.conf is set MaxRequestsPerChild to 1. This has the effect that each Apache process handles a single request before dying. It kills throughput (so don't run benchmarks with that setting, or use it on a production site), but it has the effect of giving Python a clean environment for every request. | 0 | 1,461 | false | 1 | 1 | Disable caching in Apache 2 for Python Development | 12,432,255
1 | 1 | 0 | 1 | 4 | 0 | 0.197375 | 0 | From time to time I suddenly have a need to connect to a device's console via its serial port. The problem is, I never remember what port settings (baud rate, data bits, stop bits, etc...) to use with each particular device, and documentation never seems to be lying around when it's really needed.
I wrote a Python script, which uses a simple brute-force method (i.e. it iterates over all possible settings, sends some test input and displays the response for a human to decide if it makes sense), but:
it takes a long time to complete
does not always work (perhaps port reset/timeout issues)
just does not seem like a proper way to do this :)
So the question is: does anyone know of a procedure to auto-detect what port settings the remote device is using? | 0 | python,serial-port,communication | 2012-09-15T08:46:00.000 | 0 | 12,435,923 | Although part 1 is no direct answer to your question:
There are devices which just have an autodetection method (called auto-bauding) included; that means: send a character using your current settings (9k6, 115k2, ...) to the device, and chances are high that the device will answer with your (!) settings. I've seen this on HP switches.
Second approach: try to re-order the connection possibilities. E.g. chances are high that the other end uses 9k6 with no hardware handshake, but less likely that it uses 38k4 with software Xon/Xoff.
If you break down your tries into just a few, the "brute force" method will be much more efficient. | 0 | 2,135 | false | 1 | 1 | Detecting serial port settings | 12,436,940 |
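A sketch of that reordered brute force: probe the most likely settings first and let a human judge the replies (the candidate list and probe bytes are guesses):

    import serial

    CANDIDATES = [9600, 115200, 38400, 19200]   # most likely first

    def probe(port):
        for baud in CANDIDATES:
            ser = serial.Serial(port, baud, timeout=0.5)
            ser.write(b'\r\n')
            reply = ser.read(64)
            ser.close()
            print(baud, repr(reply))  # a human decides which looks sane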
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | Can this be done?
No idea if the Cython .so extension can be dynamically loaded from a PHP script, or whether it needs any extra management. | 0 | php,python,cython | 2012-09-17T15:13:00.000 | 0 | 12,462,227 | The short answer is no. Cython extensions use the Python C API, so they can't be loaded and called directly from PHP. They will typically take and return PyObject structs as arguments (Python objects). You'll need a Python <-> PHP binding to load the .so and do object conversion. | 0 | 320 | true | 0 | 1 | Use a Cython extension from a compiled python in php? | 12,462,519
2 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | What is the best 3D game engine for Python 3.x that is easy to install on Linux (Debian 7 "wheezy")?
Kivy (multiplatform)
PyGame (multiplatform)
Blender (ships a game engine scriptable in Python; multiplatform, also used for modeling)
PyOpenGL (multiplatform OpenGL bindings; lower level than a full engine like Blender)
These are some game engines I know. You might also want to try Unity3D. | 0 | 1,505 | false | 0 | 1 | Game Engine 3D to Python3.x? | 31,010,631
2 | 2 | 0 | 2 | 2 | 0 | 0.197375 | 0 | What is the best 3D game engine for Python 3.x that is easy to install on Linux (Debian 7 "wheezy")? | 0 | python,python-3.x,game-engine | 2012-09-19T02:46:00.000 | 1 | 12,487,889 | Not sure if it is the "best" - but, not working in the field, I am aware of few options other than Blender 3D's game engine. Blender moved to Python 3 scripting at version 2.5, so any newer version will use Python 3 for BGE (Blender Game Engine) scripts.
Pygame is also available for Python 3.x, and it does feature a somewhat low-level interface to OpenGL - so you could do 3D with it.
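For example, requesting an OpenGL context from Pygame is a one-liner (a sketch; the actual 3D drawing still has to be done with GL calls, e.g. via PyOpenGL):
import pygame

pygame.init()
# ask SDL for an OpenGL-capable window instead of a plain 2D surface
pygame.display.set_mode((640, 480), pygame.OPENGL | pygame.DOUBLEBUF)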
Neither should have any major problems running on Debian, but maybe you will have to configure some sort of PPA to get the packages installed for Python 3.
Also, make sure that your Debian's python3 is 3.2 - this distribution is known to ship surprisingly obsolete packages even when one is running the most recent release. | 0 | 1,505 | false | 0 | 1 | Game Engine 3D to Python3.x? | 12,488,430
1 | 1 | 1 | 0 | 0 | 0 | 1.2 | 0 | I want to execute Python scripts (that display a toast and a notification) on Android using SL4A. Can I show a toast message and a notification simultaneously? I'm using an emulator for testing. | 0 | android,python,sl4a | 2012-09-19T07:45:00.000 | 0 | 12,490,468 | Yes, it is possible to use a toast and a notification at the same time.
Although it may not be the best user experience, in my opinion.
A toast is a way to let the user know of something while he/she is looking at the screen, and it is low priority. It goes away after a short while.
A notification is a way to let the user know about something of higher priority than a toast. It may arrive at a time when the user's primary focus is not your app, or while the device is sleeping. The user can go to the notification drawer and see what's new with your app.
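If you do want both, the SL4A calls might look like this (a sketch; check your SL4A version's API reference for the exact notify() signature):
import android

droid = android.Android()
droid.makeToast('Hello from SL4A')               # transient, low priority
droid.notify('My script', 'Something happened')  # lands in the notification drawer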
In most use cases, one of them does the job. I'm not sure why you need both at the same time. Doesn't a single notification cut it? | 0 | 694 | true | 0 | 1 | Toast, notification simultaneously in Android | 12,490,713
1 | 2 | 0 | 0 | 4 | 1 | 0 | 0 | How would I be able to find which module is overriding the Python root logger?
My Django project imports from quite a few external packages, and I have tried searching for all instances of logging.basicConfig and logging.root setups; however, most of them are in tests and should not be overriding it unless specifically called.
Django's logging config does not specify a root logger. | 0 | python,logging | 2012-09-20T22:53:00.000 | 0 | 12,522,080 | I'm assuming you mean import logging imports a different logging module? In this case, there are many special attributes of modules/packages that can help, such as __path__. Printing logging.__path__ should tell you where Python is importing it from. | 0 | 205 | false | 1 | 1 | Finding out which module is setting the root logger | 12,523,536
I'm writing a Python 3 program that generates a text file that is post-processed with asciidoc for the final report in HTML and PDF.
The Python program generates thousands of files with graphics to be included in the final report. The filenames for the files are generated with tempfile.NamedTemporaryFile.
The problem is that the character set used by tempfile is defined as:
characters = "abcdefghijklmnopqrstuvwxyz0123456789_"
so I end up with files with names like "_6456_", and asciidoc interprets the "_" as formatting and inserts some HTML that breaks the report.
I need to either find a way to "escape" the filenames in asciidoc or control the characters in the temporary file.
My current solution is to rename the temporary file after I close it, replacing the "_" with some other character (one not in the list of characters used by tempfile, to avoid a collision), but I have the feeling that there is a better way to do it.
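Roughly, it looks like this (a simplified sketch; the suffix is illustrative):
import os
import tempfile

f = tempfile.NamedTemporaryFile(suffix='.png', delete=False)
# ... write the graphic to f ...
f.close()
safe = f.name.replace('_', '-')  # '-' is not in tempfile's alphabet, so no collision
os.rename(f.name, safe)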
I would appreciate any ideas. I'm not very proficient with Python yet; I think overloading _RandomNameSequence in tempfile would work, but I'm not sure how to do it.
regards. | 0 | python-3.x | 2012-09-21T00:38:00.000 | 0 | 12,522,844 | Maybe you could create a temporary directory (e.g. with tempfile.mkdtemp()) and generate the filenames manually, such as file1, file2, ..., filen. This way you easily avoid "_" characters, and you can just delete the temporary directory once you are finished. | 0 | 491 | false | 0 | 1 | change character set for tempfile.NamedTemporaryFile | 12,522,892
Is there any way to read a GSM modem's port number programmatically using Python when I connect a mobile to a Windows XP machine? | 0 | python,port,gsm | 2012-09-21T08:57:00.000 | 0 | 12,527,309 | Sorry, I do not know the Python syntax, just an idea to follow. You can use SerialPort.GetPortNames(); (the .NET call) to get the list of available ports on your system.
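In Python, a rough equivalent is pyserial's port listing (a sketch; the tuple layout below is that of pyserial 2.x):
import serial.tools.list_ports

for name, description, hwid in serial.tools.list_ports.comports():
    print name, description  # e.g. COM3 on Windows XP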
Then send an AT command to each port. Whichever port responds with an OK is the one your modem is connected to. | 0 | 894 | false | 0 | 1 | Programmatically read GSM modem port number | 12,527,528
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | It is a Django project. I am using PyDev 2.6. How do I make it use the Django test runner? | 0 | python,django,pydev | 2012-09-21T14:24:00.000 | 0 | 12,532,465 | The Django test runner can be accessed by creating a new (Run or Debug) configuration for your project using the Django template. Set your main module as manage.py and under the Arguments tab enter "test" (or any other manage.py arguments you need). | 0 | 234 | true | 1 | 1 | pydev with eclipse does create test database when running test | 12,535,217
1 | 1 | 0 | 0 | 4 | 0 | 0 | 0 | I'm writing an application to simulate train sounds. I have very short (0.2 s) audio samples for every speed of the train, and I need to be able to loop up to 20 of them (one for every train) simultaneously, without gaps.
Gapless switching between audio samples (as the train speed changes) is also a must-have.
I've been searching for possible Python audio solutions, including:
PyAudio
PyMedia
pyaudiere
but I'm not sure which one suits my use case best, so I would really appreciate any suggestions and experiences!
PS: I already tried GStreamer, but since the 1.0 release is not out yet and I can't figure out how to get gapless playback to work with PyGI, I thought there might be a better choice. I also tried pygame, but it seems to be limited to 8 audio channels? | 0 | python,audio,loops,playback | 2012-09-21T14:34:00.000 | 0 | 12,532,631 | I am using PyAudio for a lot of things and am quite happy with it. Whether it can do this I do not know, but I think it can.
One solution is to feed the sound buffer manually and control/set the needed latency. I have done this and it works quite well. If the latency is high enough, it will work.
Another solution, similar to this, is to manage the buffering yourself. You can queue up and/or mix your small sound files manually into e.g. 0.5-1 s chunks. This will greatly reduce the real-time requirements and allow you to do some pretty cool transitions between "speeds".
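A callback-based stream is one way to keep the device fed without gaps. Here is a sketch that loops a single raw, pre-decoded 16-bit mono clip (filename and rate are illustrative); mixing 20 trains would mean summing their samples inside the callback, and switching speed amounts to swapping SAMPLE at a wrap point:
import time
import pyaudio

SAMPLE = open('train_speed_1.raw', 'rb').read()  # hypothetical 0.2 s clip, 16-bit mono
pos = [0]                                        # mutable so the callback can update it

def callback(in_data, frame_count, time_info, status):
    need = frame_count * 2       # 2 bytes per 16-bit mono frame
    data = b''
    while len(data) < need:      # wrap around the clip boundary -> no gap
        chunk = SAMPLE[pos[0]:pos[0] + need - len(data)]
        data += chunk
        pos[0] = (pos[0] + len(chunk)) % len(SAMPLE)
    return (data, pyaudio.paContinue)

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=44100,
                output=True, stream_callback=callback, start=False)
stream.start_stream()
while stream.is_active():        # keep the main thread alive
    time.sleep(0.1)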
I do not know what sort of latency you can cope with, but if we are talking about train speeds, I guess they do not change instantaneously - hence a latency of 500 ms to several seconds is most likely acceptable. | 0 | 1,335 | false | 1 | 1 | Python Audio library for fast, gapless looping of many short audio tracks | 31,437,197
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I need to run a monkeyrunner script on a remote machine. I'm using Python to automate it and RPyC to connect to other machines; everything is running on CentOS.
Written below is the code that I used:
import rpyc
import subprocess

conn = rpyc.classic.connect("192.XXX.XXX.XXX", XXXXX)
conn.execute("print 'Hello'")
subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL /opt/android-sdk/tools/MYSCRIPT.py", shell=True)
and this is the result:
can't open specified script file
Usage : monkeyrunner [option] script_file
-s MonkeyServer IP Address
-p MonkeyServer TCP Port
-v MonkeyServer Logging level
And then I realized that if you use the command below, it runs the command on your own machine (example: if the command inside the Popen is "ls", the result it gives you is the list of files and directories in the current directory of the LOCALHOST); hence, the command is wrong.
subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL
/opt/android-sdk/tools/MYSCRIPT.py", shell=True)
and so I replaced the code with this
conn.modules.subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL /opt/android-sdk/tools/MYSCRIPT.py", shell=True)
And it gives me this error message:
======= Remote traceback =======
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py", line 300, in _dispatch_request
    res = self._HANDLERS[handler](self, *args)
  File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py", line 532, in _handle_call
    return self._local_objects[oid](*args, **dict(kwargs))
  File "/usr/lib/python2.4/subprocess.py", line 542, in __init__
    errread, errwrite)
  File "/usr/lib/python2.4/subprocess.py", line 975, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

======= Local exception ========
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/netref.py", line 196, in __call__
    return syncreq(_self, consts.HANDLE_CALL, args, kwargs)
  File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/netref.py", line 71, in syncreq
    return conn.sync_request(handler, oid, *args)
  File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py", line 438, in sync_request
    raise obj
OSError: [Errno 2] No such file or directory
I am thinking that it cannot run the file because I don't have administrator access (since I didn't supply the username and password of the remote machine)?
Help! | 0 | android,python,centos,monkeyrunner,rpyc | 2012-09-25T07:20:00.000 | 1 | 12,578,021 | Using this call to run monkeyrunner doesn't work, although running ls or pwd works fine:
conn.modules.subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL /opt/android-sdk/tools/MYSCRIPT.py", shell=True)
The chunk of code below solved my problem:
import rpyc

conn = rpyc.classic.connect("192.XXX.XXX.XXX", XXXXX)
conn.execute("print 'Hello'")
conn.modules.os.popen("monkeyrunner -v ALL MYSCRIPT.py")
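(Note that conn.modules.os is the os module of the remote interpreter, so the popen call executes on the remote machine - which also means monkeyrunner has to be on that machine's PATH.)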
Hope this helps to those who are experiencing the same problem as mine. | 0 | 1,078 | false | 0 | 1 | why is monkeyrunner not working when run from a remote machine? | 12,593,620 |
1 | 1 | 0 | 6 | 6 | 1 | 1.2 | 0 | When working on a project my scripts often have some boiler-plate code, like adding paths to sys.path and importing my project's modules. It gets tedious to run this boiler-plate code every time I start up the interactive interpreter to quickly check something, so I'm wondering if it's possible to pass a script to the interpreter that it will run before it becomes "interactive". | 0 | python,python-interactive | 2012-09-25T11:03:00.000 | 0 | 12,581,638 | That can be done using the -i option. Quoting the interpreter help text:
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
So the interpreter runs the script, then makes the interactive prompt available after execution.
Example:
$ python -i boilerplate.py
>>> print mymodule.__doc__
I'm a module!
>>>
This can also be done using the environment variable PYTHONSTARTUP. Example:
$ PYTHONSTARTUP=boilerplate.py python
Python 2.7.3 (default, Sep 4 2012, 10:30:34)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> print mymodule.__doc__
I'm a module!
>>>
I personally prefer the former method since it doesn't show the three lines of information, but either will get the job done. | 0 | 422 | true | 0 | 1 | Is it possible to get the Python Interactive Interpreter to run a script on load? | 12,581,642 |