Available Count | AnswerCount | GUI and Desktop Applications | Users Score | Q_Score | Python Basics and Environment | Score | Networking and APIs | Question | Database and SQL | Tags | CreationDate | System Administration and DevOps | Q_Id | Answer | Data Science and Machine Learning | ViewCount | is_accepted | Web Development | Other | Title | A_Id
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | I'm working with a framework and the source code is raising exceptions using the Exception class (and not a subclass, either framework-specific or from the stdlib) in a few places, which is not a good idea in my opinion.
The main argument against this idiom is that it forces the caller to use except Exception: which can catch more than what is meant, and therefore hide problems at lower stack levels.
However, a quick search in the Python documentation did not come up with arguments against this practice, and there are even examples of this in the tutorial (although things which are OK in Python scripts may not be OK at all in a Python framework in my opinion).
So is raise Exception considered pythonic? | 0 | python,coding-style | 2012-09-26T07:20:00.000 | 0 | 12,596,557 | No, it is not. At the very minimum the framework should provide its own exception class, and probably should have several (depending on the variety of things that could go wrong).
As you said, except Exception will catch way too much and is not good practice. | 0 | 89 | false | 0 | 1 | arguments for / against `raise Exception(message)` in Python | 12,596,686 |
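A minimal sketch of what "its own exception class" could look like — the names here are illustrative, not from the framework in question:

```python
# Hypothetical framework-specific exception hierarchy.
class FrameworkError(Exception):
    """Base class for all errors raised by the framework."""

class ConfigurationError(FrameworkError):
    """Raised when the framework is misconfigured."""

class TransportError(FrameworkError):
    """Raised when a backend connection fails."""

# Callers can now catch precisely what they mean:
try:
    raise ConfigurationError("missing setting")
except FrameworkError:
    pass  # catches framework errors only, not unrelated bugs
```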
1 | 1 | 0 | 2 | 3 | 0 | 0.379949 | 1 | What is the best way to communicate between a Python 3.x and a Python 2.x program?
We're writing a web app whose front end servers will be written in Python 3 (CherryPy + uWSGI) primarily because it is unicode heavy app and Python 3.x has a cleaner support for unicode.
But we need to use systems like Redis and Boto (AWS client) which don't yet have Python 3 support.
Hence we need to create a system in which we can communicate between Python 3.x and 2.x programs.
What do you think is the best way to do this? | 0 | python,python-3.x,python-2.x | 2012-09-26T08:15:00.000 | 0 | 12,597,394 | The best way? Write everything in Python 2.x. It's a simple question: can I do everything in Python 2.x? Yes! Can I do everything in Python 3.x? No. What's your problem then?
But if you really, really have to use two different Python versions (why not two different languages, for example?) then you will probably have to create two different servers (which will be clients at the same time) which will communicate via TCP/UDP or whatever protocol you want. This might actually be quite handy if you think about scaling the application in the future. Although let me warn you: it won't be easy at all. | 0 | 1,491 | false | 0 | 1 | communication between Python 3 and Python 2 | 12,599,590
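A hedged sketch of that server-to-server idea — a Python 2 process exposing Python-2-only libraries over a line-based JSON/TCP protocol. The port and handler are assumptions, not part of the question:

```python
# Python 2.x side of the bridge; a Python 3 front end connects with a
# plain socket and exchanges one JSON object per line.
import json
import SocketServer  # renamed "socketserver" in Python 3

class BridgeHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        request = json.loads(self.rfile.readline())
        # Dispatch to Python-2-only libraries (redis, boto, ...) here.
        response = {"echo": request}
        self.wfile.write(json.dumps(response) + "\n")

if __name__ == "__main__":
    SocketServer.TCPServer(("localhost", 9000), BridgeHandler).serve_forever()
```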
1 | 6 | 0 | 10 | 21 | 1 | 1 | 0 | Looking to improve quality of a fairly large Python project. I am happy with the types of warnings PyLint gives me. However, they are just too numerous and hard to enforce across a large organization. Also I believe that some code is more critical/sensitive than others with respect to where the next bug may come. For example I would like to spend more time validating a library method that is used by 100 modules rather than a script that was last touched 2 years ago and may not be used in production. Also it would be interesting to know modules that are frequently updated.
Is anyone familiar with tools for Python or otherwise that help with this type of analysis? | 0 | python,code-analysis | 2012-09-27T04:32:00.000 | 0 | 12,614,131 | I'm afraid you are mostly on your own.
If you have a decent set of tests, look at code coverage and dead code.
If you have a decent profiling setup, use that to get a glimpse of what's used more.
In the end, it seems you are more interested in fan-in/fan-out analysis. I'm not aware of any good tools for Python, primarily because static analysis is horribly unreliable against a dynamic language, and so far I haven't seen any statistical analysis tools.
I reckon that this information is sort of available in JIT compilers -- whatever (function, argument types) pairs are in the cache (i.e., compiled) are the ones used the most. Whether or not you can get this data out of e.g. PyPy, I really don't have a clue. | 0 | 1,381 | false | 0 | 1 | Identifying "sensitive" code in your application | 12,663,047
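For the profiling route, a minimal standard-library sketch; main() here is a stand-in for your real entry point:

```python
# Profile a run and list the most frequently called functions.
import cProfile
import pstats

cProfile.run("main()", "profile.out")  # "main()" is an assumed entry point
stats = pstats.Stats("profile.out")
stats.sort_stats("calls").print_stats(20)  # top 20 by call count
```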
1 | 2 | 0 | 0 | 2 | 1 | 0 | 0 | I have a Telit module which runs [Python 1.5.2+] (http://www.roundsolutions.com/techdocs/python/Easy_Script_Python_r13.pdf)!. There are certain restrictions in the number of variable, module and method names I can use (< 500), the size of each variable (16k) and amount of RAM (~ 1MB). Refer pg 113&114 for details. I would like to know how to get the number of symbols being generated, size in RAM of each variable, memory usage (stack and heap usage).
I need something similar to a map file that gets generated with gcc after the linking process which shows me each constant / variable, symbol, its address and size allocated. | 0 | python,symbols,decompiling | 2012-09-27T17:59:00.000 | 0 | 12,627,401 | This post makes me recall my pain once with Telit GM862-GPS modules. My code was exactly at the point that the number of variables, strings, etc added up to the limit. Of course, I didn't know this fact by then. I added one innocent line and my program did not work any more. It drove me really crazy for two days until I looked at the datasheet to find this fact.
What you are looking for might not have a good answer because the Python interpreter is not a full-fledged version. What I did was to reuse the same local variable names as much as possible. Also I deleted docstrings for functions (those count too) and replaced them with #comments.
In the end, I want to say that this module is good for small applications. The python interpreter does not support threads or interrupts so your program must be a super loop. When your application gets bigger, each iteration will take longer. Eventually, you might want to switch to a faster platform. | 0 | 403 | false | 0 | 1 | Counting number of symbols in Python script | 15,160,831 |
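As a rough aid for staying under the name limit, a hedged sketch (run on a desktop Python, not on the module itself) that counts distinct names in a script with the standard tokenize module; the filename is illustrative:

```python
# Count distinct identifiers in a source file.
import tokenize
from collections import Counter

names = Counter()
with open("sample.py") as f:
    for tok_type, tok_str, _, _, _ in tokenize.generate_tokens(f.readline):
        if tok_type == tokenize.NAME:
            names[tok_str] += 1

print("%d distinct names" % len(names))
print(names.most_common(10))
```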
1 | 3 | 0 | 3 | 17 | 0 | 0.197375 | 0 | I spent the last 3 hours trying to find out if it is possible to disable or build Python without the interactive mode, or how I can make the python executable smaller for linux.
As you can guess it's for an embedded device and after the cross compilation Python is approximately 1MB big and that is too much for me.
Now the questions:
Are there possibilities to shrink the Python executable? Maybe to disable the interactive mode (starting Python programs on the command line).
I looked for the configure options and tried some of them but it doesn't produce any change for my executable.
I compile it with optimized options from gcc and it's already stripped. | 0 | python,embedded | 2012-09-27T23:32:00.000 | 1 | 12,631,577 | There may be ways you can cram it down a little more just by configuring, but not much more.
Also, the actual interactive-mode code is pretty trivial, so I doubt you're going to save much there.
I'm sure there are more substantial features you're not using that you could hack out of the interpreter to get the size down. For example, you can probably throw out a big chunk of the parser and compiler and just deal with nothing but bytecode. The problem is that the only way to do that is to hack the interpreter source. (And it's not the most beautiful code in the world, so you're going to have to dedicate a good amount of time to learning your way around.) And you'll have to know what features you can actually hack out.
The only other real alternative would be to write a smaller interpreter for a Python-like language—e.g., by picking up the tinypy project. But from your comments, it doesn't sound as if "Python-like" is sufficient for you unless it's very close.
Well, I suppose there's one more alternative: Hack up a different, nicer Python implementation than CPython. The problem is that Jython and IronPython aren't native code (although maybe you can use a JVM->native compiler, or possibly cram enough of Jython into a J2ME JVM?), and PyPy really isn't ready for prime time on embedded systems. (Can you wait a couple years?) So, you're probably stuck with CPython. | 0 | 8,769 | false | 0 | 1 | Optimizing the size of embedded Python interpreter | 12,632,227 |
1 | 1 | 0 | 0 | 5 | 0 | 0 | 0 | I’m having a very strange issue with running a python CGI script in IIS.
The script is running in a custom application pool which uses a user account from the domain for identity. Impersonation is disabled for the site and Kerberos is used for authentication.
When the account is member of the “Domain Admins” group, everything works like a charm
When the account is not member of “Domain Admins”, I get an error on the very first line in the script: “import cgi”. It seems like that import eventually leads to a random number being generated and it’s the call to _urandom() which fails with a “WindowsError: [Error 5] Access is denied”.
If I run the same script from the command prompt, when logged in with the same user as the one from the application pool, everything works as a charm.
When searching the web I have found out that the _urandom on windows is backed by the CryptGenRandom function in the operating system. Somehow it seems like my python CGI script does not have access to that function when running from the IIS, while it has access to that function when run from a command prompt.
To complicate things further, when logging in as the account running the application pool and then invoking the CGI-script from the web browser it works. It turns out I have to be logged in with the same user as the application pool for it to work. As I previously stated, impersonation is disabled, but somehow it seems like the identity is somehow passed along to the security functions in windows.
If I modify the random.py file that calls the _urandom() function to just return a fixed number, everything works fine, but then I have probably broken a lot of the security functions in python.
So have anyone experienced anything like this? Any ideas of what is going on? | 0 | python,cgi,iis-7.5 | 2012-09-28T12:21:00.000 | 1 | 12,639,930 | I've solved the _urandom() error by changing IIS 7.5 settings to Impersonate User = yes. I'm not a Windows admin so I cannot elaborate.
Afterwards import cgi inside python script worked just fine. | 0 | 928 | false | 0 | 1 | Python CGI in IIS: issue with urandom function | 21,917,122 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | What is the best way to load a python module/file after the whole python program is up and running? My current idea is to save the new python file to disk and call import on it. I am working with python 2.7.
Each new python file will have pre-known functions, that will be called by the already running application. | 0 | python-2.7,runtime | 2012-09-29T11:52:00.000 | 0 | 12,652,475 | The import statement is like any other executable statement, and can be executed at any point during execution. | 0 | 23 | false | 0 | 1 | loading python after the application is up and running | 12,654,018 |
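A minimal Python 2.7 sketch of that; the module name and path are illustrative:

```python
# Load a freshly written .py file at runtime and call into it.
import imp

plugin = imp.load_source("plugin", "/path/to/plugin.py")  # assumed path
plugin.known_function()  # one of the pre-known functions
```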
2 | 2 | 0 | -2 | 0 | 0 | -0.197375 | 0 | Hi all~ I've recently become interested in embedded development, and as everyone knows, C is the most popular programming language there. But I prefer to use Python: is Python suited to any tasks in embedded development or automatic control? And are there some books about this worth recommending? Thanks! | 0 | python,embedded | 2012-09-29T13:20:00.000 | 0 | 12,653,026 | OOP is generally not suitable for embedded development. This is because embedded hardware is limited in memory and OOP is unpredictable with memory usage. It is possible, but you are forced into static objects and methods to have any kind of reliability. | 0 | 664 | false | 0 | 1 | Python on automatic control and embedded development | 12,653,117
2 | 2 | 0 | 5 | 0 | 0 | 0.462117 | 0 | Hi all~ I've recently become interested in embedded development, and as everyone knows, C is the most popular programming language there. But I prefer to use Python: is Python suited to any tasks in embedded development or automatic control? And are there some books about this worth recommending? Thanks! | 0 | python,embedded | 2012-09-29T13:20:00.000 | 0 | 12,653,026 | The reason C (and C++) are prevalent in embedded systems is that they are systems-level languages with minimal run-time environment requirements and can run stand-alone (bare metal), with a simple RTOS kernel, or within a complete OS environment. Both are also almost ubiquitous, being available for most 8, 16, 32 and 64 bit architectures. For example, you can write bootstrap and OS code in C or C++, whereas Python needs both of those already in place just to run.
Python on the other hand is an interpreted language (although it is possible to compile it, you would also need cross-compilation tools or an embedded target that could support self hosted development for that), and a significant amount of system level code (usually and OS) as well an the interpreter itself is required to support it. All this precludes for example deployment on very small systems where C and even C++ can deliver.
Moreover, Python would probably be unsuitable for hard real-time systems due to its intrinsically slower execution and non-deterministic behaviour with respect to memory management.
If your embedded system happened to be running Linux it would of course be possible to use Python, but the number of applications to which it was suited may be limited, and since Linux itself is somewhat resource hungry, you would probably not deploy it if the only reason was to be able to run Python. | 0 | 664 | false | 0 | 1 | Python on automatic control and embedded development | 12,654,265
2 | 2 | 0 | 5 | 3 | 0 | 1.2 | 0 | I have to develop a server that has to make a lot of connections to receive and send small files. The question is whether the performance gain from C++ is worth the time spent developing the code, or whether it is better to use Python and tune the code from time to time to speed it up. Maybe it's a little abstract as a question without giving a number of connections, but I don't really know: at least 10,000 connections/minute to update client status. | 0 | c++,python,performance,cpu-speed | 2012-09-29T20:11:00.000 | 0 | 12,656,098 | With that many connections, your server will be I/O bound. The frequently cited speed differences between languages like C and C++ and languages like Python and (say) Ruby lie in the interpreter and boxing overhead which slow down computation, not in the realm of I/O.
Not only can you make good and reasonable use of concurrency (both via processes and threads; the GIL is released during I/O and thus does not matter much for I/O-bound programs), there is also a wealth of asynchronous servers. In addition, web servers in general have much better Python integration (e.g. mod_wsgi for Apache) than C and C++. This frees you from writing your own server loop, socket management, etc., which you likely won't do as well as the major servers anyway. This is assuming we're talking about a web service, and not something more arcane which Apache etc. cannot do out of the box. | 0 | 3,596 | true | 0 | 1 | C++ vs Python server side performance | 12,656,127
2 | 2 | 0 | 2 | 3 | 0 | 0.197375 | 0 | I have to develop a server that has to make a lot of connections to receive and send small files. The question is whether the performance gain from C++ is worth the time spent developing the code, or whether it is better to use Python and tune the code from time to time to speed it up. Maybe it's a little abstract as a question without giving a number of connections, but I don't really know: at least 10,000 connections/minute to update client status. | 0 | c++,python,performance,cpu-speed | 2012-09-29T20:11:00.000 | 0 | 12,656,098 | I'd expect that the server time would be dominated by I/O: network, disk, etc. You'd want to prove that the CPU consumption of the Python program is problematic and that you've grasped all the low-hanging CPU fruit before considering a change. | 0 | 3,596 | false | 0 | 1 | C++ vs Python server side performance | 12,656,117
1 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | python blabla.py will execute. But ./blabla.py gives me an error of "no such file or directory" on CentOS6.3.
/usr/bin/env python does open up python properly.
I am new to linux and really would like to get this working. Could someone help?
Thanks in advance!
Note: thanks to all the fast replies!
I did have the #!/usr/bin/env python line at the beginning.
which python gives /usr/bin/python as an output.
And the chmod +x was done as well.
The exact error was "no such file or directory" for ./blabla.py, but python blabla.py runs fine. | 0 | python,centos | 2012-09-30T18:20:00.000 | 1 | 12,663,774 | Add #!/usr/bin/env python at the head of your script file.
It tells your system to search for the python interpreter and execute your script with it. | 0 | 2,334 | false | 0 | 1 | /usr/bin/env python opens up python, but ./blabla.py does not execute | 12,663,789
1 | 2 | 0 | 2 | 2 | 0 | 1.2 | 0 | I am using python2.7 and PDFminer for extracting text from pdf. I noticed that sometimes PDFminer gives me words with strange letters, but pdf viewers don't. Also for some pdf docs result returned by PDFminer and other pdf viewers are same (strange), but there are docs where pdf viewers can recognize text (copy-paste). Here is example of returned values:
from pdf viewer: فتــح بـــاب ا�ستيــراد البيــ�ض والدجــــاج المجمـــد
from PDFMiner: óªéªdG êÉ````LódGh ¢†``«ÑdG OGô``«à°SG ÜÉH í``àa
So my question is: can I get the same result as the pdf viewer, and what is wrong with PDFMiner? Is it missing encodings I don't know about? | 0 | python,pdf,encoding | 2012-10-01T14:41:00.000 | 0 | 12,675,471 | Yes.
This will happen when custom font encodings have been used (e.g. Identity-H, Identity-V, etc.) but the fonts have not been embedded properly.
PDFMiner gives garbage output in such cases because an encoding is required to interpret the text. | 0 | 1,267 | true | 0 | 1 | PDFminer gives strange letters | 13,703,110
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I need to write a program that will change bytes in a file at specific addresses. I can only use Python 2.2 (it's a game's module), so... I read once about mmap but I can't find it in python 2.2 | 0 | python,file,byte,python-2.2 | 2012-10-01T15:21:00.000 | 0 | 12,676,194 | Your best option is to manipulate the file directly; this will work regardless of Python version, i.e., 1.x, 2.x, 3.x. Here is some rough outline to get you started... if you do the actual pseudocode, it'll probably be pretty close if not exactly the correct Python:
open the file for 'r+b' (read/write; for POSIX systems, you can also just use 'r+')
go to the specific byte in question (use the file's seek() method)
write out the single byte you want changed (use a file's write() method)
close the file (use a file's close() method) | 0 | 191 | false | 0 | 1 | How to change byte on specific addres | 12,677,772 |
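Turned into actual code, a minimal sketch that works on Python 2.2 (which predates the with statement); the filename and offset are assumptions:

```python
# Overwrite a single byte at a fixed address.
f = open("data.bin", "r+b")
f.seek(0x10)        # move to the byte at address 0x10
f.write(chr(0xFF))  # overwrite it with one byte
f.close()
```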
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | How can I add custom fields (in this case, meta codes) to a dispatch_list in Tastypie? | 0 | python,django,api,rest,tastypie | 2012-10-02T02:08:00.000 | 0 | 12,683,630 | You can add a new field to the resource and dehydrate it with dehydrate_field_name(). | 0 | 883 | true | 1 | 1 | Tastypie: Add meta codes to dispatch_list | 12,686,954 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 1 | I'm writing up an IRC bot from scratch in Python and it's coming along fine.
One thing I can't seem to track down is how to get the bot to send a message to a user that is private (only viewable to them) but within a channel and not a separate PM/conversation.
I know that it must be there somewhere but I can't find it in the docs.
I don't need the full function, just the command keyword to invoke the action from the server (eg PRIVMSG).
Thanks folks. | 0 | python,protocols,irc | 2012-10-03T11:08:00.000 | 0 | 12,707,239 | Are you looking for /notice ? (see irchelp.org/irchelp/misc/ccosmos.html#Heading227) | 0 | 2,449 | true | 0 | 1 | IRC msg to send to server to send a "whisper" message to a user in channel | 12,721,513 |
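At the protocol level, /notice corresponds to the raw NOTICE command; a tiny sketch over a socket (server, nick and the assumption of an already-registered session are all illustrative):

```python
import socket

sock = socket.create_connection(("irc.example.net", 6667))  # assumed server
nick = "somebody"
# Encode to bytes on Python 3.
sock.send("NOTICE %s :%s\r\n" % (nick, "only you can see this"))
```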
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | I'm trying to track several keywords at once, with the following url:
https://stream.twitter.com/1.1/statuses/filter.json?track=twitter%2C%20whatever%2C%20streamingd%2C%20
But the stream only returns results for the first keyword?! What am I doing wrong? | 0 | python,twitter,urlencode | 2012-10-03T12:34:00.000 | 0 | 12,708,573 | Try without spaces (ie. the %20). Doh! | 0 | 163 | true | 0 | 1 | Twitter Public Stream URL when several track keywords? | 12,708,663 |
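In other words, the track value should be a plain comma-separated list with no spaces; a small sketch of building it safely with Python 2's urllib:

```python
import urllib

params = urllib.urlencode({"track": "twitter,whatever,streamingd"})
url = "https://stream.twitter.com/1.1/statuses/filter.json?" + params
# -> ...filter.json?track=twitter%2Cwhatever%2Cstreamingd  (no %20)
```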
1 | 2 | 0 | 1 | 4 | 0 | 0.099668 | 0 | I am using pyramid web framework to build a website. I keep getting this warning in chrome console:
Resource interpreted as Font but transferred with MIME type application/octet-stream: "http:static/images/fonts/font.woff".
How do I get rid of this warning message?
I have configured static files to be served using add_static_view
I can think of a way to do this by adding a subscriber function for responses that checks if the path ends in .woff and setting the response header to application/x-font-woff. But it does not look like a clean solution. Is there a way to tell Pyramid to do it through some setting. | 0 | python,pyramid | 2012-10-04T08:13:00.000 | 0 | 12,723,009 | Simply add this following code where your Pyramid web app gets initialized.
import mimetypes
mimetypes.add_type('application/x-font-woff', '.woff')
For instance, I have added it in my webapp.py file, which gets called the first time the server gets hit with a request. | 0 | 1,474 | false | 1 | 1 | How to set the content type header in response for a particular file type in Pyramid web framework | 26,917,124 |
1 | 1 | 0 | 3 | 6 | 1 | 1.2 | 0 | I am trying to clarify the concept of runtime dynamic binding and class inheritance in dynamic languages (Python, ruby) and static type languages (java, C++). I am not sure I am right or not.
In dynamic languages like Python and Ruby, runtime dynamic binding is implemented as duck typing. When the interpreter checks the type of an object, it checks whether the object has the specific method (or behaviour) rather than check the type of the object; and runtime dynamic binding does not mean class inheritence. Class inheritance just reduce code copy in Python and Ruby.
In static typed languages like Java and C++, runtime dynamic binding can be obtained only class inheritance. Class inheritance not only reduces code copy here, but is also used to implement runtime dynamic binding.
In summary, class inheritance and runtime dynamic binding are two different concepts. In Python and Ruby, they are totally different; in Java and C++ they are mixed together.
Am I right? | 0 | java,c++,python,ruby,compiler-construction | 2012-10-04T14:41:00.000 | 0 | 12,729,828 | You are correct in that runtime dynamic binding is entirely different conceptually from class inheritance.
But as I re-read your question, I don't think I would agree that "Java and C++, runtime dynamic binding is implemented as class inheritance." Class inheritance is simply the definition of broader behavior that includes existing behavior from existing classes. Further, runtime binding doesn't necessarily have anything to do with object orientation; it can refer merely to deferred method resolution.
Class inheritance refers to the "template" for how an object is built, with more and more refined behavior with successive subclasses. Runtime dynamic binding is merely a way of saying that a reference to a method (for example) is deferred until execution time. In a given language, a particular class may leverage runtime dynamic binding, but have inherited classes resolved at compile time.
In a nutshell, Inheritance refers to the definition or blueprint of an object. Runtime dynamic binding is, at its most basic level, merely a mechanism for resolving method calls at execution time.
EDIT I do need to clarify one point on this: Java implements dynamic binding on overridden class methods, while C++ determines a type through polymorphism at runtime, so it is not accurate for me to say that dynamic binding has "no relationship" to class inheritance. At a "macro" level, they're not inherently related, but a given language might leverage it in its inheritance mechanism. | 0 | 1,662 | true | 0 | 1 | Difference between runtime dynamic binding and class inheritance | 12,730,127 |
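A small Python sketch of the distinction — dynamic binding through duck typing, with no inheritance relationship at all:

```python
class Duck(object):
    def speak(self):
        return "quack"

class Robot(object):  # unrelated class, no common base
    def speak(self):
        return "beep"

for thing in (Duck(), Robot()):
    print(thing.speak())  # resolved at runtime purely by method name
```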
2 | 5 | 0 | 0 | 3 | 0 | 1.2 | 1 | I'm unit testing a URL fetcher, and I need a test url which always causes urllib2.urlopen() (Python) to time out. I've tried making a php page with just sleep(10000) in it, but that causes 500 internal server error.
How would I make a resource that causes a connection timeout in the client whenever it is requested? | 0 | php,python,apache,url,timeout | 2012-10-05T20:22:00.000 | 0 | 12,753,527 | While there have been some good answers here, I found that a simple php sleep() call with an override to Apache's timeout was all I needed.
I know that unit tests should be in isolation, but the server this endpoint is hosted on is not going anywhere. | 0 | 615 | true | 0 | 1 | How should I create a test resource which always times out | 12,941,867
2 | 5 | 0 | 1 | 3 | 0 | 0.039979 | 1 | I'm unit testing a URL fetcher, and I need a test url which always causes urllib2.urlopen() (Python) to time out. I've tried making a php page with just sleep(10000) in it, but that causes 500 internal server error.
How would I make a resource that causes a connection timeout in the client whenever it is requested? | 0 | php,python,apache,url,timeout | 2012-10-05T20:22:00.000 | 0 | 12,753,527 | Connection timeout? Use, for example, netcat. Listen on some port (nc -l), and then try to download data from that port: http://localhost:port/. It will open a connection which will never reply. | 0 | 615 | false | 0 | 1 | How should I create a test resource which always times out | 12,753,554
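The same trick as a Python sketch (the port number is an assumption): accept connections but never answer, so the client hits its read timeout:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 8080))
server.listen(5)
while True:
    conn, addr = server.accept()  # accept, then deliberately stay silent
```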
1 | 2 | 0 | 26 | 13 | 0 | 1.2 | 0 | I'm attempting to configure SQLAlchemy Alembic for my Pyramid project and I want to use my developement.ini (or production.ini) for the configuration settings for Alembic. Is it possible to specify the .ini file I wish to use anywhere within Alembic? | 0 | python,pyramid,alembic | 2012-10-06T05:18:00.000 | 0 | 12,756,976 | Just specify alembic -c /some/path/to/another.ini when running alembic commands. You could even put the [alembic] section in your development.ini and production.ini files and just alembic -c production.ini upgrade head. | 0 | 5,384 | true | 0 | 1 | Use different .ini file for alembic.ini | 12,757,266 |
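For reference, a hedged sketch of what that shared [alembic] section in development.ini/production.ini might look like; the paths and URL are illustrative:

```ini
[alembic]
script_location = myapp/alembic
sqlalchemy.url = postgresql://user:pass@localhost/myapp
```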
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | Sorry if my title is not correct. Below is the explanation of what i'm looking for.
I've coded a small GUI game (let's say a snake game) in python, and I want it to run on a Linux machine. I can run this program by just running the command "python snake.py" in the terminal.
However, I want to combine all my .py files into one file, and when I click on this file, it just runs my game. I don't want to go to the shell and type "python snake.py". I mean something like a manifest .jar in Java.
Could any one help me please? If my explanation is not good enough, please let me know. I'll give some more explanation. | 0 | linux,jar,python-2.7 | 2012-10-06T19:16:00.000 | 1 | 12,763,015 | If you only want it to run on a Linux machine, using Python eggs is the simplest way.
python snake.egg will try to execute the __main__.py inside the egg.
Python eggs are meant to be packages, and basically is a zip file with metadata files included. | 0 | 172 | false | 0 | 1 | How to make an executable for a python project | 12,763,086 |
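A minimal sketch of the entry point this relies on — a __main__.py at the top level of the zip/egg; the package name is an assumption:

```python
# __main__.py -- executed when you run "python snake.egg"
from snake import game  # assumes your code lives in a "snake" package

if __name__ == "__main__":
    game.run()
```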
2 | 3 | 0 | 12 | 9 | 0 | 1.2 | 0 | I'm using an event loop based server in twisted python that stores files, and I'd like to be able to classify the files according to their compressibility.
If the probability that they'd benefit from compression is high, they would go to a directory with btrfs compression switched on, otherwise they'd go elsewhere.
I do not need to be sure - 80% accuracy would be plenty, and would save a lot of diskspace. But since there is the CPU and fs performance issue too, I can not just save everything compressed.
The files are in the low megabytes. I can not test-compress them without using a huge chunk of CPU and unduly delaying the event loop or refactoring a compression algorithm to fit into the event loop.
Is there any best practice to give a quick estimate for compressibility? What I came up with is taking a small chunk (few kB) of data from the beginning of the file, test-compress it (with a presumably tolerable delay) and base my decision on that.
Any suggestions? Hints? Flaws in my reasoning and/or problem? | 0 | python,compression,twisted | 2012-10-07T15:04:00.000 | 0 | 12,769,933 | Just 10K from the middle of the file will do the trick. You don't want the beginning or the end, since they may contain header or trailer information that is not representative of the rest of the file. 10K is enough to get some amount of compression with any typical algorithm. That will predict a relative amount of compression for the whole file, to the extent that that middle 10K is representative. The absolute ratio you get will not be the same as for the whole file, but the amount that it differs from no compression will allow you to set a threshold. Just experiment with many files to see where to set the threshold.
As noted, you can save time by doing nothing for files that are obviously already compressed, e.g. .png, .jpg, .mov, .pdf, .zip, etc.
Measuring entropy is not necessarily a good indicator, since it only gives the zeroth-order estimate of compressibility. If the entropy indicates that it is compressible enough, then it is right. If the entropy indicates that it is not compressible enough, then it may or may not be right. Your actual compressor is a much better estimator of compressibility. Running it on 10K won't take long. | 0 | 3,128 | true | 0 | 1 | How can I estimate the compressibility of a file without compressing it? | 12,770,967 |
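A minimal sketch of that middle-of-the-file test with zlib; the sample size and threshold are assumptions you would tune empirically:

```python
import os
import zlib

def probably_compressible(path, sample_size=10 * 1024, threshold=0.9):
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        f.seek(max(0, (size - sample_size) // 2))  # jump to the middle
        sample = f.read(sample_size)
    if not sample:
        return False
    ratio = float(len(zlib.compress(sample))) / len(sample)
    return ratio < threshold  # noticeably smaller => worth compressing
```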
2 | 3 | 0 | 5 | 9 | 0 | 0.321513 | 0 | I'm using an event loop based server in twisted python that stores files, and I'd like to be able to classify the files according to their compressibility.
If the probability that they'd benefit from compression is high, they would go to a directory with btrfs compression switched on, otherwise they'd go elsewhere.
I do not need to be sure - 80% accuracy would be plenty, and would save a lot of diskspace. But since there is the CPU and fs performance issue too, I can not just save everything compressed.
The files are in the low megabytes. I can not test-compress them without using a huge chunk of CPU and unduly delaying the event loop or refactoring a compression algorithm to fit into the event loop.
Is there any best practice to give a quick estimate for compressibility? What I came up with is taking a small chunk (few kB) of data from the beginning of the file, test-compress it (with a presumably tolerable delay) and base my decision on that.
Any suggestions? Hints? Flaws in my reasoning and/or problem? | 0 | python,compression,twisted | 2012-10-07T15:04:00.000 | 0 | 12,769,933 | Compressed files usually don't compress well. This means that just about any media file is not going to compress very well, since most media formats already include compression. Clearly there are exceptions to this, such as BMP and TIFF images, but you can probably build a whitelist of well-compressed filetypes (PNGs, MPEGs, and venturing away from visual media - gzip, bzip2, etc) to skip and then assume the rest of the files you encounter will compress well.
If you feel like getting fancy, you could build feedback into the system (observe the results of any compression you do and associate the resulting ratio with the filetype). If you come across a filetype that has consistently poor compression, you could add it to the whitelist.
These ideas depend on being able to identify a file's type, but there are standard utilities which do a pretty good job of this (generally much better than 80%) - file(1), /etc/mime.types, etc. | 0 | 3,128 | false | 0 | 1 | How can I estimate the compressibility of a file without compressing it? | 12,770,116 |
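A sketch of the whitelist idea by extension; the set is illustrative and would grow with the feedback loop described above:

```python
import os.path

ALREADY_COMPRESSED = {".png", ".jpg", ".gif", ".mp3", ".mp4",
                      ".zip", ".gz", ".bz2", ".7z", ".pdf"}

def skip_compression(filename):
    return os.path.splitext(filename)[1].lower() in ALREADY_COMPRESSED
```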
1 | 1 | 0 | 4 | 2 | 1 | 0.664037 | 0 | I need to work with m2crypto library. How can I import it to my .py file? I use Eclipse. | 0 | python,m2crypto | 2012-10-08T16:48:00.000 | 0 | 12,785,963 | Usually just doing import M2Crypto is sufficient (note the capitalization of the module name).
you may need to easy_install m2crypto first or maybe even pip install m2crypto
If you are on windows you may need the Visual Studio DLL to compile it | 0 | 17,398 | false | 0 | 1 | How to import m2crypto library into python | 12,786,065 |
2 | 2 | 0 | 2 | 3 | 1 | 1.2 | 0 | I have a command-line python script that uses a configuration file. I'm planning to put this on pypi soon.
What is the best general approach for including a default version of the configuration file in the package, so that it is obvious to the end-user where to find it?
One example of a pypi project which includes user-editable config files is Django. In Django, the user has to run a script to initialize a new project. This generates a directory with a bunch of stuff, including the project configuration file. However, this seems like a heavy approach for a simple command line utility like mine.
Another option is requiring the user to specify the location of the config file as a command line arg. I guess this is okay, but it puts the onus on the user to go to the documentation and create the entire config file from scratch.
Is there any better option? Is there any standard practice for this?
Thanks!
-Travis | 0 | python,pypi | 2012-10-09T00:27:00.000 | 0 | 12,791,275 | You could include the defaults as part of your script and then allow the user to change the defaults with either command line arguments or a config file in the user's home directory.
I don't think the Django approach would work unless you have the concept of a project.
If this is on Unix I would either put the config file in /etc if the script will be run by more than one user or in the user's home folder as a dotfile. This way the user does not have to specify the config file each time, though you could also have a command line argument that lets the user specify a different config file to use. | 0 | 426 | true | 0 | 1 | Best way to include a user-editable config file in a pypi package? | 12,792,669 |
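A hedged sketch of that defaults-plus-dotfile approach; the section, option and file names are assumptions:

```python
import os
from ConfigParser import SafeConfigParser  # "configparser" in Python 3

parser = SafeConfigParser({"host": "localhost", "port": "8080"})  # defaults
parser.add_section("main")
parser.read(["/etc/mytool.conf", os.path.expanduser("~/.mytoolrc")])
host = parser.get("main", "host")  # user file wins, else the default
```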
2 | 2 | 0 | 0 | 3 | 1 | 0 | 0 | I have a command-line python script that uses a configuration file. I'm planning to put this on pypi soon.
What is the best general approach for including a default version of the configuration file in the package, so that it is obvious to the end-user where to find it?
One example of a pypi project which includes user-editable config files is Django. In Django, the user has to run a script to initialize a new project. This generates a directory with a bunch of stuff, including the project configuration file. However, this seems like a heavy approach for a simple command line utility like mine.
Another option is requiring the user to specify the location of the config file as a command line arg. I guess this is okay, but it puts the onus on the user to go to the documentation and create the entire config file from scratch.
Is there any better option? Is there any standard practice for this?
Thanks!
-Travis | 0 | python,pypi | 2012-10-09T00:27:00.000 | 0 | 12,791,275 | I like Nathan's answer. But in this specific case I wound up adding a command-line option to the script that would dump an example config file to standard out. | 0 | 426 | false | 0 | 1 | Best way to include a user-editable config file in a pypi package? | 14,864,557 |
1 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 0 | I am using Python Nose and would like to print the type of the test that ran, i.e. whether it is a doctest or a unittest. How can this be done?
Thanks. | 0 | python,nose,nosetests | 2012-10-09T07:09:00.000 | 0 | 12,794,631 | Using --with-doctests implies that you're running doctests. Anything outside of a doctest can be considered a unit test. AFAIK, they're not mutually exclusive, so you can't strictly tell which you're running if you've enabled --with-doctests.
Having said that, doctests generally are a form of unit test, so I'm not quite sure what end you're trying to achieve with this. | 0 | 502 | false | 0 | 1 | Print the test type in Python Nose | 12,794,692 |
3 | 3 | 0 | 2 | 4 | 0 | 0.132549 | 0 | I am running flask/memcached and am looking for a lean/efficient method to prevent automated scripts from slamming me with requests and/or submitting new posts too quickly.
I had the thought of including a 'last_action' time in the session cookie and checking against it each request but no matter what time I set, the script could be set up to delay that long.
I also thought to grab the IP and if too many requests from it are made in x amount of time, deny anymore for so long, but this would require something like redis to run efficiently, which I'd like to avoid having to pay for.
I prefer a cookie-based solution unless something like redis can prove its worth.
What are the 'industry standards' for dealing with these kinds of situations? What methods come with the least amount of cost/performance trade-offs? | 0 | python,security,flask,spam-prevention | 2012-10-09T17:59:00.000 | 0 | 12,805,732 | You should sit down and decide what exactly your "core" problems in the scenario of your app are, and who your likely users will be. That will help you guide the right solution.
In my experience, there are a lot of different problems and solutions in this subject - and none are a "one size fits all"
If you have a problem with anonymous users, you can try to migrate as much of the functionality behind an 'account wall' as possible.
If you can't use an account wall, then you'll be better off with some IP-based tracking, along with some other headers/javascript stuff. Going by IP alone can be a disaster because of corporate proxies, home routers, etc. You'll run the risk of too many false positives. If you add in browser info, unsavory users can still fake it - but you'll penalize fewer real users.
You might want an account wall to only serve as a way to enforce a cookie, or it might plug into the idea of having a site identity where experience earns privilege.
You might want an account that can map to another trusted site's account. For example, I generally trust a 3rd party account binding against Facebook - who are pretty decent at dealing with fake accounts. I don't trust a 3rd party account binding against Twitter - which is largely spam.
You might make site "registration" require only solving a captcha, or something else mildly inconvenient, to weed out most unsavory visits. If the reward for bad behavior is high enough though, you won't solve anything.
I could talk about this all day. From my perspective, you have to solve the business logic and ux concepts first - and then a tech solution is much easier. | 0 | 1,007 | false | 1 | 1 | Leanest way to prevent scripted abuse of a web app? | 12,806,423 |
3 | 3 | 0 | 0 | 4 | 0 | 0 | 0 | I am running flask/memcached and am looking for a lean/efficient method to prevent automated scripts from slamming me with requests and/or submitting new posts too quickly.
I had the thought of including a 'last_action' time in the session cookie and checking against it each request but no matter what time I set, the script could be set up to delay that long.
I also thought to grab the IP and if too many requests from it are made in x amount of time, deny anymore for so long, but this would require something like redis to run efficiently, which I'd like to avoid having to pay for.
I prefer a cookie-based solution unless something like redis can prove its worth.
What are the 'industry standards' for dealing with these kinds of situations? What methods come with the least amount of cost/performance trade-offs? | 0 | python,security,flask,spam-prevention | 2012-10-09T17:59:00.000 | 0 | 12,805,732 | An extremely simple method that I have used before is to have an additional input in the registration form that is hidden using CSS (i.e. has display:none). Most form bots will fill this field in whereas humans will not (because it is not visible). In your server-side code you can then just reject any POST with the input populated. | 0 | 1,007 | false | 1 | 1 | Leanest way to prevent scripted abuse of a web app? | 14,003,551 |
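Since the question mentions Flask, a hedged sketch of the server-side half of this trick; the field and route names are assumptions, and the "website" input would be hidden with display:none in the template:

```python
from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/register", methods=["POST"])
def register():
    if request.form.get("website"):  # bots fill it in, humans never see it
        abort(400)
    # ... normal registration path ...
    return "ok"
```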
3 | 3 | 0 | 4 | 4 | 0 | 0.26052 | 0 | I am running flask/memcached and am looking for a lean/efficient method to prevent automated scripts from slamming me with requests and/or submitting new posts too quickly.
I had the thought of including a 'last_action' time in the session cookie and checking against it each request but no matter what time I set, the script could be set up to delay that long.
I also thought to grab the IP and if too many requests from it are made in x amount of time, deny anymore for so long, but this would require something like redis to run efficiently, which I'd like to avoid having to pay for.
I prefer a cookie-based solution unless something like redis can prove its worth.
What are the 'industry standards' for dealing with these kinds of situations? What methods come with the least amount of cost/performance trade-offs? | 0 | python,security,flask,spam-prevention | 2012-10-09T17:59:00.000 | 0 | 12,805,732 | There is no way to achieve this with cookies, since a malicious script can just silently drop your cookie. Since you have to support the case where a user first visits (meaning without any cookies set), there is no way to distinguish between a genuine new user and a malicious script by only considering state stored on the client.
You will need to keep track of your users on the server-side to achieve your goals. This can be as simple as an IP-based filter that prevents fast posting by the same IP. | 0 | 1,007 | false | 1 | 1 | Leanest way to prevent scripted abuse of a web app? | 12,806,126 |
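A minimal single-process sketch of such an IP filter (the window and limit are assumptions); once you run multiple workers you would need a shared store like memcached or redis after all:

```python
import time
from collections import defaultdict, deque

WINDOW, LIMIT = 60.0, 30  # at most 30 requests per minute per IP
hits = defaultdict(deque)

def allow(ip):
    now = time.time()
    recent = hits[ip]
    while recent and now - recent[0] > WINDOW:
        recent.popleft()  # drop requests outside the window
    if len(recent) >= LIMIT:
        return False
    recent.append(now)
    return True
```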
1 | 1 | 0 | 7 | 4 | 0 | 1 | 0 | I am getting the below issue when firing up django or ipython notebook
/opt/bitnami/python/bin/.python2.7.bin: error while loading shared libraries: libreadline.so.5
However libreadline.so.5 exists in my system after locating it as shown below
root@linux:/opt/bitnami/scripts# locate libreadline.so.5
/opt/bitnami/common/lib/libreadline.so.5
/opt/bitnami/common/lib/libreadline.so.5.2
I have also exported the path in the environment variable (where libreadline.so.5 is located) but it still doesn't seem to resolve my issue (see below)
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$HOME/opt/bitnami/common/lib
Also there is a script provided by bitnami, located at /opt/bitnami/scripts/setenv.sh. But even after executing it, I am still stuck.
Can anyone help me with this? | 0 | python,django,centos,bitnami | 2012-10-10T08:20:00.000 | 1 | 12,814,973 | Can you execute the following and see if it solves your issue?
. /opt/bitnami/scripts/setenv.sh
(notice the space between the dot and the path to the script)
Also what are you executing that gives you that error? | 0 | 1,566 | false | 1 | 1 | Bitnami - /opt/bitnami/python/bin/.python2.7.bin: error while loading shared libraries: libreadline.so.5 | 12,898,508 |
1 | 2 | 0 | 2 | 4 | 0 | 1.2 | 0 | Is there a way to run only doctests using Python Nose (nosetests)? I do not want to run any unittests, only the doctests.
Thanks. | 0 | python,nose,nosetests | 2012-10-10T12:35:00.000 | 0 | 12,819,489 | You can achieve that effect by ignoring all regular test files.
This can be done easily using the -I or --ignore-files options and a regex like .*\.py.
Another way could be to save the doctests in a separate directory and launch nose on that.
In newer versions of nose this doesn't seem to work anymore. | 0 | 604 | true | 0 | 1 | run only doctests from python nose | 12,820,015 |
1 | 2 | 0 | 2 | 0 | 1 | 0.197375 | 0 | I am trying to extract an address automatically from a postscript document that has been intercepted by redmon and piped to a python program. I have gotten to the point where I can capture the postscript output (and write it to a file), but I am stuck at the extraction part.
Is there a good/reliable way of doing this in python, or do I need to run the postscript file through ps2ascii and hope for the best?
If there are tools in other languages that could do this I would be happy to evaluate them. | 0 | python,postscript | 2012-10-11T10:44:00.000 | 0 | 12,837,793 | Actually, in most cases just parsing the Postscript will suffice, since a Postscript document is a normal text file.
As a clarification: yes, I am aware that what a Postscript document displays is a result of a program written in the beautifully reversed or reversely beautiful language called Postscript. In most of the cases, however, it is sufficient to grep the program source. In some other cases text may be encoded as a curve or bitmap and there will be no way of extracting it short of OCR'ing the rendered output.
Bottom line: it depends on the type of information you would like to extract, and on the type of the postscript file. In my view, ps2ascii is a fine tool, and one way of solving the problem, but one that (i) will not guarantee success (maybe slightly more than grepping the source), (ii) to a large extent just strips operators and (iii) might, in some cases, lead to a loss of text. | 0 | 2,171 | false | 0 | 1 | Extracting text from postscript and/or creating overlays using python | 12,837,843
1 | 1 | 0 | 2 | 0 | 1 | 1.2 | 0 | How do I create a thread that continuously checks for obstacles using the ultrasonic class in nxt-python 2.2.2? I want to implement it in a way that while my robot is moving it also detects obstacles in a background process and once it detects an object it will brake and do something else | 0 | python,multithreading,nxt-python | 2012-10-12T02:26:00.000 | 0 | 12,851,374 | You used the daemon thread instead of normal thread. because this is different to normal thread. I hope so daemon thread resolve your problem. | 0 | 235 | true | 1 | 1 | Ultrasonic thread that runs in the background in python | 12,852,400 |
1 | 3 | 0 | 0 | 14 | 0 | 0 | 0 | I'm trying to use an auto-doc tool to generate API docs for a Tastypie REST API. I tried Tastytool, but it seems to show the model's columns rather than the API's result parameters. Then I tried Sphinx, which seems more promising since Tastypie supports Sphinx, but I can't find an example showing where and how to put comments for the API inside the code and generate them into the document.
Can anyone share some info or an example about how to correctly write comments and generate Sphinx docs for a Tastypie-based API? Thanks. | 0 | django,python-sphinx,tastypie,documentation-generation | 2012-10-12T03:43:00.000 | 0 | 12,851,898 | Perhaps I'm completely missing the point of your question but if you are just trying to build the docs that come with the source distribution there is a Makefile in the docs directory that performs the necessary actions. You are required to specify a target output type such as html, json, latex, etc. I keep a local copy of the docs for django, tastypie, and slumber as I use all three in conjunction with each other and I use the option make html frequently.
If I am mistaken about what you are trying to accomplish perhaps we can come to some clarification. | 0 | 2,869 | false | 1 | 1 | Tastypie documentation generation | 12,867,026 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | I am trying to run a python file from the telnet session
Steps:
Dailyscript.py
Telnetting in to montavista
from the telnet session I am trying to run another python file "python sample.py"
sample.py
Importing TestLib (in this file)
But, when I run it directly from my linux box, it is running fine.
Is there any thing I need? | 0 | python | 2012-10-12T15:22:00.000 | 0 | 12,862,260 | Most likely the problem is that TestLib.py isn't in your working directory. Make sure your Dailyscript.py sets its directory to wherever you ran it from (over SSH) before executing python sample.py.
Also, if you have SSH access, why aren't you just using SSH? | 0 | 581 | false | 0 | 1 | Import Error: No Module found named TestLIb | 12,863,073 |
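A minimal sketch of that working-directory fix at the top of Dailyscript.py:

```python
import os
import sys

script_dir = os.path.dirname(os.path.abspath(__file__))
os.chdir(script_dir)             # run from the script's own directory
sys.path.insert(0, script_dir)   # so "import TestLib" resolves
```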
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I am having a design problem in test automation:-
Requirements - Need to test different servers (using unix console and not GUI) through automation framework. Tests which I'm going to run - Unit, System, Integration
Question: While designing a test case, I am thinking that a Test Case should be a part of a test suite (test suite is a class), just as we have in Python's pyunit framework. But, should we keep test cases as functions for a scalable automation framework or should we keep test cases as separate classes (each having their own setup, run and teardown methods)? From an automation perspective, is the idea of having a test case as a class more scalable and maintainable than as a function? | 0 | java,python,testing,frameworks,automation | 2012-10-13T08:30:00.000 | 0 | 12,871,388 | Normally Test Cases are used as classes rather than functions because each test case has its own setup data and initialization mechanism. Implementing test cases as single functions will not only make it difficult to set up test data before running any test case, but you can also have different test methods in a test case class if you are running the same test scenario. | 0 | 1,670 | false | 0 | 1 | How to write automated tests - Test case as a function or test case as a class | 12,871,446
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I am having a design problem in test automation:-
Requirements - Need to test different servers (using unix console and not GUI) through automation framework. Tests which I'm going to run - Unit, System, Integration
Question: While designing a test case, I am thinking that a Test Case should be a part of a test suite (test suite is a class), just as we have in Python's pyunit framework. But, should we keep test cases as functions for a scalable automation framework or should we keep test cases as separate classes (each having their own setup, run and teardown methods)? From an automation perspective, is the idea of having a test case as a class more scalable and maintainable than as a function? | 0 | java,python,testing,frameworks,automation | 2012-10-13T08:30:00.000 | 0 | 12,871,388 | The following is my opinion:
Pros of writing tests as functions:
If you need any pre-requisites for that test case, just call another function which provides the pre-requisites. Do the same thing for teardown steps
Looks simple for a new person in the team. Easy to understand what is happening by looking into tests as functions
Cons of writing tests as functions:
Not maintainable - because if there are a huge number of tests that require the same kind of pre-requisites, the test case author has to maintain a call to each pre-requisite function in every test case; the same goes for each teardown inside the test case
If there are so many calls to such a pre-requisite function inside many test cases, and if anything changes in the product functionality etc, you have to manually make efforts in many places again.
Pros of writing test cases as classes:
Setup, run and teardown are clearly defined. The test pre-requisites are easily understood
If there is Test 1 which does something and the result of Test 1 is used as a setup pre-requisite in Tests 2 and 3, it's easy to just inherit from Test 1 and call its setup, run and teardown methods first, and then continue your tests. This helps make the tests independent of each other. Here, you don't need to make efforts to maintain the actual calling of your code. It will be done implicitly because of inheritance.
Sometimes the setup method of Test 1 and the run method of Test 2 might become the pre-requisites of another Test 3. In that case, just inherit from both the Test 1 and Test 2 classes and, in Test 3's setup method, call the setup of Test 1 and the run of Test 2. Again, you don't need to maintain the calling of the actual code, because you are calling the setup and run methods, which are tried and tested from the framework perspective.
Cons of writing test cases as classes:
When the number of tests increases, you can't look into a particular test and say what it does, because it may have inherited so many levels that you can't backtrack. But, there is a solution around it - write docstrings in each setup, run and teardown method of each test case, and write a custom wrapper to generate docstrings for each test case. While/after inheriting, you should provide an option to add/remove the docstring of a particular function (setup, run, teardown) to the inherited function. This way, you can just run that wrapper and get information about a test case from its docstrings | 0 | 1,670 | false | 0 | 1 | How to write automated tests - Test case as a function or test case as a class | 12,871,619
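A hedged unittest sketch of the class-based, inheritance-driven structure described above; the names and the connect helper are illustrative:

```python
import unittest

class BaseServerTest(unittest.TestCase):
    def setUp(self):
        self.connection = connect_to_server()  # hypothetical helper

    def tearDown(self):
        self.connection.close()

class LoginTest(BaseServerTest):
    # setUp/tearDown are inherited; only the scenario differs.
    def test_login(self):
        self.assertTrue(self.connection.login("user", "secret"))

if __name__ == "__main__":
    unittest.main()
```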
1 | 1 | 0 | 3 | 0 | 0 | 1.2 | 0 | I've been testing out Mod_python and it seems that there are two ways of producing python code using:-
Publisher Handler
PSP Handler
I've gotten both to work at the same time however, should I use one over the other? PSP resembles PHP a lot but Publisher seems to resemble python more. Is there an advantage over using one (speed, ease of use, etc.)? | 0 | python,mod-python | 2012-10-13T19:19:00.000 | 0 | 12,876,159 | I am not familiar with the mod_python (project was abandoned long ago) but nowadays Python applications are using wsgi (mod_wsgi or uwsgi). If you are using apache, mod_wsgi is easy to configure, for nginx use the uwsgi. | 0 | 274 | true | 1 | 1 | Mod_Python: Publisher Handler vs PSP Handler | 12,876,548 |
2 | 4 | 0 | 0 | 4 | 0 | 0 | 0 | Python3 has a pass command that does nothing. This command is used in if-constructs because python requires the programmer to have at least one command for else. Does Ruby have an equivalent to python3's pass command? | 0 | python,ruby,unix,scripting,python-3.x | 2012-10-14T00:09:00.000 | 0 | 12,878,175 | I don't think you need it in ruby ... an if doesn't require an else. | 0 | 1,225 | false | 0 | 1 | Python3 Pass Command Equivalent in Ruby | 12,878,180 |
2 | 4 | 0 | 6 | 4 | 0 | 1 | 0 | Python3 has a pass command that does nothing. This command is used in if-constructs because python requires the programmer to have at least one command for else. Does Ruby have an equivalent to python3's pass command? | 0 | python,ruby,unix,scripting,python-3.x | 2012-10-14T00:09:00.000 | 0 | 12,878,175 | Your statement is essentially wrong, since else statement is not obligatory in Python.
One of the frequent uses of the pass statement is in the try/except construct, when an exception may be ignored.
pass is also useful when you define API - and wish to postpone actual implementation of classes/functions.
EDIT:
One more frequent usage I haven't mentioned - defining user exceptions; usually you just override the name to distinguish them from standard exceptions. | 0 | 1,225 | false | 0 | 1 | Python3 Pass Command Equivalent in Ruby | 12,878,394
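Tiny illustrations of those two uses in Python:

```python
# Ignoring an exception:
try:
    int("oops")  # raises ValueError
except ValueError:
    pass  # deliberately ignore it

# Defining a user exception where only the name matters:
class MyAppError(Exception):
    pass
```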
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 0 | Is there a way to check the size of the incoming POST in Pyramid, without saving the file to disk and using the os module? | 0 | python,pyramid | 2012-10-14T02:30:00.000 | 0 | 12,878,819 | You should be able to check the request.content_length. WSGI does not support streaming the request body so content length must be specified. If you ever access request.body, request.params or request.POST it will read the content and save it to disk.
The best way to handle this, however, is as close to the client as possible. Meaning if you are running behind a proxy of any sort, have that proxy reject requests that are too large. Once it gets to Python, something else may have already stored the request to disk. | 0 | 134 | true | 1 | 1 | Check size of HTTP POST without saving to disk | 12,879,591 |
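A hedged sketch of the in-Pyramid check, for when a front-end proxy isn't an option; the size limit is an assumption:

```python
from pyramid.httpexceptions import HTTPRequestEntityTooLarge

MAX_BYTES = 10 * 1024 * 1024  # illustrative cap

def upload_view(request):
    if request.content_length and request.content_length > MAX_BYTES:
        raise HTTPRequestEntityTooLarge()
    # only now is it safe to touch request.POST / request.body
    return {"size": request.content_length}
```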
1 | 4 | 0 | 1 | 0 | 0 | 0.049958 | 1 | I have a remote method created via Python web2py. How do I test and invoke the method from Java?
I was able to test if the method implements @service.xmlrpc but how do I test if the method implements @service.run? | 0 | java,python,rmi,rpc,web2py | 2012-10-15T06:06:00.000 | 0 | 12,890,137 | I'd be astonished if you could do it at all. Java RMI requires Java peers. | 0 | 2,053 | false | 1 | 1 | Using Java RMI to invoke Python method | 12,890,526
1 | 1 | 0 | 2 | 2 | 0 | 1.2 | 0 | Is there a built-in way in Eclipse to redirect PyUnit's output to a file (~ save the report)? | 0 | eclipse,pydev,python-unittest | 2012-10-15T14:35:00.000 | 0 | 12,897,908 | Output can be easily redirected to a file in Run Configurations > Common tab > Standard Input and Output section. Hiding just in plain sight... | 0 | 823 | true | 1 | 1 | Redirecting PyUnit output to file in Eclipse | 12,945,643 |
1 | 3 | 0 | 8 | 89 | 0 | 1 | 0 | I'm using celery and django-celery. I have defined a periodic task that I'd like to test. Is it possible to run the periodic task from the shell manually so that I can view the console output? | 0 | python,django,celery,django-celery,celery-task | 2012-10-15T16:34:00.000 | 1 | 12,900,023 | I think you'll need to open two shells: one for executing tasks from the Python/Django shell, and one for running celery worker (python manage.py celery worker). As the previous answer said, you can run tasks using apply() or apply_async().
I've edited the answer so you're not using a deprecated command. | 0 | 60,194 | false | 1 | 1 | How can I run a celery periodic task from the shell manually? | 12,900,160 |
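For example, with the worker from the first shell running, the task can be fired from python manage.py shell (the task module and name here are hypothetical):

    from myapp.tasks import my_periodic_task  # hypothetical task

    my_periodic_task.apply()                  # runs synchronously, right in this shell
    result = my_periodic_task.apply_async()   # sent to the worker; output appears in its console
    print(result.id)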
I am writing a Python script to copy Python files (say ABC.py) from one directory to another
directory whose folder name is the script name without the .py extension (say ABC).
On the local system it works fine: it copies the files from one directory to another,
creating a folder with the same name.
But what I actually want is to copy these files from my local system (Windows XP) to a remote
system (Linux) located in another country, on which I execute my script. I am getting
the error "Destination Path not found", which means I am not able to connect to the
remote machine.
I use SSH Secure client.
I use an IP Address and Port number to connect to the remote server.
Then it asks for user id and password.
But I am not able to connect to the remote server by my python script.
Can anyone help me out with how I can do this? | 0 | python | 2012-10-16T07:14:00.000 | 1 | 12,909,334 | I used the same script, but my host failed to respond. My host is on a different network, and I get:
[WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond | 0 | 9,452 | false | 0 | 1 | How to Transfer Files from Client to Server Computer by using python script? | 66,757,126
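Since neither answer shows working code, here is a hedged sketch of the transfer itself using the third-party paramiko library (host, credentials and paths are placeholders):

    import paramiko

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unseen hosts; fine for testing
    ssh.connect('203.0.113.10', port=22, username='user', password='secret')

    sftp = ssh.open_sftp()
    try:
        sftp.mkdir('/home/user/ABC')          # folder named after the script
    except IOError:
        pass                                  # folder already exists
    sftp.put('C:/scripts/ABC.py', '/home/user/ABC/ABC.py')
    sftp.close()
    ssh.close()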
1 | 1 | 0 | 3 | 0 | 0 | 0.53705 | 0 | I can find a load of information on the reverse, not so much this way around :)
So, the summary is that I want to write some Python code-completion stuff in C++, but I can't figure out the best way of tokenizing the Python code.
Are there any libraries out there that will do this?
I'm leaning towards calling Python's tokenize.tokenize directly from C++... but whenever I look at calling Python code from C++ I go cross-eyed. | 0 | c++,python,parsing | 2012-10-16T11:01:00.000 | 0 | 12,913,250 | Using regular parser-generators to generate parsers from the grammar is usually complicated with Python (for example due to its significant whitespace and difficult line-continuation rules).
I am not sure about your experience with Python, but my recommendation would be to parse the Python file from Python, and do as much of the processing as possible in Python, then return the result to the C++ code using well-defined data types (such as the stdc++ ones) and using Boost.python for the bindings. | 0 | 210 | false | 0 | 1 | Parsing Python code from C++ | 12,913,328 |
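The Python side of that recommendation can be tiny; a sketch using the standard tokenize module (the Boost.Python glue is left out):

    import tokenize
    from StringIO import StringIO  # Python 2, matching the question's era

    def tokens_of(source):
        """Return (token_type_name, token_string) pairs for a source snippet."""
        readline = StringIO(source).readline
        return [(tokenize.tok_name[t], s)
                for t, s, _, _, _ in tokenize.generate_tokens(readline)]

    print(tokens_of("x = foo.bar(42)"))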
How can I send out a PWM signal from the serial port under Linux (with Python or C++)?
I want to connect a motor directly and control its rotation speed. | 0 | c++,python,embedded | 2012-10-16T16:50:00.000 | 1 | 12,919,644 | I doubt you can do this; you are using a UART interface. Just get an Arduino or something similar and send serial commands to it (over the serial pins), and let it put the correct PWM signal out on its pins. That's probably 5 lines of Arduino code and another 5 of Python code, as sketched below.
All that said, you may be able to find some very difficult and hacky way to output a PWM signal over serial, but you need to think about whether that's really appropriate ... | 0 | 1,468 | false | 0 | 1 | PWM signal out of serial port with linux | 12,919,968
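The Python half of the suggested setup really is about five lines with the third-party pyserial package (the port name and the one-byte protocol are assumptions; the Arduino sketch must read the byte and call analogWrite):

    import serial

    port = serial.Serial('/dev/ttyUSB0', 9600)  # Arduino's serial port
    speed = 128                                 # duty cycle, 0-255
    port.write(chr(speed))                      # Arduino maps this byte onto analogWrite()
    port.close()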
1 | 3 | 0 | 0 | 3 | 0 | 0 | 0 | The project I'm working on requires me to store javascript numbers(which are doubles) as BLOB primary keys in a database table(can't use the database native float data type). So basically I need to serialize numbers to a byte array in such a way that:
1 - The length of the byte array is 8(which is what is normally needed to serialize doubles)
2 - the byte arrays must preserve natural order so the database will transparently sort rows in the index b-tree.
A simple function that takes a number and returns an array of numbers representing the bytes is what I'm seeking. I prefer the function to be written in javascript but answers in java, C, C#, C++ or python will also be accepted. | 0 | javascript,python,c,serialization,natural-sort | 2012-10-17T10:49:00.000 | 0 | 12,932,663 | The obvious answer is to remove the restriction that you can't use the native database type. I can't see any point in it. It's still 8 bytes and it does the ordering for you without any further investigation, work, experiment, testing etc being necessary. | 0 | 797 | false | 0 | 1 | How to efficiently serialize 64-bit floats so that the byte arrays preserve natural numeric order? | 12,933,456 |
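For completeness - and this is not what the answer above recommends - a commonly cited transform does satisfy both constraints for IEEE-754 doubles: flip every bit of negative numbers and only the sign bit of non-negative ones, then store big-endian. A Python sketch (NaN handling is ignored):

    import struct

    def sortable_bytes(x):
        bits = struct.unpack('>Q', struct.pack('>d', x))[0]
        if bits & (1 << 63):
            bits ^= 0xFFFFFFFFFFFFFFFF   # negative: flip all bits
        else:
            bits ^= 1 << 63              # non-negative: flip only the sign bit
        return struct.pack('>Q', bits)   # 8 bytes; lexicographic order == numeric order

    assert sortable_bytes(-1.5) < sortable_bytes(0.0) < sortable_bytes(2.25)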
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 1 | I am trying to download only files modified in the last 30 minutes from a URL. Can you please guide me how to proceed with this. Please let me know if I should use shell scripting or python scripting for this. | 0 | python,bash,shell | 2012-10-17T17:45:00.000 | 0 | 12,940,223 | If the server supports if-modified-since, you could send the request with If-Modified-Since: (T-30 minutes) and ignore the 304 responses. | 0 | 170 | false | 0 | 1 | Download files modified in last 30 minutes from a URL | 12,940,327 |
2 | 3 | 0 | 2 | 2 | 0 | 0.132549 | 1 | Basically I want a Java, Python, or C++ script running on a server, listening for player instances to: join, call, bet, fold, draw cards, etc and also have a timeout for when players leave or get disconnected.
Basically I want each of these actions to be a small request, so that players could either be processes on same machine talking to a game server, or machines across network.
Security of messaging is not an issue, this is for learning/research/fun.
My priorities:
Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand.
Speed. I'm going to be playing millions of these hands as fast as I can.
Run on a shared server instance (I may have limited access to ports or things that need root)
My questions:
Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these).
Any good frameworks to work off of?
Message types? I'm thinking JSON or Protocol Buffers.
How to make it FAST?
Thanks guys - just looking for some pointers and suggestions. I think it is a cool problem with a lot of neat things to learn doing it. | 0 | java,python,optimization,webserver,multiplayer | 2012-10-18T00:04:00.000 | 0 | 12,945,278 | Anything else? Maybe a cup of coffee to go with your question :-)
Answering your question from the ground up would require several books' worth of text, with topics ranging from basic TCP/IP networking to scalable architectures, but I'll try to give you some direction nevertheless.
Questions:
Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these).
I would venture that if you're not clear on the definition of each of these, maybe designing and implementing a service that will "be playing millions of these hands as fast as I can" is a bit, hmm, over-reaching? But don't let that stop you; as they say, "ignorance is bliss."
Any good frameworks to work off of?
I think your project is a good candidate for Node.js. The main reason is that Node.js is relatively scalable and it is good at hiding the complexity required for that scalability. There are downsides to Node.js; just do a Google search for 'Node.js scalability criticism'.
The main point against Node.js, as opposed to using a more general-purpose framework, is that scalability is difficult - there is no way around it - and Node.js, being so high-level and specific, provides fewer options for solving tough problems.
The other drawback is that Node.js is JavaScript, not Java or Python as you would prefer.
Message types? I'm thinking JSON or Protocol Buffers.
I don't think there's going to be a lot of traffic between client and server, so it doesn't really matter; I'd go with JSON just because it is more prevalent.
How to make it FAST?
The real question is how to make it scalable. Running human vs human card games is not computationally intensive, so you're probably going to run out of I/O capacity before you reach any computational limit.
Overcoming these limitations is done by spreading the load across machines. The common way to do this in multi-player games is to have a list server that provides links to identical game servers, with each server having a predefined number of slots available for players.
This is a variation of a broker-workers architecture where the broker machine assigns a worker machine to clients based on how busy they are. In gaming, users want to be able to select their server so they can play with their friends.
Related:
Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand.
Since this is on human time scales (seconds as opposed to milliseconds), the client should send keepalives, say, every 10 seconds, with, say, a 30-second session timeout.
The keepalives would be JSON messages in your application protocol, not HTTP, which is lower level and handled by the framework.
The framework itself should provide you with HTTP 1.1 connection management/pooling which allows several http sessions (request/response) to go through the same connection, but do not require the client to be always connected. This is a good compromise between reliability and speed and should be good enough for turn based card games. | 0 | 2,321 | false | 0 | 1 | Multiplayer card game on server using RPC | 12,946,896 |
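The keepalive idea from those last points, stripped to its core (interval and message shape are assumptions; the 30-second timeout is enforced on the server side):

    import json
    import socket
    import time

    KEEPALIVE_EVERY = 10   # seconds between pings; server drops players after ~30s of silence

    def client_loop(host, port, player_id):
        s = socket.create_connection((host, port))
        while True:
            s.sendall(json.dumps({'type': 'keepalive', 'player': player_id}) + '\n')
            time.sleep(KEEPALIVE_EVERY)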
2 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 1 | Basically I want a Java, Python, or C++ script running on a server, listening for player instances to: join, call, bet, fold, draw cards, etc and also have a timeout for when players leave or get disconnected.
Basically I want each of these actions to be a small request, so that players could either be processes on same machine talking to a game server, or machines across network.
Security of messaging is not an issue, this is for learning/research/fun.
My priorities:
Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand.
Speed. I'm going to be playing millions of these hands as fast as I can.
Run on a shared server instance (I may have limited access to ports or things that need root)
My questions:
Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these).
Any good frameworks to work off of?
Message types? I'm thinking JSON or Protocol Buffers.
How to make it FAST?
Thanks guys - just looking for some pointers and suggestions. I think it is a cool problem with a lot of neat things to learn doing it. | 0 | java,python,optimization,webserver,multiplayer | 2012-10-18T00:04:00.000 | 0 | 12,945,278 | Honestly, I'd start with classic LAMP. Take a stock Apache server, and a mysql database, and put your Python scripts in the cgi-bin directory. The fact that they're sending and receiving JSON instead of HTTP doesn't make much difference.
This is obviously not going to be the most flexible or scalable solution, of course, but it forces you to confront the actual problems as early as possible.
The first problem you're going to run into is game state. You claim there is no shared state, but that's not right—the cards in the deck, the bets on the table, whose turn it is—that's all state, shared between multiple players, managed on the server. How else could any of those commands work? So, you need some way to share state between separate instances of the CGI script. The classic solution is to store the state in the database.
Of course you also need to deal with user sessions in the first place. The details depend on which session-management scheme you pick, but the big problem is how to propagate a disconnect/timeout from the lower level up to the application level. What happens if someone puts $20 on the table and then disconnects? You have to think through all of the possible use cases.
Next, you need to think about scalability. You want millions of games? Well, if there's a single database with all the game state, you can have as many web servers in front of it as you want—John Doe may be on server1 while Joe Schmoe is on server2, but they can be in the same game. On the other hand, you can have a separate database for each server, as long as you have some way to force people in the same game to meet on the same server. Which one makes more sense? Either way, how do you load-balance between the servers? (You not only want to keep them all busy, you want to avoid the situation where 4 players are all ready to go, but they're on 3 different servers, so they can't play each other…).
The end result of this process is going to be a huge mess of a server that runs at 1% of the capacity you hoped for, that you have no idea how to maintain. But you'll have thought through your problem space in more detail, and you'll also have learned the basics of server development, both of which are probably more important in the long run.
If you've got the time, I'd next throw the whole thing out and rewrite everything from scratch by designing a custom TCP protocol, implementing a server for it in something like Twisted, keeping game state in memory, and writing a simple custom broker instead of a standard load balancer. | 0 | 2,321 | false | 0 | 1 | Multiplayer card game on server using RPC | 12,963,229 |
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | To begin with, I am only allowed to use python 2.4.4
I need to write a process controller in Python which launches various subprocesses and monitors how they affect the environment. Each of these subprocesses is itself a Python script.
When executed from the unix shell, the command lines look something like this:
python myscript arg1 arg2 arg3 >output.log 2>err.log &
I am not interested in the input or the output; Python does not need to process them. The Python program only needs to know
1) The pid of each process
2) Whether each process is running.
And the processes run continuously.
I have tried reading in the output and just sending it out to a file again, but then I run into issues with readline not being asynchronous, for which there are several answers, many of them very complex.
How can I formulate a Python subprocess call that preserves the bash redirection operations?
Thanks | 0 | python,redirect,subprocess | 2012-10-18T17:23:00.000 | 1 | 12,960,276 | You can use existing file descriptors as the stdout/stderr arguments to subprocess.Popen. This should be exquivalent to running from with redirection from bash. That redirection is implemented with fdup(2) after fork and the output should never touch your program. You can probably also pass fopen('/dev/null') as a file descriptor.
Alternatively you can redirect the stdout/stderr of your controller program and pass None as stdout/stderr. Children should print to your controllers stdout/stderr without passing through python itself. This works because the children will inherit the stdin/stdout descriptors of the controller, which were redirected by bash at launch time. | 0 | 1,449 | false | 0 | 1 | Preserving bash redirection in a python subprocess | 12,960,474 |
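Putting the first suggestion into Python 2.4-compatible code (script name and log paths taken from the question):

    import subprocess

    out = open('output.log', 'w')
    err = open('err.log', 'w')

    # Equivalent of: python myscript arg1 arg2 arg3 >output.log 2>err.log &
    p = subprocess.Popen(['python', 'myscript', 'arg1', 'arg2', 'arg3'],
                         stdout=out, stderr=err)

    print p.pid      # 1) the pid of each process
    print p.poll()   # 2) None while still running, the exit code once finished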
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | To begin with, I am only allowed to use python 2.4.4
I need to write a process controller in Python which launches various subprocesses and monitors how they affect the environment. Each of these subprocesses is itself a Python script.
When executed from the unix shell, the command lines look something like this:
python myscript arg1 arg2 arg3 >output.log 2>err.log &
I am not interested in the input or the output; Python does not need to process them. The Python program only needs to know
1) The pid of each process
2) Whether each process is running.
And the processes run continuously.
I have tried reading in the output and just sending it out to a file again, but then I run into issues with readline not being asynchronous, for which there are several answers, many of them very complex.
How can I formulate a Python subprocess call that preserves the bash redirection operations?
Thanks | 0 | python,redirect,subprocess | 2012-10-18T17:23:00.000 | 1 | 12,960,276 | The subprocess module is good.
You can also do this on *ix with os.fork() and a periodic os.waitpid() with os.WNOHANG. | 0 | 1,449 | false | 0 | 1 | Preserving bash redirection in a python subprocess | 12,961,812
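A sketch of that lower-level route (the child command is a placeholder):

    import os

    pid = os.fork()
    if pid == 0:
        os.execvp('python', ['python', 'myscript', 'arg1'])  # child replaces itself

    # Parent: non-blocking liveness check.
    done_pid, status = os.waitpid(pid, os.WNOHANG)
    if done_pid == 0:
        print 'child %d still running' % pid
    else:
        print 'child exited with status', status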
I've been tasked with a thesis project where I have to extend the features of ArcGIS. I've been asked to create a model written in Python that can run out of ArcGIS 10. This model will have a simple user interface where the user can drag/drop a variety of shapefiles and enter the values for particular variables in order for the model to run effectively. Once the model has finished running, a new shapefile is created that lays out the most cost-effective collector cable route for a wind turbine from point A to point B.
I'd like to know if such a functionality/extension already exists in ArcGIS so I don't have to reinvent the wheel. If not, then what is the best programming language to learn to extend ArcGIS for this (Python vs Visual Basic vs Java)? My background is Java, PHP, jQuery and JavaScript. Also, any pointers in the right direction, i.e. documentation, resources, etc., would be hugely appreciated | 0 | java,python,visual-studio-2010,arcgis,arcobjects | 2012-10-20T17:52:00.000 | 0 | 12,991,111 | Creating a Python AddIn is probably the quickest and easiest approach if you just want to do some geoprocessing and deploy the tool to lots of users.
But as soon as you need a user interface (that does more than simply select GIS data sources) you should create a .Net AddIn (using either C# or VB.net).
I've created many AddIns over the years and they are a dramatic improvement to the old ArcGIS "plugins" that involved lots of complicated COM registration. AddIns are easy to build and deploy. Easy for users to install and uninstall.
.Net has excellent, powerful features for creating rich user interfaces with the kind of drag and drop that you require. And there are great books, forums, samples to leverage. | 0 | 727 | false | 1 | 1 | Extending ArcGIS | 32,124,775 |
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | eval_cli_line("cache_%s" % cpu.name + ".ptime") in my python script is constantly giving the following error
NameError: global name 'eval_cli_line' is not defined
Any suggestions ? | 0 | python,simics | 2012-10-21T04:34:00.000 | 0 | 12,995,006 | In Simics 4.x, eval_cli_line has been replaced with run_command(). Read the migration guide. | 0 | 241 | false | 0 | 1 | Python eval_cli_line() | 13,001,532 |
My Python 2.7 script (on Raspberry Pi Debian) runs a couple of stepper motors synchronously via the GPIO port. I currently have a signal handler in place for Ctrl-C to clean up tidily before exit. I'd now like to extend that method such that keyboard inputs could also generate SIGUSR1 or similar as an asynchronous control mechanism. I know this could be achieved through threading, but I'm after a KISS approach.
Ta | 0 | python,keyboard,signals | 2012-10-21T16:51:00.000 | 0 | 12,999,970 | Have a parent process that monitors keyboard input, and forward a signal to the child if it occurs. | 0 | 415 | false | 0 | 1 | Getting keyboard inputs to cause SIGUSR1/2 akin to ctrl-C / SIGINT to trigger signal_handler | 36,716,865 |
What I would like is to run a script that automatically checks for new assets (files that aren't code) that have been submitted to a specific directory, and then every so often automatically commits those files and pushes them.
I could make a script that does this through the command line, but I was mostly curious if mercurial offered any special functionality for this, specifically I'd really like some kind of return error code so that my script will know if the process breaks at any point so I can send an email with the error to specific developers. For example if for some reason the push fails because a pull is necessary first, I'd like the script to get a code so that it knows this and can handle it properly.
I've tried researching this and can only find things like automatically doing a push after a commit, which isn't exactly what I'm looking for. | 0 | python,macos,mercurial,build-automation | 2012-10-22T19:13:00.000 | 0 | 13,018,157 | You can always check exit-code of used commands
hg add (if new, unversioned files appeared in WC) "Returns 0 if all files are successfully added": non-zero means "some troubles here, not all files added"
hg commit "Returns 0 on success, 1 if nothing changed": 1 means "no commit, nothing to push"
hg push "Returns 0 if push was successful, 1 if nothing to push" | 0 | 448 | true | 0 | 1 | Automating commit and push through mercurial from script | 13,667,498 |
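Wired together, those exit codes might be used like this (the repo path and the notification hook are placeholders):

    import subprocess

    def hg(*args):
        return subprocess.call(('hg',) + args, cwd='/path/to/repo')

    def notify_developers(msg):
        print 'would email:', msg            # stand-in for the real email hook

    if hg('add', 'assets/') != 0:            # non-zero: not all files were added
        notify_developers('hg add failed')
    elif hg('commit', '-m', 'auto: new assets') == 0:   # 1 would mean nothing changed
        if hg('push') != 0:                  # e.g. a pull is needed first
            notify_developers('hg push failed')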
I need an alternative to the shutil module, in particular shutil.copyfile.
There is a little-known bug with py2exe that makes the entire shutil module useless. | 0 | python,py2exe | 2012-10-24T04:09:00.000 | 0 | 13,042,897 | Using os.system() will be problematic for many reasons; for example, when you have spaces or Unicode in the file names. It will also be more opaque relative to exceptions/failures.
If this is on windows, using win32file.CopyFile() is probably the best approach, since that will yield the correct file attributes, dates, permissions, etc. relative to the original file (that is, it will be more similar to the results you'd get by using Explorer to copy the file). | 0 | 4,141 | false | 0 | 1 | Alternative to shutil.copyfile | 16,571,073 |
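A sketch of that pywin32 route (paths are placeholders):

    import win32file

    # Third argument: 1 = fail if the destination exists, 0 = overwrite it.
    win32file.CopyFile('C:\\src\\data.bin', 'C:\\dst\\data.bin', 0)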
1 | 1 | 0 | 5 | 0 | 0 | 1.2 | 0 | Which of these two languages interfaces better and delivers a better performance/toolset for working with sqlite database? I am familiar with both languages but need to choose one for a project I'm developing and so I thought I would ask here. I don't believe this to be opinionated as performance of a language is pretty objective. | 1 | python,ruby,sqlite | 2012-10-24T23:00:00.000 | 0 | 13,059,142 | There is no good reason to choose one over the other as far as sqlite performance or usability.
Both languages have perfectly usable (and pythonic/rubyriffic) sqlite3 bindings.
In both languages, unless you do something stupid, the performance is bounded by the sqlite3 performance, not by the bindings.
Neither language's bindings are missing any uncommon but sometimes performance-critical functions (like an "exec many", manual transaction management, etc.).
There may be language-specific frameworks that are better or worse in how well they integrate with sqlite3, but at that point you're choosing between frameworks, not languages. | 0 | 150 | true | 0 | 1 | ruby or python for use with sqlite database? | 13,059,204 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 1 | Is there a way to dynamically download and install a package like AWS API from a PHP or Python script at runtime?
Thanks. | 0 | php,python | 2012-10-25T14:24:00.000 | 0 | 13,070,759 | Not at runtime - this would make no sense due to the overheads involved and the risk of the download failing. | 0 | 42 | false | 0 | 1 | Python/PHP - Downloading and installing AWS API | 13,070,806 |
Is there any way with Python to directly get (only get, not modify) a single pixel (to read its RGB color) from an image (compressed format if possible) without having to load it into RAM or process it (to spare the CPU)?
More details:
My application is meant to have a huge database of images, and only of images.
So what I chose is to store images directly on the hard drive; this will avoid the additional workload of a DBMS.
However I would like to optimize some more, and I'm wondering if there's a way to directly access a single pixel from an image (the only action on images that my application does), without having to load it in memory.
Does PIL pixel access allow that? Or is there another way?
The encoding of images is my own choice, so I can change whenever I want. Currently I'm using PNG or JPG. I can also store in raw, but I would prefer to keep images a bit compressed if possible. But I think harddrives are cheaper than CPU and RAM, so even if images must stay RAW in order to do that, I think it's still a better bet.
Thank you.
UPDATE
So, as I feared, it seems that it's impossible to do with variable compression formats such as PNG.
I'd like to refine my question:
Is there a constant compression format (not necessarily specific to an image format, I'll access it programmatically), which would allow to access any part by just reading the headers?
Technically, how to efficiently (read: fast and non blocking) access a byte from a file with Python?
SOLUTION
Thanks to all, I have successfully implemented the functionality I described by using run-length encoding on every row and padding every row to the length of the longest row.
This way, by prepending a header that describes the fixed number of columns for each row, I can easily access a row using first a file.readline() to get the header data, then file.seek(headersize + fixedsize*y, 0), where y is the row currently selected.
Files are compressed, and in memory I only fetch a single row, and my application doesn't even need to uncompress it because I can compute exactly where the pixel is by just iterating over the RLE values. So it is also very easy on CPU cycles. | 0 | python,image-processing,pixel,imaging | 2012-10-25T21:04:00.000 | 0 | 13,077,263 | If you want to keep a compressed file format, you can break each image up into smaller rectangles and store them separately. Using a fixed size for the rectangles will make it easier to calculate which one you need. When you need the pixel value, calculate which rectangle it's in, open that image file, and offset the coordinates to get the proper pixel.
This doesn't completely optimize access to a single pixel, but it can be much more efficient than opening an entire large image. | 0 | 693 | true | 0 | 1 | Direct access to a single pixel using Python | 13,078,321 |
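A sketch of that tiling lookup with PIL (the tile size and the on-disk naming scheme are assumptions):

    from PIL import Image

    TILE = 256   # fixed tile edge, in pixels

    def get_pixel(image_id, x, y):
        # Tiles are assumed to be stored as <image_id>_<col>_<row>.png
        col, row = x // TILE, y // TILE
        tile = Image.open('%s_%d_%d.png' % (image_id, col, row))
        return tile.getpixel((x % TILE, y % TILE))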
Seems like with ever-increasing frequency, I am bitten by pyc files running outdated code.
This has led to deployment scripts scrubbing *.pyc each time, otherwise deployments don't seem to take effect.
I am wondering, what benefit (if any) is there to pyc files in a long-running WSGI application? So far as I know, the only benefit is improved startup time, but I can't imagine it's that significant--and even if it is, each time new code is deployed you can't really use the old pyc files anyways.
This makes me think that best practice would be to run a WSGI application with the PYTHONDONTWRITEBYTECODE environment variable set.
Am I mistaken? | 0 | python,django,wsgi,pyc | 2012-10-26T06:09:00.000 | 0 | 13,081,659 | The best strategy for doing deployments is to write the deployed files into a new directory, and then use a symlink or similar to swap the codebase over in a single change. This has the side-benefit of also automatically clearing any old .pyc files.
That way, you get the best of both worlds - clean and atomic deployments, and the caching of .pyc if your webapp needs to restart.
If you keep the last N deployment directories around (naming them by date/time is useful), you also have an easy way to "roll back" to a previously deployed version of the code. If you have multiple server machines, you can also deploy to all of the machines but wait to switch them over until all of them have gotten the new code. | 0 | 593 | true | 0 | 1 | Is there any benefit to pyc files in a WSGI app where deployments happen several times per week? | 13,081,746 |
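The swap itself can be made atomic by renaming a freshly created symlink over the old one; a sketch (paths are placeholders):

    import os
    import time

    release = '/srv/app/releases/%s' % time.strftime('%Y%m%d-%H%M%S')
    os.makedirs(release)
    # ... copy the new code into `release` here ...
    os.symlink(release, '/srv/app/current.tmp')
    os.rename('/srv/app/current.tmp', '/srv/app/current')  # atomic on POSIX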
Imagine I have a script, let's say my_tools.py, that I import as a module. But my_tools.py is saved twice: at C:\Python27\Lib
and in the directory from which the importing script is run.
Can I change the order in which Python looks for my_tools.py? That is, check first whether it exists at C:\Python27\Lib and, if so, import it from there? | 0 | python,import | 2012-10-26T07:58:00.000 | 0 | 13,083,026 | If you want Python to search the installation directories before the current folder,
you can change sys.path.
Upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter;
when no script directory applies, sys.path[0] is the empty string, which directs Python to search for modules in the current directory first. You can move this entry to the end of the list; that way Python will search all other locations before falling back to the current directory. | 0 | 2,883 | false | 0 | 1 | Can I change the order where python looks for a module first? | 13,083,221
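In code, demoting the current directory looks like this:

    import sys

    # sys.path[0] is the script's directory; move it to the end so the
    # installation directories (e.g. C:\Python27\Lib) are searched first.
    sys.path.append(sys.path.pop(0))

    import my_tools   # now resolved from C:\Python27\Lib if it exists there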
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I'm looking for an API or library that gives me access to all features of Gmail from a Django web application.
I know I can receive and send email using IMAP or POP3. However, what I'm looking for are all the GMail features such as marking emails with star or important marker, adding or removing tags, etc.
I know there is a Settings API that allows me to create or delete labels and filters, but I haven't found anything that actually allows me to set labels to emails, or set emails as starred, and so on.
Can anyone give me a pointer? | 0 | python,django,gmail,gmail-imap | 2012-10-26T11:18:00.000 | 0 | 13,085,946 | I would suggest you look at context.io; I've used it before and it works great. | 0 | 1,086 | false | 1 | 1 | A Django library for Gmail | 13,087,381
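For what it's worth, Gmail's IMAP extensions already cover labels and stars from the standard library; a hedged sketch (credentials and mailbox are placeholders, and X-GM-LABELS is Gmail-specific):

    import imaplib

    imap = imaplib.IMAP4_SSL('imap.gmail.com')
    imap.login('user@example.com', 'app-password')
    imap.select('INBOX')

    typ, data = imap.uid('SEARCH', None, 'ALL')
    uid = data[0].split()[-1]                             # newest message

    imap.uid('STORE', uid, '+X-GM-LABELS', '(Receipts)')  # add a Gmail label
    imap.uid('STORE', uid, '+FLAGS', '(\\Flagged)')       # star the message
    imap.logout()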
I am not sure if this is the right forum to ask, but I'll give it a try.
A device sends an e-mail to my code, in which I receive the e-mail via a socket in Python and decode it with Message.get_payload() calls. However, I always have a \n.\n at the end of the message.
If the same device sends the same message to a genuine email client (e.g. Gmail), I get the correct original message without the \n.\n.
I would like to know what this closing set of special characters is in SMTP/e-mail handling/sending, and how to decode it away. | 0 | python,sockets,smtp | 2012-10-28T11:56:00.000 | 0 | 13,108,615 | That trailing dot is the SMTP end-of-DATA marker: the message body is terminated by a line containing a single ".", i.e. \r\n.\r\n on the wire. A genuine mail client strips this terminator before displaying the message, which is why Gmail doesn't show it. If you read the raw socket yourself, strip the terminator before handing the text to the email parser; otherwise get_payload() will return it as part of the body. | 0 | 92 | false | 0 | 1 | Mysterious characters at the end of E-Mail, received with socket in python | 13,108,630
There are several components involved in auth and the discovery-based service API.
How can one test request handlers wrapped with decorators used from oauth2client (e.g. oauth_required), httplib2, services and uploads?
Are there any commonly available mocks or stubs? | 0 | python,google-app-engine,google-drive-api,google-api-python-client | 2012-10-29T03:42:00.000 | 0 | 13,115,599 | There are the mock http and request classes that the apiclient package uses for its own testing. They are in apiclient/http.py and you can see how to use them throughout the test suite. | 0 | 259 | true | 1 | 1 | How can one test appengine/drive/google api based applications? | 13,125,588 |
Could I please have some ideas for a project utilising heuristics?
Thank you in advance for your help | 0 | python,vb.net,heuristics | 2012-10-30T06:47:00.000 | 0 | 13,133,986 | 'Heuristic' can roughly be translated as 'rule of thumb'.
It's not a programming-specific concept. | 0 | 4,568 | false | 0 | 1 | Programming with Heuristics? | 13,134,922 |
1 | 1 | 1 | 2 | 2 | 0 | 1.2 | 0 | Is there a way to use android.py module without installing SL4A?
I mean I have Python running on android successfully from Terminal Emulator.
Can I use that module without installing that layer (or if I can't install it anymore)? | 0 | android,python,sl4a,android-scripting | 2012-10-30T10:46:00.000 | 0 | 13,137,341 | No, because the SL4A package provides the facade to make device API calls, acting as a middleman. Without it you might be able to import the module, but you would not be able to make any API calls. | 0 | 1,256 | true | 0 | 1 | Using Python android.py module without SL4A | 13,215,655
1 | 2 | 0 | 1 | 4 | 0 | 1.2 | 0 | Is there a way to read the excel file properties using xlrd?
I refer not to cell presentation properties, but general workbook properties.
Thanks a lot in advance. | 0 | python,xlrd | 2012-10-30T13:22:00.000 | 0 | 13,139,949 | Apart from the username (last person to save the worksheet) the Book instance as returned by open_workbook does not seem to have any properties.
I recursively dumped the Book ( dumping its dict if a xlrd.BaseObject) and could not find anything in that way. The test files for sure had
an author, company and some custom metadata.
FWIW: LibreOffice does not seem to be able to find author and company either (or does not display them), but it does show custom metadata in the properties. | 0 | 1,896 | true | 0 | 1 | Read workbook properties using python and xlrd | 13,562,703 |
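That one exposed property can be read like this:

    import xlrd

    book = xlrd.open_workbook('test.xls')
    print book.user_name   # last user to save the file - about all xlrd exposes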
1 | 3 | 0 | 2 | 5 | 0 | 0.132549 | 0 | I'm looking into emacs as an alternative to Eclipse. One of my favorite features in Eclipse is being able to mouse over almost any python object and get a listing of its source, then clicking on it to go directly to its code in another file.
I know this must be possible in Emacs; I'm just wondering if it's already implemented in a script somewhere and, if so, how to get it up and running in Emacs.
Looks like my version is Version 24.2.
Also, since I'll be doing Django development, it would be great if there's a plugin that understands Django template syntax. | 0 | python,django,emacs,ide | 2012-10-31T14:28:00.000 | 0 | 13,160,217 | I also switched from Eclipse to Emacs and I must say that after adjusting to more text-focused ways of exploring code, I don't miss this feature at all.
In Emacs, you can just open a shell prompt (M-x shell). Then run IPython from within the Emacs shell and you're all set. I typically split my screen in half horizontally and make the bottom window thinner, so that it's like the Eclipse console used to be.
I added a feature in my .emacs that lets me "bring to focus" the bottom window and swap it into the top window. So when I am coding, if I come across something where I want to see the source code, I just type C-x c to swap the IPython shell into the top window, and then I type %psource < code thing > and it will display the source.
This covers 95%+ of the use cases I ever had for quickly getting the source in Eclipse. I also don't care about the need to type C-x b or C-x C-f to open the code files. In fact, after about 2 or 3 hours of programming, I find that almost every buffer I could possibly need will already be open, and I just type C-x b < start of file name > and then tab-complete it.
Since I have become more proficient at typing and not needing to move attention away to the mouse, I think this is now actually faster than the "quick" mouse-over plus F3 tactic in Eclipse. And to boot, having IPython open at the bottom is way better than the non-interactive Eclipse console. And you can use things like M-p and M-n to get the forward-backward behavior of IPython in terms of going back through commands.
The one thing I miss is tab completion in IPython. And for this, I think there are some add-ons that will do it but I haven't invested the time yet to install them.
Let me know if you want to see any of the elisp code for the options I mentioned above. | 0 | 314 | false | 1 | 1 | Link to python modules in emacs | 13,160,450 |
1 | 4 | 0 | 1 | 2 | 1 | 0.049958 | 0 | Not sure how to phrase this question properly, but this is what I intend to achieve using the hypothetical scenario outlined below -
A user's email to me has just the SUBJECT and BODY, the subject being the topic of email, and the body being a description of the topic in just one paragraph of max 1000 words. Now I would like to analyse this paragraph (in the BODY) using some computer language (python, maybe), and then come up with a list of most important words from the paragraph with respect to the topic mentioned in the SUBJECT field.
For example, if the topic of email is say iPhone, and the body is something like "the iPhone redefines user-interface design with super resolution and graphics. it is fully touch enabled and allows users to swipe the screen"
So the result I am looking for is a sort of list with the key terms from the paragraph as related to iPhone. Example - (user-interface, design, resolution, graphics, touch, swipe, screen).
So basically I am looking at picking the most relevant words from the paragraph. I am not sure what I can use or how to use it to achieve this result. Searching on Google, I read a little about natural language processing, Python, classification, etc. I just need a general approach for how to go about this - which technology/language to use, which areas to read up on, etc.
Thanks!
EDIT:::
I have been reading up in the meantime. To be precise, I am looking at HOW TO do this, using WHAT TOOL:
Generate related tags from a body of text using NLP which are based on synonyms, morphological similarity, spelling errors and contextual analysis. | 0 | python,nlp,classification,tagging,folksonomy | 2012-10-31T16:21:00.000 | 0 | 13,162,409 | I am not an expert but it seems like you really need to define a notion of "key term", "relevance", etc, and then put a ranking algorithm on top of that. This sounds like doing NLP, and as far as I know there is a python package called NLTK that might be useful in this field. Hope it helps! | 0 | 1,918 | false | 0 | 1 | picking the most relevant words from a paragraph | 13,162,597 |
1 | 1 | 0 | 2 | 6 | 0 | 0.379949 | 0 | I have a C++ project with a SWIG-generated Python front-end, which I build using CMake. I am now trying to find a convenient way to debug my mixed Python/C++ code. I am able to get a stack-trace of errors using gdb, but I would like to have some more fancy features such as the ability to step through the code and set breakpoints, for example using Eclipse.
Using the Eclipse generator for CMake I am able to generate a project that I am able to import into Eclipse. This works fine and I am also able to step through the pure C++ executables. But then the problem starts.
First of all, I am not able to build the Python front-end from inside Eclipse. From command line I just do "make python", but there is no target "python" in the Eclipse project.
Secondly, once I've compiled the Python front-end, I have no clue how to step through a Python script that contains calls to my wrapped C++ classes. Eclipse has debugging both for Python and for C++, but can they be combined? | 0 | c++,python,eclipse,cmake,swig | 2012-11-01T13:29:00.000 | 0 | 13,178,116 | some more fancy features such as the ability to step through the code and set breakpoints, for example using Eclipse
how are those features "fancy"? You can already do those in pdb for Python, or gdb for C++.
I'd suggest running the Python code with pdb (or using pdb.set_trace() to interrupt execution at an interesting point), and attaching gdb to the process in a separate terminal. Use pdb to set breakpoints in, and step through, your Python code. Use gdb to set breakpoints in, and step through, your C++ code. When pdb steps over a native call, gdb will take over. When gdb continue allows Python execution to resume, pdb will take over.
This should let you jump between C++ and Python breakpoints without needing to trace through the interpreter.
Disclaimer: I largely think IDEs are rubbish bloatware, so if Eclipse does have a good way to integrate this, I wouldn't know about it anyway. | 0 | 3,277 | false | 0 | 1 | Debugging mixed Python/C++ code in Eclipse | 13,178,922 |
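Concretely, the Python half of that workflow is just:

    import os
    import pdb

    print('attach gdb in another terminal with: gdb -p %d' % os.getpid())
    pdb.set_trace()   # Python-side breakpoint; set C++ breakpoints in gdb, then continue both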
I have a Python messaging application that uses ZMQ. Each object has a PUB and a SUB queue, and they connect to each other. In some particular cases I want to wait for a particular message in the SUB queue, leaving the ones I am not interested in for later processing.
Right now, I am getting all messages and queuing those I am not interested in into a Python Queue, until I find the one I am waiting for. But this means that in each processing routine I need to check the Python Queue for old messages first. Is there a better way? | 0 | python,filter,zeromq,pyzmq | 2012-11-01T19:16:00.000 | 0 | 13,183,980 | A zmq publisher doesn't do any queueing... it drops messages when there isn't a SUB available to receive those messages.
The better way in your situation would be to create a generic sub who only will subscribe to certain messages of interest. That way you can spin up all of the different SUBs (even within one thread and using a zmq poller) and they will all process messages as they come from the PUB....
This is what the PUB/SUB pattern is primarily used for. Subs only subscribe to messages of interest, thus eliminating the need to cycle through a queue of messages at every loop looking for messages of interest. | 0 | 249 | false | 0 | 1 | Get a particular message from ZMQ Sub Queue | 13,200,669 |
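With pyzmq that looks like (endpoint and topic prefix are placeholders):

    import zmq

    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://127.0.0.1:5556')
    sub.setsockopt(zmq.SUBSCRIBE, 'orders.')  # only messages starting with this prefix arrive
    while True:
        print sub.recv()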
I'm using Redis for Python to store and process about 4 million keys and their values. Then I found that Redis writes to disk too often, which really costs time. So I changed "save 60 10000" in the Redis config file to "save 60 50000". But it still writes to disk every 10000 key changes. I've rebooted the Redis server.
PS: I want to use dispy and Redis to make my application a distributed program. Is it feasible? I used "redis dispy distributed system" as keywords and got nothing from Google.
Thank you very much. | 0 | python,redis | 2012-11-02T06:13:00.000 | 0 | 13,190,253 | I've figured it out.
I'm using Win7. The Redis server doesn't load the config file automatically each time it runs; it has to be started with the config file path, or the save frequency has to be changed at runtime from redis-cli with CONFIG SET save "60 50000". | 0 | 310 | false | 0 | 1 | Redis save time config. It operates hardisk too often | 13,205,548
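The same runtime change can be made from Python with redis-py:

    import redis

    r = redis.StrictRedis(host='localhost', port=6379)
    r.config_set('save', '60 50000')     # snapshot after 50000 changes within 60s
    print r.config_get('save')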
1 | 1 | 0 | 1 | 3 | 0 | 0.197375 | 1 | I'm trying to automate a process that collects data on one (or more) AWS instance(s), uploads the data to S3 hourly, to be retrieved by a decoupled process for parsing and further action. As a first step, I whipped up some crontab-initiated shell script (running in Ubuntu 12.04 LTS) that calls the boto utility s3multiput.
For the most part, this works fine, but very occasionally (maybe once a week) the file fails to appear in the s3 bucket, and I can't see any error or exception thrown to track down why.
I'm using the s3multiput utility included with boto 2.6.0. Python 2.7.3 is the default python on the instance. I have an IAM Role assigned to the instance to provide AWS credentials to boto.
I have a crontab calling a script that calls a wrapper that calls s3multiput. I included the -d 1 flag on the s3multiput call, and redirected all output on the crontab job with 2>&1 but the report for the hour that's missing data looks just like the report for the hour before and the hour after, each of which succeeded.
So, 99% of the time this works, but when it fails I don't know why, and I'm having trouble figuring out where to look. I only find out about the failure later, when the parser job tries to pull the data from the bucket and it's not there. The data is safe and sound in the directory it should have uploaded from, so I can do it manually, but I would rather not have to.
I'm happy to post the ~30-40 lines of related code if helpful, but wondered if anybody else had run into this and it sounded familiar.
Some grand day I'll come back to this part of the pipeline and rewrite it in python to obviate s3multiput, but we just don't have dev time for that yet.
How can I investigate what's going wrong here with the s3multiput upload? | 0 | python,crontab,boto | 2012-11-02T22:13:00.000 | 0 | 13,203,745 | First, I would try updating boto; a commit to the development branch mentions logging when a multipart upload fails. Note that doing so will require using s3put instead, as s3multiput is being folded into s3put. | 0 | 515 | false | 1 | 1 | Silent failure of s3multiput (boto) upload to s3 from EC2 instance | 13,203,938 |
I'm looking to write a script in Python 2.x that will scan a physical drive (physical and not logical) for specific strings of text that will range in size (chat artifacts). I have my headers and footers for the strings, so I am just wondering how best to scan through the drive. My concern is that if I split it into, say, 250MB chunks and read this data into RAM before parsing it for the header and footer, it may be that the header is there but the footer is in the next 250MB chunk.
So in essence, I want to scan PhysicalDevice0 for strings starting with "ABC", for example, and ending in "XYZ", and copy all the content in between. I'm also unsure whether to scan the data as ASCII or hex.
As drives get bigger, I'm looking to do this in the quickest manner possible.
Any suggestions? | 0 | python,computer-forensics | 2012-11-03T19:28:00.000 | 0 | 13,212,673 | Your problem can be formulated as "how do I search in a very long file with no line structure." It's no different from what you'd do if you were reading line-oriented text one line at a time: Imagine you're reading a text file block by block, but have a line-oriented regexp to search with; you'd search up to the last complete line in the block you've read, then hold on to the final incomplete line and read another block to extend it with. So, you don't start afresh with each new block read. Think of it as a sliding window; you only advance it to discard the parts you were able to search completely.
Do the same here: write your code so that the strings you're matching never hit the edge of the buffer. E.g., if the header you're searching for is 100 bytes long: read a block of text; check if the complete pattern appears in the block; advance your reading window to 100 bytes before the end of the current block, and add a new block's worth of text after it. Now you can search for the header without the risk of missing it. Once you find it, you're extracting text until you see the stop pattern (the footer). It doesn't matter if it's in the same block or five blocks later: your code should know that it's in extracting mode until the stop pattern is seen. | 0 | 197 | false | 0 | 1 | Extracting strings from a physical drive | 13,214,640 |
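The sliding window described above, written out (the device path, header and footer come from the question's example; the block size is arbitrary):

    def carve(path, header, footer, block_size=1 << 20):
        """Yield header..footer spans from a raw device, reading block by block."""
        f = open(path, 'rb')
        window = ''
        while True:
            data = f.read(block_size)
            if not data:
                break
            window += data
            while True:
                i = window.find(header)
                if i < 0:
                    # Keep only a tail that could still hold a partial header.
                    window = window[-(len(header) - 1):] if len(header) > 1 else ''
                    break
                j = window.find(footer, i + len(header))
                if j < 0:
                    window = window[i:]   # header seen, footer not yet: read more
                    break
                yield window[i:j + len(footer)]
                window = window[j + len(footer):]
        f.close()

    for hit in carve(r'\\.\PhysicalDrive0', 'ABC', 'XYZ'):
        print len(hit), repr(hit[:40])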
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | Is is posible to run a python script from Cruisecontrol.Net ? Is there a CCNEt task or a nant task that can be used? | 0 | python,cruisecontrol.net,nant | 2012-11-05T08:50:00.000 | 1 | 13,228,677 | I don't think there is a built in python task, but you should be able to execute it by crafting an exec task. | 0 | 386 | true | 0 | 1 | Run python script from CruiseControl.NET | 13,293,387 |
1 | 2 | 0 | 5 | 1 | 1 | 1.2 | 0 | I have a Python program that has several slow imports. I'd like to delay importing them until they are needed. For instance, if a user is just trying to print a help message, it is silly to import the slow modules. What's the most Pythonic way to do this?
I'll add a solution I was playing with as an answer. I know you all can do better, though. | 0 | python,python-import | 2012-11-05T23:05:00.000 | 0 | 13,241,827 | Just import them where they're needed. After a module has been imported once, it is cached so that any subsequent imports will be quick. If you import the same module 20 times, only the first one will be slow. | 0 | 421 | true | 0 | 1 | What's the best way to do a just-in-time import of Python libraries? | 13,241,883
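The idiom looks like this (heavy_module is a hypothetical slow import):

    import sys

    def main():
        if '--help' in sys.argv:
            print 'usage: tool [--help] FILE'   # no slow imports paid for here
            return
        import heavy_module                     # deferred until actually needed
        heavy_module.run(sys.argv[1:])

    if __name__ == '__main__':
        main()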
I am interested in building a mail service that allows you to incorporate custom logic in your mail server.
For example, user A can reply to [email protected] once and subsequent emails from user A to [email protected] will not go through until certain actions are taken.
I am looking for something simple and customizable, preferably open-sourced. I am fluent in most modern languages.
What email servers do you guys recommend for this? | 0 | java,python,ruby,email,clojure | 2012-11-06T02:43:00.000 | 0 | 13,243,581 | Almost every mail server has some form of extensibility where you can insert logic into the mail-flow process; it's how some spam filters were implemented before they were built directly into the servers. Personally, I use Exchange Server, which has a variety of extension points and APIs, such as SMTP sinks.
However, this question is off-topic and shouldn't be on StackOverflow.
I suggest you build your own server - implementing a server-side version of SMTP and IMAP can be done by a single person - or use an existing library; it shouldn't take you more than a year if you put in a couple of hours each day. | 0 | 92 | false | 0 | 1 | Customizable mail server - what are my options? | 13,243,729
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I'm working on a project which requires counting the number of tweets that meet the parameters of a query. I'm working in Python, using Twython as my interface to Twitter.
A few questions, though: how do you record which tweets have already been accounted for? Would you simply make a note of the last tweet ID and ignore it plus all previous ones? What is the easiest implementation of this?
As another optimization question, I want to make sure that the number of tweets missed by the counter is minimal; is there any way to ensure this?
Thanks so much. | 0 | python,twitter,twython | 2012-11-07T01:07:00.000 | 0 | 13,261,858 | Considering the case of similar tweets and retweets, I would recommend making a semantic note of the whole tweet: extract the text part of each tweet and do a dictionary lookup.
Using the tweet ID, as noted above, is simpler, but at the cost of missing such near-duplicates. | 0 | 245 | false | 0 | 1 | How to count tweets from query without double counting? | 16,578,360
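With Twython, the since_id approach from the question might look like this (credentials and query are placeholders; the call matches Twython's v1.1-style search):

    from twython import Twython

    APP_KEY = APP_SECRET = OAUTH_TOKEN = OAUTH_TOKEN_SECRET = 'placeholder'
    twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)

    last_seen = 0
    results = twitter.search(q='my query', since_id=last_seen, count=100)
    for tweet in results['statuses']:
        last_seen = max(last_seen, tweet['id'])   # remember the newest id seen
    print len(results['statuses']), 'new tweets this pass'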
1 | 2 | 1 | 1 | 2 | 0 | 1.2 | 0 | Where does boost python register from python converters for builtin types such as from PyLong_Type to double?
I want to define a converter that can take a numpy.float128 from python and returns a long double for functions in C++. I already did it the other way round, the to_python converter. For that I tweaked builtin_converters.hpp but I didn't find how boost python does the from python conversion. | 0 | c++,python,boost-python | 2012-11-07T10:36:00.000 | 0 | 13,267,912 | The from python conversion is in fact done in builtin_converters.cpp and not in the header part of the library. I Copied this file and deleted everything except the converter for long double, which I was then able to modify. | 0 | 300 | true | 0 | 1 | From python converter for builtin types | 13,363,934 |
8 | 11 | 0 | 1 | 29 | 1 | 0.01818 | 0 | I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks | 0 | python,eclipse | 2012-11-08T22:04:00.000 | 0 | 13,298,630 | First of all make sure that you have the same Python interpreter configured as the project has. You can change it under:
Window > Preferences > PyDev > Interpreters > Python Interpreters
As long as the project was created using Eclipse, you can use the import functionality.
Go to:
File > Import... > General > Existing Projects into Workspace
Choose Select root directory: and browse to your project location. Click Finish and you are done. | 0 | 46,104 | false | 0 | 1 | How do I import a pre-existing python project into Eclipse? | 46,827,826 |
8 | 11 | 0 | 10 | 29 | 1 | 1 | 0 | I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks | 0 | python,eclipse | 2012-11-08T22:04:00.000 | 0 | 13,298,630 | At time of writing none of the given answers worked.
This is how it's done:
Locate the directory containing the Pydev project
Delete the PyDev project files (important as Eclipse won't let you create a new project in the same location otherwise)
In Eclipse, File->New->Pydev Project
Name the project the same as your original project
For project contents, browse to location containing Pydev project
Select an interpreter
Follow rest of the menu through
Other answers using Eclipse project importing result in PyDev losing track of packages, turning them all into folders only.
This does lose any project settings previously set; please edit this answer if that can be avoided. Hopefully the PyDev devs will add project import functionality some time. | 0 | 46,104 | false | 0 | 1 | How do I import a pre-existing python project into Eclipse? | 25,244,825
8 | 11 | 0 | 0 | 29 | 1 | 0 | 0 | I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks | 0 | python,eclipse | 2012-11-08T22:04:00.000 | 0 | 13,298,630 | I just suffered through this problem for a few hours. My issue may have been different than yours...Pydev did not show up as an import option (as opposed to C projects). My solution is to drag and drop. Just create a new project (name it the same as your old) and then drop your old project into the new project folder as displayed in eclipse...3 hours later and it's drag and drop... | 0 | 46,104 | false | 0 | 1 | How do I import a pre-existing python project into Eclipse? | 16,728,953 |
8 | 11 | 0 | 3 | 29 | 1 | 1.2 | 0 | I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks | 0 | python,eclipse | 2012-11-08T22:04:00.000 | 0 | 13,298,630 | Following are the steps
Select the PyDev perspective
Right-click in the project pane and click "Import"
From the list, select "Existing Projects into Workspace"
Select the root directory by clicking Next
Optionally you can choose to copy the project into the workspace
thanks | 0 | 46,104 | true | 0 | 1 | How do I import a pre-existing python project into Eclipse? | 13,299,322 |
8 | 11 | 0 | 9 | 29 | 1 | 1 | 0 | I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks | 0 | python,eclipse | 2012-11-08T22:04:00.000 | 0 | 13,298,630 | make sure pydev interpreter is added, add otherwise
windows->preferences->Pydev->Interpreter-Python
then create new pydev project,
give the same name
then don't use default location, browse to point the project location. | 0 | 46,104 | false | 0 | 1 | How do I import a pre-existing python project into Eclipse? | 22,244,064 |
8 | 11 | 0 | 14 | 29 | 1 | 1 | 0 | I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks | 0 | python,eclipse | 2012-11-08T22:04:00.000 | 0 | 13,298,630 | In my case when i am trying to import my existing perforce project , it gives error no project found on windows machine. On linux i was able to import project nicely.
For Eclipse Kepler, i have done like below.
Open eclipse in pydev perspective.
Create a new pydev project in your eclipse workspace with the same name which project you want to import.
By now in your eclipse workspace project dir , you must be having .project and .pydevproject files.
Copy these two files and paste it to project dir which you want to import.
Now close and delete the pydev project you created and delete it from local disk as well.
Now you can use import utility to import project in eclipse. | 0 | 46,104 | false | 0 | 1 | How do I import a pre-existing python project into Eclipse? | 31,423,129 |
8 | 11 | 0 | 15 | 29 | 1 | 1 | 0 | I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks | 0 | python,eclipse | 2012-11-08T22:04:00.000 | 0 | 13,298,630 | New Project
Don't use the default location
Browse to the existing project location ...
If it's an existing Eclipse project with project files that have correct paths for your system, you can just open the .project file ... | 0 | 46,104 | false | 0 | 1 | How do I import a pre-existing python project into Eclipse? | 13,298,723
8 | 11 | 0 | 0 | 29 | 1 | 0 | 0 | I am using Eclipse for Python.
How do I import an existing project into Eclipse in the current workspace?
Thanks | 0 | python,eclipse | 2012-11-08T22:04:00.000 | 0 | 13,298,630 | After following the steps outlined by @Shan, if the folders under the root folder are not shown as packages:
Right-click on the root folder in the PyDev Package Explorer
Select PyDev > Set as Source Folder
This adds the root folder to the PYTHONPATH, and the folders will now appear as packages. | 0 | 46,104 | false | 0 | 1 | How do I import a pre-existing python project into Eclipse? | 28,258,101 |
1 | 1 | 0 | 6 | 5 | 0 | 1.2 | 0 | Our website has developed a need for real-time updates, and we are considering various comet/long-polling solutions. After researching, we have settled on nginx as a reverse proxy to 4 tornado instances (hosted on Amazon EC2). We are currently using the traditional LAMP stack and have written a substantial amount of code in PHP. We are willing to convert our PHP code to Python to better support this solution. Here are my questions:
Assuming a quad-core processor, is it ok for nginx to be running on the same server as the 4 tornado instances, or is it recommended to run two separate servers: one for nginx and one for the 4 tornado processes?
Is there a benefit to using HAProxy in front of Nginx? Doesn't Nginx handle load-balancing very well by itself?
From my research, Nginx doesn't appear to have a great URL redirecting module. Is it preferred to use Redis for redirects? If so, should Redis be in front of Nginx, or behind?
A large portion of our application code will not be involved in real-time updates. This code contains several database queries and filesystem reads, so it clearly isn't suitable for a non-blocking app server. From my research, I've read that the blocking issue is mitigated simply by having multiple Tornado instances, while others suggest using a separate app server (ex. Gunicorn/Django/Flask) for blocking calls. What is the best way to handle blocking calls when using a non-blocking server?
Converting our code from PHP to Python will be a lengthy process. Is it acceptable to simultaneously run Apache/PHP and Tornado behind Nginx, or should we just stick to one language (either Tornado with Gunicorn/Django/Flask, or Tornado by itself)? | 0 | php,python,django,nginx,tornado | 2012-11-08T22:31:00.000 | 1 | 13,299,023 | I'll go point by point:
Yes. It's OK to run Tornado and nginx on one server. You can also use nginx as a reverse proxy for Tornado.
HAProxy will give you a benefit if you have more than one server instance. It will also allow you to proxy WebSockets directly to Tornado.
Actually, nginx can be used for redirects with no problems. I haven't heard about using Redis for redirects - it's a key/value store... maybe you mean something else?
Again, you can write the blocking part in Django and the non-blocking part in Tornado. Tornado also has some non-blocking libs for DB queries. Not sure that you need the powers of Django here.
Yes, it's OK to run Apache behind nginx. A lot of projects use nginx in front of Apache for serving static files.
Actually the question is quite basic - so is the answer. I can be more detailed on any of the points if you wish. | 0 | 2,544 | true | 1 | 1 | Apache/PHP to Nginx/Tornado/Python | 13,304,821 |
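To make the first point concrete, here is a minimal sketch of one Tornado backend instance. The handler, module name, and port range are assumptions for illustration only - nginx would be configured separately to proxy to several copies of this process (e.g. on ports 8001-8004):

    # backend.py -- a sketch of one Tornado instance; run one copy per port
    # and let nginx load-balance across them (the ports are hypothetical)
    import sys
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello from a Tornado backend")

    application = tornado.web.Application([
        (r"/", MainHandler),
    ])

    if __name__ == "__main__":
        port = int(sys.argv[1]) if len(sys.argv) > 1 else 8001
        application.listen(port)  # nginx proxies requests to 127.0.0.1:<port>
        tornado.ioloop.IOLoop.instance().start()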
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I wrote a Python script to do some experiments with the Mandelbrot set. I used a simple function to find Mandelbrot set points. I was wondering how much efficiency I can gain by calling a simple C function to do this part of my code. Please consider that this function would be called many times from Python.
What is the effect on run time? And what other factors should I be aware of? | 0 | python,c,performance | 2012-11-09T16:12:00.000 | 0 | 13,311,732 | You'll want the Python calls to your C function to be as few as possible. If you can call the C function once from Python and get it to do most/all of the work, that would be better. | 0 | 117 | false | 0 | 1 | Efficiency of calling C function from Python | 13,311,964 |
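A sketch of that advice using ctypes - the shared library libmandel.so and its mandel_grid function are hypothetical, assumed compiled from your own C code. One Python-to-C call computes the whole grid instead of crossing the boundary once per point:

    import ctypes

    lib = ctypes.CDLL("./libmandel.so")  # hypothetical compiled C library
    # assumed C signature:
    # void mandel_grid(double x0, double y0, double dx, double dy,
    #                  int w, int h, int max_iter, int *out);
    lib.mandel_grid.restype = None
    lib.mandel_grid.argtypes = ([ctypes.c_double] * 4 + [ctypes.c_int] * 3 +
                                [ctypes.POINTER(ctypes.c_int)])

    w, h, max_iter = 800, 600, 256
    out = (ctypes.c_int * (w * h))()  # result buffer filled by the C side
    lib.mandel_grid(-2.0, -1.5, 3.0 / w, 3.0 / h, w, h, max_iter, out)
    iterations = list(out)  # escape-iteration count per pixel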
2 | 4 | 0 | 1 | 4 | 0 | 0.049958 | 0 | What's the fastest way to serve static files in Python? I'm looking for something equal to, or close enough to, Nginx's static file serving.
I know of SimpleHTTPServer, but I'm not sure if it can handle serving multiple files efficiently and reliably.
Also, I don't mind it being part of a lib/framework of some sort as long as that lib/framework is lightweight. | 0 | python | 2012-11-12T07:59:00.000 | 0 | 13,340,080 | I would highly recommend using a 3rd-party HTTP server to serve static files.
Servers like nginx are heavily optimized for the task at hand, parallelized, and written in fast languages.
Python is tied to one processor and interpreted. | 0 | 3,272 | false | 0 | 1 | Python fast static file serving | 13,340,136 |
2 | 4 | 0 | 1 | 4 | 0 | 0.049958 | 0 | What's the fastest way to serve static files in Python? I'm looking for something equal to, or close enough to, Nginx's static file serving.
I know of SimpleHTTPServer, but I'm not sure if it can handle serving multiple files efficiently and reliably.
Also, I don't mind it being part of a lib/framework of some sort as long as that lib/framework is lightweight. | 0 | python | 2012-11-12T07:59:00.000 | 0 | 13,340,080 | If you're looking for a one-liner, you can do the following:
$> python -m SimpleHTTPServer
This will not fulfill all the tasks required, but it's worth mentioning that this is the simplest way :-) | 0 | 3,272 | false | 0 | 1 | Python fast static file serving | 13,340,760 |
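The same stdlib server can also be started programmatically when you need a non-default port or want to embed it in a script - a minimal Python 2 sketch (the port is an arbitrary choice):

    # serve the current working directory over HTTP (Python 2 stdlib only)
    import SimpleHTTPServer
    import SocketServer

    PORT = 8080  # arbitrary example port
    handler = SimpleHTTPServer.SimpleHTTPRequestHandler
    httpd = SocketServer.TCPServer(("", PORT), handler)
    httpd.serve_forever()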
1 | 2 | 0 | 1 | 1 | 0 | 1.2 | 0 | I'm using Debian with usbmount. I want to check if a USB memory stick is available to write to.
Currently I check if a specific dir exists on the USB drive. If this is True, I can then write the rest of my files - os.path.isdir('/media/usb0/Test_Folder')
I would like to create Test_Folder if it doesn't exist. However, /media/usb0/ exists even if no USB device is there, so I can't just os.mkdir('/media/usb0/Test_Folder'), as it creates the folder locally.
I need a check that there is a USB drive available on /media/usb0/ to write to before creating the folder. Is there a quick way of doing this? | 0 | python,usb,debian | 2012-11-12T14:11:00.000 | 1 | 13,345,239 | cat /etc/mtab | awk '{ print $2 }'
will give you a list of mountpoints. You can also read /etc/mtab yourself and just check whether anything is mounted under /media/usb0 (file format: whitespace-divided, most likely a single space). The second column is the mount destination, the first is the source. | 0 | 905 | true | 0 | 1 | Python usbmount checking for device before writing | 13,345,336 |
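A sketch of that check done from Python - os.path.ismount is the most direct test, with an /etc/mtab scan as a fallback; the mountpoint and folder names come from the question:

    import os

    def usb_mounted(mountpoint='/media/usb0'):
        # direct test: is anything actually mounted at this path?
        if os.path.ismount(mountpoint):
            return True
        # fallback: scan /etc/mtab, whose second column is the mount destination
        with open('/etc/mtab') as mtab:
            return any(line.split()[1] == mountpoint for line in mtab)

    if usb_mounted():
        target = '/media/usb0/Test_Folder'
        if not os.path.isdir(target):
            os.mkdir(target)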
1 | 4 | 0 | 5 | 4 | 0 | 0.244919 | 0 | I've written an analytical pipeline in Python that I think will be useful to other people. I'm wondering whether it is customary to publish such scripts on GitHub, whether there's a specific place to do this for Python scripts, or even if there's a more specific place for biology-related Python scripts. | 0 | python,publishing,bioinformatics,biopython | 2012-11-13T03:53:00.000 | 0 | 13,355,358 | While there are many approaches to this, one of the customary solutions would indeed be to publish it on GitHub and then link to it from your research institution's website. | 0 | 521 | false | 0 | 1 | Where to deposit a Python script that performs bioinformatics analyses? | 13,355,383 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a Python script that runs as a daemon process. I want to be able to stop and start the process via a web page. I made a PHP script that runs exec() on the Python daemon. Any ideas?
Traceback (most recent call last):
  File "/home/app/public_html/daemon/daemon.py", line 6, in <module>
    from socketServer import ExternalSocketServer, InternalSocketServer
  File "/home/app/public_html/daemon/socketServer.py", line 3, in <module>
    import json, asyncore, socket, MySQLdb, hashlib, urllib, urllib2, logging, traceback, sys
  File "build/bdist.linux-x86_64/egg/MySQLdb/__init__.py", line 19, in <module>
  File "build/bdist.linux-x86_64/egg/_mysql.py", line 7, in <module>
  File "build/bdist.linux-x86_64/egg/_mysql.py", line 4, in __bootstrap__
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 882, in resource_filename
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 1351, in get_resource_filename
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 1373, in _extract_resource
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 962, in get_cache_path
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 928, in extraction_error
pkg_resources.ExtractionError: Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg cache:
  [Errno 13] Permission denied: '//.python-eggs'
The Python egg cache directory is currently set to:
  //.python-eggs
Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory. | 0 | php,python,caching,egg | 2012-11-13T05:33:00.000 | 1 | 13,356,024 | Make sure whatever user PHP is running under has appropriate permissions. You can try opening a pipe and changing users, or just use Apache's suexec. | 0 | 189 | false | 0 | 1 | Running Python from PHP | 13,356,068 |
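One sketch of a fix on the Python side - pointing the egg cache at a writable path before the MySQLdb import runs; /tmp/python-eggs is just an assumed directory writable by the web server's user:

    # at the very top of daemon.py, before any import that unpacks eggs
    import os
    os.environ['PYTHON_EGG_CACHE'] = '/tmp/python-eggs'  # assumed writable path

    from socketServer import ExternalSocketServer, InternalSocketServer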
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I am facing difficulty when trying to run my tests. Here is what I did:
Created a Java project with one class which has one method called hello(String name)
Exported this as a jar and kept it in the same directory where I keep my test case file.
My test case looks like this:
*Setting*      *Value*        *Value*        *Value*        *Value*        *Value*
Library        MyLibrary

*Variable*     *Value*        *Value*        *Value*        *Value*        *Value*

*Test Case*    *Action*       *Argument*     *Argument*     *Argument*     *Argument*
MyTest
               hello          World

*Keyword*      *Action*       *Argument*     *Argument*     *Argument*     *Argument*
I always get the following error:
Error in file 'C:\Users\yahiya\Desktop\robot-practice\testcase_template.tsv' in table 'Setting': Importing test library 'MyLibrary' failed: ImportError: No module named MyLibrary
I have configured PYTHONPATH in the system variables on my Windows machine.
Please let me know what I am doing wrong here.
Thanks | 0 | java,python,robotframework | 2012-11-13T07:58:00.000 | 0 | 13,357,227 | Try to put your library into this folder:
...YourPythonFolder\Lib\site-packages\
or, if this doesn't work, make a folder named "MyLibrary" in "site-packages" and put your library there.
This should work. | 0 | 2,367 | false | 1 | 1 | Robot Framework - using User Libraries | 13,602,048 |
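Worth noting as a caveat: the question's library was exported as a Java jar, which standard (CPython) Robot Framework cannot import - that requires running Robot Framework under Jython. If the library were written in plain Python instead, a minimal MyLibrary.py placed on the PYTHONPATH would be enough - a sketch:

    # MyLibrary.py -- each public method becomes a Robot Framework keyword
    class MyLibrary(object):

        def hello(self, name):
            # the "hello    World" step in the test maps to this method
            print "Hello, %s!" % name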