Available Count | AnswerCount | GUI and Desktop Applications | Users Score | Q_Score | Python Basics and Environment | Score | Networking and APIs | Question | Database and SQL | Tags | CreationDate | System Administration and DevOps | Q_Id | Answer | Data Science and Machine Learning | ViewCount | is_accepted | Web Development | Other | Title | A_Id
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 4 | 0 | 4 | 19 | 0 | 0.197375 | 0 | I'm writing unit tests for a portion of an application that runs as an HTTP server. The approach I have been trying to take is to import the module that contains the HTTP server and start it. Then the unit tests will use urllib2 to connect, send data, and check the response.
Our HTTP server is using Twisted. One problem here is that I'm just not that familiar with Twisted :)
Now, I instantiate our HTTP server and start it in the setUp() method and then I stop it in the tearDown() method.
Problem is, Twisted doesn't appear to like this, and it will only run one unit test. After the first one, the reactor won't start anymore.
I've searched and searched and searched, and I just can't seem to find an answer that makes sense.
Am I taking the wrong approach entirely, or just missing something obvious? | 0 | python,unit-testing,twisted | 2009-10-16T01:03:00.000 | 1 | 1,575,966 | As others mentioned, you should be using Trial for unit tests in Twisted.
You also should be unit testing from the bottom up - that's what the "unit" in unit testing implies. Test your data and logic before you test your interface. For an HTTP interface, you should be calling processGET, processPOST, etc. with a mock request, but you should only be doing this after you've tested what these methods are calling. Each test should assume that the units tested elsewhere are working as designed.
If you're speaking HTTP, or you need a running server or other state, you're probably making higher level tests such as functional or integration tests. This isn't a bad thing, but you might want to rephrase your question. | 0 | 5,134 | false | 0 | 1 | Python - Twisted and Unit Tests | 1,580,776 |
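The bottom-up, mock-request approach described in that answer can be sketched with nothing but the standard library. Note the names here (`RequestStub`, `process_get`) are hypothetical stand-ins for illustration, not Twisted's actual request API:

```python
import unittest

class RequestStub:
    """Hypothetical stand-in for a framework request object."""
    def __init__(self, path):
        self.path = path
        self.written = []

    def write(self, data):
        self.written.append(data)

def process_get(request):
    # the unit under test: pure logic, no running server or reactor needed
    request.write(b"resource at " + request.path.encode())

class ProcessGetTest(unittest.TestCase):
    def test_writes_response(self):
        request = RequestStub("/index")
        process_get(request)
        self.assertEqual(request.written, [b"resource at /index"])
```

Because the handler only sees the stub, the test never touches the reactor, so the one-reactor-per-process restriction never comes into play.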
1 | 3 | 1 | 0 | 7 | 1 | 0 | 0 | I've got a library written in C++ which I wrap using SWIG and use in python. Generally there is one class with few methods. The problem is that calling these methods may be time consuming - they may hang my application (GIL is not released when calling these methods). So my question is:
What is the simplest way to release GIL for these method calls?
(I understand that if I used a C library I could wrap this with some additional C code, but here I use C++ and classes) | 0 | c++,python,swig,gil | 2009-10-16T08:07:00.000 | 0 | 1,576,737 | You can use the same API call as for C. No difference. Include "Python.h" and call the appropriate function.
Also, see if SWIG doesn't have a typemap or something to indicate that the GIL should not be held for a specific function. | 0 | 3,393 | false | 0 | 1 | Releasing Python GIL while in C++ code | 1,576,959
1 | 2 | 0 | 1 | 1 | 0 | 1.2 | 0 | I am experiencing issues with my SVN post-commit hook and the fact that it is executed with an empty environment. Everything was working fine till about two weeks ago when my systems administrator upgraded a few things on the server.
My post-commit hook executes a Python script that uses a SVN module to email information about the commit to me. After the recent upgrades, however, Python cannot find the SVN module when executed via the hook. When executed by hand (ie with all environment variables intact) everything works fine.
I have tried setting the PYTHONPATH variable in my post-commit hook directly (PYTHONPATH=/usr/local/lib/svn-python), but that makes no difference.
How can I tell Python where the module is located? | 0 | python,svn | 2009-10-16T08:21:00.000 | 0 | 1,576,784 | Got it! I missed the export in my post-commit hook script!
It should have been:
export PYTHONPATH=/usr/local/lib/svn-python
Problem solved :) | 0 | 419 | true | 0 | 1 | SVN hook environment issues with Python script | 1,593,977 |
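If exporting the variable in the hook script were not an option, the same effect can be had from inside the Python script itself. A sketch, assuming the module lives at the path from the question (adjust for your system):

```python
import sys

# This must run before "import svn"; the path below is the one from
# the question and may differ on your installation.
SVN_PYTHON_PATH = "/usr/local/lib/svn-python"
if SVN_PYTHON_PATH not in sys.path:
    sys.path.insert(0, SVN_PYTHON_PATH)
```

This keeps the script working even when the hook's environment is empty, at the cost of hard-coding the path.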
4 | 7 | 1 | 0 | 20 | 0 | 0 | 0 | I know a bunch of scripting languages, (python, ruby, lua, php) but I don't know any compiled languages like C/C++ , I wanted to try and speed up some python code using cython, which is essentially a python -> C compiler, aimed at creating C extensions for python. Basically you code in a stricter version of python which compiles into C -> native code.
here's the problem, I don't know C, yet the cython documentation is aimed at people who obviously already know C (nothing is explained, only presented), and is of no help to me, I need to know if there are any good cython tutorials aimed at python programmers, or if I'm gonna have to learn C before I learn Cython.
bear in mind I'm a competent python programmer, i would much rather learn cython from the perspective of the language I'm already good at, rather than learn a whole new language in order to learn cython.
1) PLEASE don't recommend psyco
edit: ANY information that will help understand the official cython docs is useful information | 0 | python,c,cython | 2009-10-17T12:33:00.000 | 0 | 1,582,105 | Cython does not support threads well at all. It holds the GIL (Global Interpreter Lock) the entire time! This makes your code thread-safe by (virtually) disabling concurrent execution. So I wouldn't use it for general purpose development. | 0 | 8,650 | false | 0 | 1 | Noob-Ready Cython Tutorials | 4,445,452
4 | 7 | 1 | 0 | 20 | 0 | 0 | 0 | I know a bunch of scripting languages, (python, ruby, lua, php) but I don't know any compiled languages like C/C++ , I wanted to try and speed up some python code using cython, which is essentially a python -> C compiler, aimed at creating C extensions for python. Basically you code in a stricter version of python which compiles into C -> native code.
here's the problem, I don't know C, yet the cython documentation is aimed at people who obviously already know C (nothing is explained, only presented), and is of no help to me, I need to know if there are any good cython tutorials aimed at python programmers, or if I'm gonna have to learn C before I learn Cython.
bear in mind I'm a competent python programmer, i would much rather learn cython from the perspective of the language I'm already good at, rather than learn a whole new language in order to learn cython.
1) PLEASE don't recommend psyco
edit: ANY information that will help understand the official cython docs is useful information | 0 | python,c,cython | 2009-10-17T12:33:00.000 | 0 | 1,582,105 | About all the C that you really need to know is:
C types are much faster than Python types (adding two C ints or doubles can be done in a single clock cycle) but less safe (they are not arbitrarily sized and may silently overflow).
C function (cdef) calls are much faster than Python (def) function calls (but are less flexible).
This will get you most of the way there. If you want to eke out that last 10-20% speedup for most applications, there's no getting around knowing C, and how modern processors work (pointers, cache, ...). | 0 | 8,650 | false | 0 | 1 | Noob-Ready Cython Tutorials | 2,582,450
4 | 7 | 1 | 1 | 20 | 0 | 0.028564 | 0 | I know a bunch of scripting languages, (python, ruby, lua, php) but I don't know any compiled languages like C/C++ , I wanted to try and speed up some python code using cython, which is essentially a python -> C compiler, aimed at creating C extensions for python. Basically you code in a stricter version of python which compiles into C -> native code.
here's the problem, I don't know C, yet the cython documentation is aimed at people who obviously already know C (nothing is explained, only presented), and is of no help to me, I need to know if there are any good cython tutorials aimed at python programmers, or if I'm gonna have to learn C before I learn Cython.
bear in mind I'm a competent python programmer, i would much rather learn cython from the perspective of the language I'm already good at, rather than learn a whole new language in order to learn cython.
1) PLEASE don't recommend psyco
edit: ANY information that will help understand the official cython docs is useful information | 0 | python,c,cython | 2009-10-17T12:33:00.000 | 0 | 1,582,105 | You can do a lot of very useful things with Cython if you can answer the following C quiz...
(1) What is a double? What is an int?
(2) What does the word "compile" mean?
(3) What is a header (.h) file?
To answer these questions you don't need to read a whole C book! ...maybe chapter 1.
Once you can pass that quiz, try again with the tutorial.
What I usually do is start with pure python code, and add Cython elements bit by bit. In that situation, you can learn the Cython features bit by bit. For example I don't understand C strings, because so far I have not tried to cythonize code that involves strings. When I do, I will first look up how strings work in C, and then second look up how strings work in Cython.
Again, once you've gotten started with Cython, you will now and then run into some complication that requires learning slightly more C. And of course the more C you know, the more dextrous you will be with taking full advantage of Cython, not to mention troubleshooting if something goes wrong. But that shouldn't make you reluctant to start! | 0 | 8,650 | false | 0 | 1 | Noob-Ready Cython Tutorials | 11,103,468 |
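The bit-by-bit workflow described above might start with something like the following — a sketch of a .pyx file (it needs the Cython compiler, so treat it as illustrative), where the only ideas beyond ordinary Python are cdef and the C types:

```cython
# sum_sq.pyx -- compile with cython; this is plain Python until you add types
def sum_of_squares(int n):
    # Declaring C types is the incremental step: i and total become raw
    # C variables, so the loop runs without Python object overhead.
    cdef long total = 0
    cdef int i
    for i in range(n):
        total += i * i
    return total
```

Start from the untyped Python version, profile, then add cdef declarations only to the hot loop — exactly the "add Cython elements bit by bit" approach.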
4 | 7 | 1 | 1 | 20 | 0 | 0.028564 | 0 | I know a bunch of scripting languages, (python, ruby, lua, php) but I don't know any compiled languages like C/C++ , I wanted to try and speed up some python code using cython, which is essentially a python -> C compiler, aimed at creating C extensions for python. Basically you code in a stricter version of python which compiles into C -> native code.
here's the problem, I don't know C, yet the cython documentation is aimed at people who obviously already know C (nothing is explained, only presented), and is of no help to me, I need to know if there are any good cython tutorials aimed at python programmers, or if I'm gonna have to learn C before I learn Cython.
bear in mind I'm a competent python programmer, i would much rather learn cython from the perspective of the language I'm already good at, rather than learn a whole new language in order to learn cython.
1) PLEASE don't recommend psyco
edit: ANY information that will help understand the official cython docs is useful information | 0 | python,c,cython | 2009-10-17T12:33:00.000 | 0 | 1,582,105 | Cython does support concurrency (you can use native POSIX threads with C, compiled into an extension module); you just need to be careful not to modify any Python objects while the GIL is released, and keep in mind the interpreter itself is not thread-safe. You can also use multiprocessing with Python to use more cores for parallelism, which can in turn use your compiled Cython extensions to speed things up even more. But all in all you definitely have to know the C programming model, static types, etc. | 0 | 8,650 | false | 0 | 1 | Noob-Ready Cython Tutorials | 10,643,399
3 | 4 | 0 | 3 | 4 | 0 | 0.148885 | 0 | A popular software development pattern seems to be:
Thrash out the logic and algorithms in Python.
Profile to find out where the slow bits are.
Replace those with C.
Ship code that is the best trade-off between high-level and speedy.
I say popular simply because I've seen people talk about it as being a great idea.
But are there any large projects that have actually used this method? Preferably Free software projects so I can have a look and see how they did it - and maybe learn some best practices. | 0 | python,c,refactoring,profiling | 2009-10-17T17:24:00.000 | 0 | 1,582,718 | There are lots of different ways that people approach development.
Sometimes people follow your three steps and discover that the slow bits are due to the external environment, therefore rewriting Python into C does not address the problem. That type of slowness can sometimes be solved on the system side, and sometimes it can be solved in Python by applying a different algorithm. For instance you can cache network responses so that you don't have to go to the network every time, or in SQL you can offload work into stored procedures which run on the server and reduce the size of the result set. Generally, when you do have something that needs to be rewritten in C, the first thing to do is to look for a pre-existing library and just create a Python wrapper, if one does not already exist. Lots of people have been down these paths before you.
Often step 1 is to thrash out the application architecture, suspect that there may be a performance issue in some area, then choose a C library (perhaps already wrapped for Python) and use that. Then step 2 simply confirms that there are no really big performance issues that need to be addressed.
I would say that it is better for a team with one or more experienced developers to attempt to predict performance bottlenecks and mitigate them with pre-existing modules right from the beginning. If you are a beginner with python, then your 3-step process is perfectly valid, i.e. get into building and testing code, knowing that there is a profiler and the possibility of fast C modules if you need it. And then there is psyco, and the various tools for freezing an application into a binary executable.
An alternative approach to this, if you know that you will need to use some C or C++ modules, is to start from scratch writing the application in C but embedding Python to do most of the work. This works well for experienced C or C++ developers because they have a rough idea of the type of code that is tedious to do in C. | 0 | 279 | false | 0 | 1 | Best practice for the Python-then-profile-then-C design pattern? | 1,582,864 |
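Step 2 of the pattern — finding the slow bits before touching C — needs nothing beyond the standard library. A minimal sketch with cProfile, where slow_sum is just a placeholder workload:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # placeholder for the "logic and algorithms" under test
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# print the hottest functions by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Only the functions at the top of this report are candidates for a C (or pre-wrapped library) rewrite; everything else stays in Python.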
3 | 4 | 0 | 0 | 4 | 0 | 0 | 0 | A popular software development pattern seems to be:
Thrash out the logic and algorithms in Python.
Profile to find out where the slow bits are.
Replace those with C.
Ship code that is the best trade-off between high-level and speedy.
I say popular simply because I've seen people talk about it as being a great idea.
But are there any large projects that have actually used this method? Preferably Free software projects so I can have a look and see how they did it - and maybe learn some best practices. | 0 | python,c,refactoring,profiling | 2009-10-17T17:24:00.000 | 0 | 1,582,718 | Step 3 is wrong. In the modern world, more than half the time "the slow bits" are I/O or network bound, or limited by some other resource outside the process. Rewriting them in anything is only going to introduce bugs. | 0 | 279 | false | 0 | 1 | Best practice for the Python-then-profile-then-C design pattern? | 1,582,784 |
3 | 4 | 0 | 2 | 4 | 0 | 0.099668 | 0 | A popular software development pattern seems to be:
Thrash out the logic and algorithms in Python.
Profile to find out where the slow bits are.
Replace those with C.
Ship code that is the best trade-off between high-level and speedy.
I say popular simply because I've seen people talk about it as being a great idea.
But are there any large projects that have actually used this method? Preferably Free software projects so I can have a look and see how they did it - and maybe learn some best practices. | 0 | python,c,refactoring,profiling | 2009-10-17T17:24:00.000 | 0 | 1,582,718 | I also thought that way when I started using Python
I've done step 3 twice (that I can recall) in 12 years. Not often enough to call it a design pattern. Usually it's enough to wrap an existing C library. Usually someone else has already written the wrapper. | 0 | 279 | false | 0 | 1 | Best practice for the Python-then-profile-then-C design pattern? | 1,583,268 |
1 | 3 | 0 | 1 | 5 | 1 | 0.066568 | 0 | We've got a python library that we're developing. During development, I'd like to use some parts of that library in testing the newer versions of it. That is, use the stable code in order to test the development code. Is there any way of doing this in python?
Edit: To be more specific, we've got a library (LibA) that has many useful things. Also, we've got a testing library that uses LibA in order to provide some testing facilities (LibT). We want to test LibA using LibT, but because LibT depends on LibA, we'd rather it to use a stable version of LibA, while testing LibT (because we will change LibT to work with newer LibA only once tests pass etc.). So, when running unit-tests, LibA-dev tests will use LibT code that depends on LibA-stable.
One idea we've come up with is calling the stable code using RPyC on a different process, but it's tricky to implement in an air-tight way (making sure it dies properly etc, and allowing multiple instances to execute at the same time on the same computer etc.).
Thanks | 0 | python,testing,dependencies,circular-dependency | 2009-10-19T09:42:00.000 | 0 | 1,587,776 | "We want to test LibA using LibT, but because LibT depends on LibA, we'd rather it to use a stable version of LibA, while testing LibT "
It doesn't make sense to use T + A to test A. What does make sense is the following.
LibA is really two things mashed together: A1 and A2.
T depends on A1.
What's really happening is that you're upgrading and testing A2, using T and A1.
If you decompose LibA into the parts that T requires and the other parts, you may be able to break this circular dependency. | 0 | 680 | false | 0 | 1 | Using different versions of a python library in the same process | 1,588,192 |
1 | 2 | 0 | 2 | 1 | 1 | 1.2 | 0 | I have a web server that is dynamically creating various reports in several formats (pdf and doc files). The files require a fair amount of CPU to generate, and it is fairly common to have situations where two people are creating the same report with the same input.
Inputs:
raw data input as a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words
the version of the report creation tool
When a user attempts to generate a report, I would like to check to see if a file already exists with the given input, and if so return a link to the file. If the file doesn't already exist, then I would like to generate it as needed.
What solutions are already out there? I've cached simple http requests before, but the keys were extremely simple (usually database id's)
If I have to do this myself, what is the best way. The input can be several hundred words, and I was wondering how I should go about transforming the strings into keys sent to the cache.
//entire input, uses too much memory, one to one mapping
cache['one two three four five six seven eight nine ten eleven...']
//short keys
cache['one two'] => 5 results, then I must narrow these down even more
Is this something that should be done in a database, or is it better done within the web app code (python in my case)
Thanks you everyone. | 0 | python | 2009-10-19T10:42:00.000 | 0 | 1,587,991 | This is what Apache is for.
Create a directory that will have the reports.
Configure Apache to serve files from that directory.
If the report exists, redirect to a URL that Apache will serve.
Otherwise, the report doesn't exist, so create it. Then redirect to a URL that Apache will serve.
There's no "hashing". You have a key ("a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words") and a value, which is a file. Don't waste time on a hash. You just have a long key.
You can compress this key somewhat by making a "slug" out of it: remove punctuation, replace spaces with _, that kind of thing.
You should create an internal surrogate key which is a simple integer.
You're simply translating a long key to a "report" which either exists as a file or will be created as a file. | 0 | 141 | true | 0 | 1 | Caching system for dynamically created files? | 1,588,007 |
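The slug-plus-surrogate-key idea from this answer can be sketched as follows. The directory layout and helper names are assumptions for illustration, not part of the answer:

```python
import re

_registry = {}   # slug -> internal surrogate integer id
_next_id = 1

def slugify(raw_input):
    # remove punctuation, collapse whitespace runs to underscores
    cleaned = re.sub(r"[^\w\s]", "", raw_input.lower())
    return re.sub(r"\s+", "_", cleaned.strip())

def report_id(raw_input):
    # translate the long key into a simple internal integer
    global _next_id
    slug = slugify(raw_input)
    if slug not in _registry:
        _registry[slug] = _next_id
        _next_id += 1
    return _registry[slug]

def report_path(raw_input):
    # Apache serves this directory; generate the file here if missing
    return "/var/reports/%d.pdf" % report_id(raw_input)
```

Identical inputs map to the same slug, hence the same surrogate id and file, so the second user simply gets redirected to the already-generated report.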
2 | 3 | 0 | 1 | 3 | 1 | 0.066568 | 0 | Right now I'm running 50 PHP (in CLI mode) individual workers (processes) per machine that are waiting to receive their workload (job). For example, the job of resizing an image. In workload they receive the image (binary data) and the desired size. The worker does its work and returns the resized image back. Then it waits for more jobs (it loops in a smart way). I'm presuming that I have the same executable, libraries and classes loaded and instantiated 50 times. Am I correct? Because this does not sound very effective.
What I'd like to have now is one process that handles all this work and being able to use all available CPU cores while having everything loaded only once (to be more efficient). I presume a new thread would be started for each job and after it finishes, the thread would stop. More jobs would be accepted if there are less than 50 threads doing the work. If all 50 threads are busy, no additional jobs are accepted.
I am using a lot of libraries (for Memcached, Redis, MogileFS, ...) to have access to all the various components that the system uses and Python is pretty much the only language apart from PHP that has support for all of them.
Can Python do what I want and will it be faster and more efficient that the current PHP solution? | 0 | php,python,multithreading | 2009-10-19T22:47:00.000 | 0 | 1,591,555 | If you are on a sane operating system then shared libraries should only be loaded once and shared among all processes using them. Memory for data structures and connection handles will obviously be duplicated, but the overhead of stopping and starting the systems may be greater than keeping things up while idle. If you are using something like gearman it might make sense to let several workers stay up even if idle and then have a persistent monitoring process that will start new workers if all the current workers are busy up until a threshold such as the number of available CPUs. That process could then kill workers in a LIFO manner after they have been idle for some period of time. | 0 | 958 | false | 0 | 1 | From PHP workers to Python threads | 1,591,593 |
2 | 3 | 0 | 4 | 3 | 1 | 1.2 | 0 | Right now I'm running 50 PHP (in CLI mode) individual workers (processes) per machine that are waiting to receive their workload (job). For example, the job of resizing an image. In workload they receive the image (binary data) and the desired size. The worker does it's work and returns the resized image back. Then it waits for more jobs (it loops in a smart way). I'm presuming that I have the same executable, libraries and classes loaded and instantiated 50 times. Am I correct? Because this does not sound very effective.
What I'd like to have now is one process that handles all this work and being able to use all available CPU cores while having everything loaded only once (to be more efficient). I presume a new thread would be started for each job and after it finishes, the thread would stop. More jobs would be accepted if there are less than 50 threads doing the work. If all 50 threads are busy, no additional jobs are accepted.
I am using a lot of libraries (for Memcached, Redis, MogileFS, ...) to have access to all the various components that the system uses and Python is pretty much the only language apart from PHP that has support for all of them.
Can Python do what I want and will it be faster and more efficient that the current PHP solution? | 0 | php,python,multithreading | 2009-10-19T22:47:00.000 | 0 | 1,591,555 | Most probably - yes. But don't assume you have to do multithreading. Have a look at the multiprocessing module. It already has an implementation of a Pool included, which is what you could use. And it basically solves the GIL problem (multithreading can run only 1 "standard python code" at any time - that's a very simplified explanation).
It will still fork a process per job, but in a different way than starting it all over again. All the initialisations done and libraries loaded before entering the worker process will be inherited in a copy-on-write way. You won't do more initialisations than necessary and you will not waste memory for the same library/class if you didn't actually make it different from the pre-pool state.
So yes - looking only at this part, python will be wasting less resources and will use a "nicer" worker-pool model. Whether it will really be faster / less CPU-abusing, is hard to tell without testing, or at least looking at the code. Try it yourself.
Added: If you're worried about memory usage, python may also help you a bit, since it has a "proper" garbage collector, while in php GC is a not a priority and not that good (and for a good reason too). | 0 | 958 | true | 0 | 1 | From PHP workers to Python threads | 1,591,616 |
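A minimal sketch of the Pool model described in this answer. The resize job is simulated (it just scales dimensions), since the real work would call an imaging library; on Linux the pool forks, so everything imported above the pool is loaded once and shared copy-on-write:

```python
from multiprocessing import Pool

def resize(job):
    # stand-in for the CPU-heavy image resize: scale (width, height)
    width, height, factor = job
    return (int(width * factor), int(height * factor))

def run_pool(jobs, workers=4):
    # map distributes jobs across worker processes and preserves order
    with Pool(processes=workers) as pool:
        return pool.map(resize, jobs)
```

A job queue fed by the web tier would replace the literal list here, but the worker-pool shape is the same.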
1 | 12 | 0 | 12 | 105 | 1 | 1 | 0 | Usually I use the shell command time. My purpose is to test how much time and memory the script will use on small, medium, large, and very large data sets.
Any tools for Linux or just Python to do this? | 0 | python,unix,shell,benchmarking | 2009-10-20T07:40:00.000 | 1 | 1,593,019 | I usually do a quick time ./script.py to see how long it takes. That does not show you the memory though, at least not as a default. You can use /usr/bin/time -v ./script.py to get a lot of information, including memory usage. | 0 | 101,454 | false | 0 | 1 | Is there any simple way to benchmark Python script? | 5,544,739 |
2 | 5 | 0 | 1 | 1 | 0 | 0.039979 | 0 | I'm using a config file to get the info for my database. It always gets the hostname and then figures out what database options to use from this config file. I want to be able to tell if I'm inside a unittest here and use the in memory sqlite database instead. Is there a way to tell at that point whether I'm inside a unittest, or will I have to find a different way? | 0 | python,unit-testing,sqlite | 2009-10-21T14:43:00.000 | 0 | 1,601,308 | Use some sort of database configuration and configure which database to use, and configure the in-memory database during unit tests. | 0 | 108 | false | 0 | 1 | Is there a way to tell whether a function is getting executed in a unittest? | 1,601,338 |
2 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | I'm using a config file to get the info for my database. It always gets the hostname and then figures out what database options to use from this config file. I want to be able to tell if I'm inside a unittest here and use the in memory sqlite database instead. Is there a way to tell at that point whether I'm inside a unittest, or will I have to find a different way? | 0 | python,unit-testing,sqlite | 2009-10-21T14:43:00.000 | 0 | 1,601,308 | This is kind of brute force but it works. Have an environmental variable UNIT_TEST that your code checks, and set it inside your unit test driver. | 0 | 108 | false | 0 | 1 | Is there a way to tell whether a function is getting executed in a unittest? | 1,601,336 |
1 | 2 | 0 | -1 | 2 | 0 | -0.099668 | 0 | I have a website that right now, runs by creating static html pages from a cron job that runs nightly.
I'd like to add some search and filtering features using a CGI type script, but my script will have enough of a startup time (maybe a few seconds?) that I'd like it to stay resident and serve multiple requests.
This is a side-project I'm doing for fun, and it's not going to be super complex. I don't mind using something like Pylons, but I don't feel like I need or want an ORM layer.
What would be a reasonable approach here?
EDIT: I wanted to point out that for the load I'm expecting and processing I need to do on a request, I'm confident that a single python script in a single process could handle all requests without any slowdowns, especially since my dataset would be memory-resident. | 0 | python,frameworks,cgi,pylons | 2009-10-21T18:03:00.000 | 0 | 1,602,516 | maybe you should direct your search towards inter process commmunication and make a search process that returns the results to the web server. This search process will be running all the time assuming you have your own server. | 0 | 602 | false | 0 | 1 | I want to create a "CGI script" in python that stays resident in memory and services multiple requests | 1,603,021 |
4 | 16 | 0 | 11 | 213 | 0 | 1 | 0 | I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into daemon or service in Linux. Would I also need a loop that never ends in the program, or can it be done by just having the code re executed multiple times? | 0 | python,linux,scripting,daemons | 2009-10-21T19:36:00.000 | 1 | 1,603,109 | how about using $nohup command on linux?
I use it for running my commands on my Bluehost server.
Please advice if I am wrong. | 0 | 365,270 | false | 0 | 1 | How to make a Python script run like a service or daemon in Linux | 8,956,634 |
4 | 16 | 0 | 7 | 213 | 0 | 1 | 0 | I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into a daemon or service in Linux? Would I also need a loop that never ends in the program, or can it be done by just having the code re-executed multiple times? | 0 | python,linux,scripting,daemons | 2009-10-21T19:36:00.000 | 1 | 1,603,109 | If you are using a terminal (ssh or similar) and you want to keep a long-running script working after you log out of the terminal, you can try screen:
Install screen: apt-get install screen
Create a virtual terminal (named abc): screen -dmS abc
Now connect to abc: screen -r abc
From there, run the Python script: python keep_sending_mails.py
You can now close your terminal, and the Python script will keep running rather than being shut down, because keep_sending_mails.py is a child process of the virtual screen session rather than of the terminal (ssh).
If you want to go back and check your script's running status, you can use screen -r abc again | 0 | 365,270 | false | 0 | 1 | How to make a Python script run like a service or daemon in Linux | 35,008,431
4 | 16 | 0 | 1 | 213 | 0 | 0.012499 | 0 | I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into a daemon or service in Linux? Would I also need a loop that never ends in the program, or can it be done by just having the code re-executed multiple times? | 0 | python,linux,scripting,daemons | 2009-10-21T19:36:00.000 | 1 | 1,603,109 | Use whatever service manager your system offers - for example under Ubuntu use upstart. This will handle all the details for you such as start on boot, restart on crash, etc. | 0 | 365,270 | false | 0 | 1 | How to make a Python script run like a service or daemon in Linux | 20,908,406
4 | 16 | 0 | 12 | 213 | 0 | 1 | 0 | I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into daemon or service in Linux. Would I also need a loop that never ends in the program, or can it be done by just having the code re executed multiple times? | 0 | python,linux,scripting,daemons | 2009-10-21T19:36:00.000 | 1 | 1,603,109 | cron is clearly a great choice for many purposes. However it doesn't create a service or daemon as you requested in the OP. cron just runs jobs periodically (meaning the job starts and stops), and no more often than once / minute. There are issues with cron -- for example, if a prior instance of your script is still running the next time the cron schedule comes around and launches a new instance, is that OK? cron doesn't handle dependencies; it just tries to start a job when the schedule says to.
If you find a situation where you truly need a daemon (a process that never stops running), take a look at supervisord. It provides a simple way to wrap a normal, non-daemonized script or program and make it operate like a daemon. This is a much better way than creating a native Python daemon. | 0 | 365,270 | false | 0 | 1 | How to make a Python script run like a service or daemon in Linux | 19,515,492
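For reference, wrapping the mail-checking script under supervisord takes only a small program section; the program name, paths, and log locations below are invented for illustration:

```ini
[program:mailchecker]
command=/usr/bin/python /opt/app/check_mail.py
autostart=true          ; start when supervisord starts
autorestart=true        ; restart the process if it crashes
stdout_logfile=/var/log/mailchecker.out.log
stderr_logfile=/var/log/mailchecker.err.log
```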
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | I've been using Python's built-in cProfile tool with some pretty good success. But I'd like to be able to access more information such as how long I'm waiting for I/O (and what kind of I/O I'm waiting on) or how many cache misses I have. Are there any Linux tools to help with this beyond your basic time command? | 0 | python,linux,profiling | 2009-10-22T14:29:00.000 | 1 | 1,607,641 | I'm not sure if python will provide the low level information you are looking for. You might want to look at oprofile and latencytop though. | 0 | 1,160 | false | 0 | 1 | What profiling tools exist for Python on Linux beyond the ones included in the standard library? | 1,608,157 |
1 | 1 | 0 | 5 | 2 | 0 | 1.2 | 0 | I would like to redirect stderr and stdout to files when run inside of pythonw. How can I determine whether a script is running in pythonw or in python? | 0 | python,pythonw | 2009-10-23T05:31:00.000 | 1 | 1,611,543 | sys.executable -- "A string giving the name of the executable binary for the Python interpreter, on systems where this makes sense." | 0 | 1,325 | true | 0 | 1 | Determine if a script is running in pythonw? | 1,611,558 |
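A minimal sketch of that check (the log file names in the redirect branch are illustrative):

```python
import os
import sys

# pythonw.exe is the console-less interpreter on Windows; its name shows
# up in sys.executable, so checking the basename detects it.
def running_under_pythonw():
    return os.path.basename(sys.executable).lower().startswith("pythonw")

if running_under_pythonw():
    # No console exists, so send stdout/stderr to log files instead.
    sys.stdout = open("stdout.log", "w")
    sys.stderr = open("stderr.log", "w")
```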
1 | 4 | 1 | 4 | 18 | 0 | 0.197375 | 0 | Does anyone know of an automated GUI testing package that works with PyQT besides Squish? Nothing against Squish, I am just looking for other packages. It would be cool if there were an open source package. I am doing my testing under Linux. | 0 | python,testing,pyqt | 2009-10-23T22:19:00.000 | 0 | 1,616,228 | It looks like PyQT4 includes a QtTest object that can be used for unit testing. | 0 | 3,768 | false | 0 | 1 | PyQT GUI Testing | 1,829,332
1 | 1 | 1 | 4 | 2 | 1 | 1.2 | 0 | I have a C++ app that uses Python to load some scripts. It calls some functions in the scripts, and everything works fine until the app exits and calls Py_Finalize. Then it displays the following: (GetName is a function in one of the scripts)
Exception AttributeError: "'module' object has no attribute 'GetName'" in 'garbage collection' ignored
Fatal Python error: unexpected exception during garbage collection
Then the app crashes.
I'm using Python 3.1 on Windows. Any advice would be appreciated. | 0 | python,exception,attributes | 2009-10-25T03:24:00.000 | 0 | 1,619,908 | From the docs to Py_Finalize():
Bugs and caveats: The destruction of
modules and objects in modules is done
in random order; this may cause
destructors (__del__() methods) to
fail when they depend on other objects
(even functions) or modules.
Dynamically loaded extension modules
loaded by Python are not unloaded.
Small amounts of memory allocated by
the Python interpreter may not be
freed (if you find a leak, please
report it). Memory tied up in circular
references between objects is not
freed. Some memory allocated by
extension modules may not be freed.
Some extensions may not work properly
if their initialization routine is
called more than once; this can happen
if an application calls
Py_Initialize() and Py_Finalize() more
than once.
Most likely a __del__ contains a call to <somemodule>.GetName(), but that module has already been destroyed by the time __del__ is called. | 0 | 1,194 | true | 0 | 1 | What is causing this Python exception? | 1,619,944 |
1 | 5 | 0 | 0 | 3 | 0 | 0 | 0 | I have a Python script that outputs something every second or two, but takes a long while to finish completely. I want to set up a website such that someone can directly invoke the script, and the output is sent to the screen while the script is running.
I don't want the user to wait until the script finishes completely, because then all the output is displayed at once. I also tried that, and the connection always times out.
I don't know what this process is called, what terms I'm looking for, and what I need to use. CGI? Ajax? Need some serious guidance here, thanks!
If it matters, I plan to use Nginx as the webserver. | 0 | python,ajax,cgi,web-applications,fastcgi | 2009-10-25T17:10:00.000 | 0 | 1,621,430 | As suggested by a few of the others you can use a keep alive connection and instead of "return" statements use yield statements and instead of "print" statements also use yield statements. This will basically show everything that happens in the python script onto the website page.
After extensive searching and testing I would advise nginx as a reverse proxy with gevent & bottle as the backend which allows for peace of mind as nginx will not serve up the python source file ever. | 0 | 5,553 | false | 1 | 1 | How do I display real-time python script output on a website? | 11,067,328 |
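The yield-based idea can be sketched as a bare WSGI app (which gevent, Bottle, or any WSGI server can host); the chunk contents and sleep here are placeholders for the real script's work:

```python
import time

def generate():
    # Each yield is handed to the client as it is produced, instead of
    # buffering the whole response until the script finishes.
    for i in range(3):
        yield ("step %d done\n" % i).encode("utf-8")
        time.sleep(0.1)  # stand-in for the slow script's real work

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return generate()
```

Note that with nginx in front, proxy buffering usually has to be disabled (`proxy_buffering off;`) or the chunks will still arrive all at once.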
1 | 3 | 0 | 4 | 5 | 1 | 0.26052 | 0 | I have an email that I'm reading with the Python email lib that I need to modify the attachments of. The email Message class has the "attach" method, but does not have anything like "detach". How can I remove an attachment from a multipart message? If possible, I want to do this without recreating the message from scratch.
Essentially I want to:
Load the email
Remove the mime attachments
Add a new attachment | 0 | python,email,mime | 2009-10-26T18:14:00.000 | 0 | 1,626,403 | The way I've figured out to do it is:
Set the payload to an empty list with set_payload
Create the payload, and attach to the message. | 0 | 6,827 | false | 0 | 1 | Python email lib - How to remove attachment from existing message? | 1,626,650 |
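A short sketch of those two steps with the standard email package (the part contents are illustrative):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# A sample multipart message: a body plus an attachment to be replaced.
msg = MIMEMultipart()
msg.attach(MIMEText("the body"))
msg.attach(MIMEText("old attachment"))

# Step 1: reset the payload to an empty list, dropping all parts.
msg.set_payload([])

# Step 2: re-attach the parts you want to keep, plus the new attachment.
msg.attach(MIMEText("the body"))
msg.attach(MIMEText("new attachment"))
```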
3 | 5 | 1 | 5 | 1 | 0 | 0.197375 | 0 | I'm now thinking, is it possible to integrate Python, Perl and C/C++ and also doing a GUI application with this very nice mix of languages? | 0 | c++,python,c,perl,integration | 2009-10-27T00:04:00.000 | 0 | 1,628,001 | Anything is "possible", but whether it is necessary or beneficial is debatable and highly depends on your requirements. Don't mix if you don't need to. Use the language that best fits the domain or target requirements.
I can't think of a scenario where one needs to mix Python and Perl as their domain is largely the same.
Using C/C++ can be beneficial in cases where you need hardcore system integration or specialized machine dependent services. Or when you need to extend Python or Perl itself (both are written in C/C++).
EDIT: if you want to do a GUI application, it is probably easier to choose a language that fits the OS you want your GUI to run in, e.g. (but not limited to) C# for Windows, Objective-C for iPhone or Mac, Qt + C++ for Linux etc. | 0 | 735 | false | 0 | 1 | Python, Perl And C/C++ With GUI | 1,628,011
3 | 5 | 1 | 1 | 1 | 0 | 0.039979 | 0 | I'm now thinking, is it possible to integrate Python, Perl and C/C++ and also doing a GUI application with this very nice mix of languages? | 0 | c++,python,c,perl,integration | 2009-10-27T00:04:00.000 | 0 | 1,628,001 | Everything is possible - but why add two and a half more levels of complexity? | 0 | 735 | false | 0 | 1 | Python, Perl And C/C++ With GUI | 1,628,012 |
3 | 5 | 1 | 1 | 1 | 0 | 0.039979 | 0 | I'm now thinking, is it possible to integrate Python, Perl and C/C++ and also doing a GUI application with this very nice mix of languages? | 0 | c++,python,c,perl,integration | 2009-10-27T00:04:00.000 | 0 | 1,628,001 | Python & Perl? together?
I can only think of an editor. | 0 | 735 | false | 0 | 1 | Python, Perl And C/C++ With GUI | 1,628,013 |
2 | 3 | 0 | 11 | 10 | 0 | 1.2 | 0 | Is there any reason to favor Python or Java over the other for developing on Android phones, other than the usual Python v. Java issues? | 0 | java,python,android | 2009-10-28T23:30:00.000 | 0 | 1,640,806 | Java is "more native" on the Android platform; Python is coming after and striving to get parity but not quite there yet AFAIK. Roughly the reverse situation wrt App Engine, where Python's been around for a year longer than Java and so is still more mature and complete (even though Java's catching up).
So, in any situation where you'd be at all undecided between Java and Python if the deployment was due to happen on some general purpose platform such as Linux, I think the maturity and completeness arguments could sway you towards Python for deployment on App Engine, and towards Java for deployment on Android. | 0 | 4,643 | true | 1 | 1 | Android: Java v. Python | 1,641,125 |
2 | 3 | 0 | 2 | 10 | 0 | 0.132549 | 0 | Is there any reason to favor Python or Java over the other for developing on Android phones, other than the usual Python v. Java issues? | 0 | java,python,android | 2009-10-28T23:30:00.000 | 0 | 1,640,806 | On the mobile platform performance and memory usage are much more critical than desktop or server. The JVM that runs on Android is highly optimized for the mobile platform. Based on the links I have seen about Python on Android none of them seem to have an optimized VM for mobile platform. | 0 | 4,643 | false | 1 | 1 | Android: Java v. Python | 1,641,263 |
1 | 4 | 0 | 1 | 1 | 0 | 0.049958 | 0 | I am looking for a simple Python webserver that is easy to kill from within code. Right now, I'm playing with Bottle, but I can't find any way at all to kill it in code. If you know how to kill Bottle (in code, no Ctrl+C) that would be super, but I'll take anything that's Python, simple, and killable. | 0 | python | 2009-10-29T12:26:00.000 | 1 | 1,643,362 | Raise an exception and handle it in main, or use sys.exit | 0 | 410 | false | 0 | 1 | Killing Python webservers | 1,643,387
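As a concrete standard-library sketch (not Bottle itself): the wsgiref server can be run in a thread and stopped from code via its shutdown() method:

```python
import threading
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

server = make_server("127.0.0.1", 0, app)  # port 0: pick any free port
thread = threading.Thread(target=server.serve_forever)
thread.start()

# ... serve requests for a while ...

server.shutdown()       # stops serve_forever() cleanly, no Ctrl+C needed
thread.join()
server.server_close()
```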
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I have mod_python installed on a debian box with python 2.4 and 2.6 installed. I want mod_python to use 2.6 but it is finding 2.4. How can I set it to use the other version? | 0 | python,apache,mod-python | 2009-10-29T19:29:00.000 | 0 | 1,646,017 | The version of Python used is set when mod_python is compiled. If you need to use a version other than the default, you'll need to recompile it, or you may be able to find a different package from the repository. | 0 | 145 | true | 0 | 1 | Setting mod_python's interpreter | 1,646,473
1 | 3 | 0 | 2 | 2 | 1 | 1.2 | 0 | I'm using Python (under Google App Engine), and I have some RSA private keys that I need to export in PKCS#12 format. Is there anything out there that will assist me with this? I'm using PyCrypto/KeyCzar, and I've figured out how to import/export RSA keys in PKCS8 format, but I really need it in PKCS12.
Can anybody point me in the right direction? If it helps, the reason I need them in PKCS12 format is so that I can import them on the iPhone, which seems to only allow key-import in that format. | 0 | python,google-app-engine,cryptography,rsa,pkcs#12 | 2009-10-30T01:35:00.000 | 0 | 1,647,568 | If you can handle some ASN.1 generation, you can relatively easily convert a PKCS#8-file into a PKCS#12-file. A PKCS#12-file is basically a wrapper around a PKCS#8 and a certificate, so to make a PKCS#12-file, you just have to add some additional data around your PKCS#8-file and your certificate.
Usually a PKCS#12-file will contain the certificate(s) in an encrypted structure, but all compliant parsers should be able to read it from an unencrypted structure. Also, PKCS#12-files will usually contain a MacData-structure for integrity-check, but this is optional and a compliant parser should work fine without it. | 0 | 2,305 | true | 0 | 1 | How to encode an RSA key using PKCS12 in Python? | 1,648,617 |
1 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 | I am looking for a solution to programmatically return all available serial ports with python.
At the moment I am entering ls /dev/tty.* or ls /dev/cu.* into the terminal to list ports and hardcoding them into the pyserial class. | 0 | python,macos,serial-port | 2009-11-02T03:16:00.000 | 1 | 1,659,283 | What about just doing the os.listdir / glob equivalent of ls to perform the equivalent of that ls? Of course it's not going to be the case that some usable device is connected to each such special file (but, that holds for ls as well;-), but for "finding all serial ports", as you ask in your Q's title, I'm not sure how else you might proceed. | 0 | 2,216 | false | 0 | 1 | MacPython: programmatically finding all serial ports | 1,659,294 |
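A sketch of that glob approach (it simply returns an empty list on machines without such device files):

```python
import glob

def mac_serial_ports():
    # Python equivalent of `ls /dev/tty.*` and `ls /dev/cu.*`,
    # so the ports no longer need to be hardcoded for pyserial.
    return sorted(glob.glob("/dev/tty.*") + glob.glob("/dev/cu.*"))
```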
6 | 8 | 0 | 0 | 3 | 1 | 0 | 0 | If you have to choose a scripting language, why would you choose Python? | 0 | python,scripting | 2009-11-02T05:33:00.000 | 0 | 1,659,559 | I would try a number of "scripting" languages (as well as some languages with good static type inference), and then select the language(s) that best fit the problem.
This may be for a number of reasons including, but not limited to: Runtime targets and performance (as dictated by functional requirements), library support (don't re-invent the wheel all the time), existing tool support, existing integration support (if X supports Y, is it real feasible to get X to support Z just to use Z?), and most important to a subjective question: personal choice and zealot fanaticism :)
The term "scripting language" is absolutely horrid -- unless perhaps you really DO mean SH or MIRC "script". The phrase "dynamically typed language" is a much better qualifier. | 0 | 11,442 | false | 0 | 1 | What makes Python a good scripting language? | 1,659,616 |
6 | 8 | 0 | 2 | 3 | 1 | 0.049958 | 0 | If you have to choose a scripting language, why would you choose Python? | 0 | python,scripting | 2009-11-02T05:33:00.000 | 0 | 1,659,559 | I think it depends on your definition of scripting language. There are (at least) two camps. One is that scripting language should be embeddable, so the core should be small (like Lua or Tcl). The second camp is scripting for system administration, and Perl is definitely in this camp.
Python is a general programming language, not particularly in either camp (but also not unsuitable), probably most useful for writing small or medium sized programs. | 0 | 11,442 | false | 0 | 1 | What makes Python a good scripting language? | 1,659,629 |
6 | 8 | 0 | 0 | 3 | 1 | 0 | 0 | If you have to choose a scripting language, why would you choose Python? | 0 | python,scripting | 2009-11-02T05:33:00.000 | 0 | 1,659,559 | I haven't programmed in python before but my guess would be the libraries available and the size of the userbase. | 0 | 11,442 | false | 0 | 1 | What makes Python a good scripting language? | 1,659,630 |
6 | 8 | 0 | 0 | 3 | 1 | 0 | 0 | If you have to choose a scripting language, why would you choose Python? | 0 | python,scripting | 2009-11-02T05:33:00.000 | 0 | 1,659,559 | It's very intuitive, has a ton of libraries, helps you whip up a script VERY FAST. You can use it for small projects or big projects and can compile into an EXE for windows, an APP for mac or into a cross platform application.
It has possibly the cleanest syntax of any language I have seen to date and can do everything from adding numbers to system calls to reading various different types of files. Hell, you can even do web programming with it.
I see no reason why I would advise against python... ever. | 0 | 11,442 | false | 0 | 1 | What makes Python a good scripting language? | 1,698,597 |
6 | 8 | 0 | 11 | 3 | 1 | 1 | 0 | If you have to choose a scripting language, why would you choose Python? | 0 | python,scripting | 2009-11-02T05:33:00.000 | 0 | 1,659,559 | Depends on what you mean by "scripting language". If you mean I'm going to be extensively typing it in at a shell prompt, I want the mysterious but utter conciseness of Bash or zsh; if you mean I'm going to have to embed it in 2000 apps in each of which it will typically be used for "customization" scripts of 2 or 3 lines, I probably want the minimalist simplicity of Lua (I may not like programming in Lua all that much, but 2-3 lines is indeed "scripting" more than "programming", and the near-zero cost of embedding Lua in anything will then dominate).
Python, like Perl or Ruby, is mostly used to write MUCH more substantial "scripts" (impossible to distinguish from "programs", except maybe by total bigots;-) -- in which case, very different considerations apply wrt "real" scripting languages such as bash or zsh, or lua or tcl for a different definition of "scripting language". Basically, if what you want is a dynamically (but strongly) typed language, with full capacity to scale up to very large software systems, and yet quite good at "playing with others"... then you surely have a particularly weird definition of "scripting", my friend!-) But that's the arena where Python, Ruby and Perl mostly play -- and where one could debate one against the other (but any one of them would crush any other popular language I know -- yeah, I've known and loved and used rexx, scheme, Smalltalk, and many many others, but none could hold a candle to the Big Three I just mentioned in this arena!-).
But unless you clarify your terminology, "scripting language" remains an empty, meaning-free sound, and any debate surrounding it utterly useless and void of significance. | 0 | 11,442 | false | 0 | 1 | What makes Python a good scripting language? | 1,659,617 |
6 | 8 | 0 | 20 | 3 | 1 | 1.2 | 0 | If you have to choose a scripting language, why would you choose Python? | 0 | python,scripting | 2009-11-02T05:33:00.000 | 0 | 1,659,559 | Because it has clean and agile syntax, it's fast, well documented, well connected to C, has a lot of libraries, it's intuitive, and it's not perl. | 0 | 11,442 | true | 0 | 1 | What makes Python a good scripting language? | 1,659,564 |
4 | 7 | 0 | 4 | 1 | 1 | 0.113791 | 0 | What is that needs to be coded in Python instead of C/C++ etc? I know its advantages etc. I want to know why exactly makes Python The language for people in this industry? | 0 | python,oop,animation | 2009-11-02T05:55:00.000 | 0 | 1,659,620 | A few other points I've not seen in the existing answers:
- it's free
- it's fast [enough]
- it runs on every platform I know of (AIX, HPUX, Linux, Mac OS X, Windows..)
- quick to learn
- large, powerful libraries
  - numeric
  - graphical
  - etc.
- simple, consistent syntax
- the existing user-base is large
- because it's easy-to-learn, you don't have to be a "programmer" to use it | 0 | 1,359 | false | 0 | 1 | Why is Python a favourite among people working in animation industry? | 1,659,774
4 | 7 | 0 | 3 | 1 | 1 | 0.085505 | 0 | What is that needs to be coded in Python instead of C/C++ etc? I know its advantages etc. I want to know why exactly makes Python The language for people in this industry? | 0 | python,oop,animation | 2009-11-02T05:55:00.000 | 0 | 1,659,620 | Because Python is what Basic should have been ;)
It's a language designed from the beginning to be used by non-programmers, but with the power to be truly used as a general purpose programming language. | 0 | 1,359 | false | 0 | 1 | Why is Python a favourite among people working in animation industry? | 1,659,654
4 | 7 | 0 | 2 | 1 | 1 | 0.057081 | 0 | What is that needs to be coded in Python instead of C/C++ etc? I know its advantages etc. I want to know why exactly makes Python The language for people in this industry? | 0 | python,oop,animation | 2009-11-02T05:55:00.000 | 0 | 1,659,620 | Aside from the fact that it's already in use, the main advantage is that it's quick to use. Java, C, and friends almost all require tedious coding that merely restates what you already know. Python is designed to be quick to write, quick to modify, and as general as possible.
As an example, functions in java require you to declare the type of each of the input variables. In python, as long as you pass input variables that work with the function, it's valid. This makes your code extremely flexible. You don't waste time declaring variables as one type or another, you just use them.
Some people will tell you that java produces code that is "more correct", but in animation and graphics, producing code that works in as short a time as possible is usually the goal. | 0 | 1,359 | false | 0 | 1 | Why is Python a favourite among people working in animation industry? | 1,659,703 |
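A tiny illustration of that flexibility: one function with no declared parameter types handles any operands that support "+" (the function name is invented for the example):

```python
def combine(a, b):
    # No type declarations: anything supporting "+" works (duck typing).
    return a + b

print(combine(2, 3))            # 5
print(combine("ab", "cd"))      # abcd
print(combine([1], [2, 3]))     # [1, 2, 3]
```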
4 | 7 | 0 | 1 | 1 | 1 | 0.028564 | 0 | What is that needs to be coded in Python instead of C/C++ etc? I know its advantages etc. I want to know why exactly makes Python The language for people in this industry? | 0 | python,oop,animation | 2009-11-02T05:55:00.000 | 0 | 1,659,620 | My guess is that it is the tool for the job because it is easy to prototype extra features. | 0 | 1,359 | false | 0 | 1 | Why is Python a favourite among people working in animation industry? | 1,659,637 |
3 | 3 | 0 | 3 | 2 | 0 | 1.2 | 1 | I am interested in your opinions on unittesting code that uses Corba to communicate with a server.
Would you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. So you're basically stuck with "mock anything".
Thanks!
Note:
I believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example:
A web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) and finally creates the HTML code to be displayed.
And this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code.
I believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests).
It's just that this is quite a lot of "extra work" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind. | 0 | python,unit-testing,mocking,corba | 2009-11-02T08:38:00.000 | 0 | 1,660,049 | Don't try to unittest Corba. Assume that Corba works. Unittest your own code. This means:
Create a unit test which checks that you correctly set up Corba and that you can invoke a single method and read a property. If that works, all other methods and properties will work, too.
After that, test that all the exposed objects work correctly. You don't need Corba for this. | 0 | 625 | true | 0 | 1 | Unittesting Corba in Python | 1,660,185 |
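A minimal sketch of mocking the CORBA object with unittest.mock; render_page and the HTML format are invented for illustration, while getData comes from the question:

```python
from unittest import mock

def render_page(pagetable):
    # Code under test: turn server data into HTML. Nothing CORBA-specific
    # happens here, so it can be tested without a server.
    rows = pagetable.getData()
    return "<ul>" + "".join("<li>%s</li>" % r for r in rows) + "</ul>"

# A Mock stands in for the CORBA server_pagetable object, so only the
# formatting logic is exercised -- no network, no database.
pagetable = mock.Mock()
pagetable.getData.return_value = ["alice", "bob"]
assert render_page(pagetable) == "<ul><li>alice</li><li>bob</li></ul>"
```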
3 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 1 | I am interested in your opinions on unittesting code that uses Corba to communicate with a server.
Would you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. So you're basically stuck with "mock anything".
Thanks!
Note:
I believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example:
A web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) and finally creates the HTML code to be displayed.
And this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code.
I believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests).
It's just that this is quite a lot of "extra work" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind. | 0 | python,unit-testing,mocking,corba | 2009-11-02T08:38:00.000 | 0 | 1,660,049 | I would set up a test server, and do live tests on that. Unittesting can be tricky with network stuff, so it's best to keep it as real as possible. Any mocking would be done on the test server, for instance if you need to communicate to three different servers, it could be set up with three different IP addresses to play the role of all three servers. | 0 | 625 | false | 0 | 1 | Unittesting Corba in Python | 1,660,187 |
3 | 3 | 0 | 0 | 2 | 0 | 0 | 1 | I am interested in your opinions on unittesting code that uses Corba to communicate with a server.
Would you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. So you're basically stuck with "mock anything".
Thanks!
Note:
I believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example:
A web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) and finally creates the HTML code to be displayed.
And this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code.
I believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests).
It's just that this is quite a lot of "extra work" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind. | 0 | python,unit-testing,mocking,corba | 2009-11-02T08:38:00.000 | 0 | 1,660,049 | I have got similar work to tackle but I probably will not write a test for implementation of CORBA objects or more specifically COM objects (implementation of CORBA). I have to write tests for work that uses these structures as oppose to the structures themselves (although I could land myself in that role too if I ask too many questions). In the end of the day, unittest is integration on a smaller scale so whenever I write tests I am always thinking of input and outputs rather than actual structures. From the way you have written your problem my concentration would be on the data of server_pagetable.getData() and the output HTML without caring too much about what happens inbetween (because that is the code you are testing, you don't want to define the code in the test but ensure that output is correct). If you want to test individual functions inbetween then I would get mock data (essentially still data, so you can generate mock data rather than mock class if possible). Mocks are only used when you don't have parts of the full code and those functions needs some input from those parts of the code but as you are not interested in them or don't have them you simplify the interaction with them. This is just my opinion. | 0 | 625 | false | 0 | 1 | Unittesting Corba in Python | 51,438,774 |
2 | 4 | 0 | 3 | 6 | 1 | 1.2 | 0 | I was writing a script to inspect python's version on my system and I've noticed that python -V writes to the error stream, while python -h, for instance, uses the standard output. Is there a good reason for this behavior? | 0 | python | 2009-11-04T09:34:00.000 | 0 | 1,672,650 | The -h option also used to print to stderr because it is not part of the output of your program, i.e. the output is not produced by your Python script but by the Python interpreter itself.
As for why they changed the -h to use stdout? Try typing python -h with your terminal window set to the standard 24 lines. It scrolls off the screen.
Now most people would react by trying python -h |less but that only works if you send the output of -h to the stdout instead of stderr. So there was a good reason for making -h go to stdout, but no good reason for changing -V. | 0 | 340 | true | 0 | 1 | Why does python -V write to the error stream? | 1,675,051 |
2 | 4 | 0 | 2 | 6 | 1 | 0.099668 | 0 | I was writing a script to inspect python's version on my system and I've noticed that python -V writes to the error stream, while python -h, for instance, uses the standard output. Is there a good reason for this behavior? | 0 | python | 2009-11-04T09:34:00.000 | 0 | 1,672,650 | Why?
Because it's not the actual output of your actual script.
That's the long-standing, standard, common, typical, ordinary use for standard error: everything NOT output from your script. | 0 | 340 | false | 0 | 1 | Why does python -V write to the error stream? | 1,673,210 |
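The convention inside a script looks like this (the strings are illustrative): results on stdout, everything else on stderr, so users can redirect one stream without the other.

```python
import sys

def report(result):
    # Real output goes to stdout; status/diagnostics go to stderr, so
    # `script.py > results.txt` captures only the actual results.
    print(result)                      # program output
    print("done", file=sys.stderr)     # chatter, not output

report("42")
```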
4 | 7 | 0 | 0 | 4 | 0 | 0 | 1 | I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and compute some results (finds best path from A to B in a graph, graph is read-only), in typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). Current implementation is quite stupid - it's a simple php script+apache+mod_php+APC, every requests needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it.
I want a setup with N independent workers (X per server with Y servers), each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage queue of requests (with configurable timeout) and feed my workers with one request at a time.
how to approach this? can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? as you can see I'm a newbie in python, reverse-proxy, etc. I just need a starting point about architecture (and data flow)
btw. workers are using read-only data so there is no need to maintain locking and communication between them | 0 | python,nginx,load-balancing,wsgi,reverse-proxy | 2009-11-04T15:51:00.000 | 1 | 1,674,696 | Another option is a queue table in the database.
The worker processes run in a loop or off cron and poll the queue table for new jobs. | 0 | 2,509 | false | 0 | 1 | how to process long-running requests in python workers? | 1,718,183 |
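A sketch of that pattern with sqlite3 (table and column names are invented; a production setup would use the real database and stronger locking, e.g. SELECT ... FOR UPDATE):

```python
import sqlite3

def claim_next_job(conn):
    # Poll for the oldest pending job and mark it running, inside one
    # transaction (real deployments need stricter concurrency control).
    with conn:
        row = conn.execute(
            "SELECT id, payload FROM jobs "
            "WHERE status = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?",
                     (row[0],))
    return row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, "
             "status TEXT DEFAULT 'pending')")
conn.execute("INSERT INTO jobs (payload) VALUES ('send report')")
job = claim_next_job(conn)
```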
4 | 7 | 0 | 0 | 4 | 0 | 0 | 1 | I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and compute some results (finds best path from A to B in a graph, graph is read-only), in typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). Current implementation is quite stupid - it's a simple php script+apache+mod_php+APC, every requests needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it.
I want a setup with N independent workers (X per server with Y servers), each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage queue of requests (with configurable timeout) and feed my workers with one request at a time.
how to approach this? can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? as you can see I'm a newbie in python, reverse-proxy, etc. I just need a starting point about architecture (and data flow)
btw. workers are using read-only data so there is no need to maintain locking and communication between them | 0 | python,nginx,load-balancing,wsgi,reverse-proxy | 2009-11-04T15:51:00.000 | 1 | 1,674,696 | I think you can configure modwsgi/Apache so it will have several "hot" Python interpreters
in separate processes ready to go at all times, and also reuse them for new accesses (and spawn a new one if they are all busy). In this case you could load all the preprocessed data as module globals, and they would only get loaded once per process and get reused for each new access. In fact, I'm not sure this isn't the default configuration for mod_wsgi/Apache.
The main problem here is that you might end up consuming a lot of "core" memory (but that may not be a problem either). I think you can also configure mod_wsgi for single process/multiple thread -- but in that case you may only be using one CPU because of the Python Global Interpreter Lock (the infamous GIL), I think.
Don't be afraid to ask on the mod_wsgi mailing list -- they are very responsive and friendly. | 0 | 2,509 | false | 0 | 1 | how to process long-running requests in python workers? | 1,675,726
4 | 7 | 0 | 1 | 4 | 0 | 0.028564 | 1 | I have a Python (well, it's PHP now but we're rewriting) function that takes some parameters (A and B) and computes some results (finds the best path from A to B in a graph; the graph is read-only); in a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid: it's a simple PHP script + Apache + mod_php + APC, and every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path, and exit. I want to change that.
I want a setup with N independent workers (X per server with Y servers); each worker is a Python app running in a loop (get request -> process -> send reply -> get request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout), and feed my workers one request at a time.
How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI, or something else? HAProxy? As you can see, I'm a newbie with Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow).
BTW, the workers use read-only data, so there is no need to maintain locking or communication between them | 0 | python,nginx,load-balancing,wsgi,reverse-proxy | 2009-11-04T15:51:00.000 | 1 | 1,674,696 | The simplest solution in this case is to use the webserver to do all the heavy lifting. Why should you handle threads and/or processes when the webserver will do all that for you?
The standard arrangement in deployments of Python is:
The webserver starts a number of processes, each running a complete Python interpreter and loading all your data into memory.
HTTP request comes in and gets dispatched off to some process
Process does your calculation and returns the result directly to the webserver and user
When you need to change your code or the graph data, you restart the webserver and go back to step 1.
This is the architecture used by Django and other popular web frameworks. | 0 | 2,509 | false | 0 | 1 | how to process long-running requests in python workers? | 1,682,864
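A sketch of steps 1-3 above as a plain WSGI callable; `GRAPH` is a tiny stand-in for the ~12 MB data set from the question, loaded once at import time and reused across requests:

```python
from urllib.parse import parse_qs

# Loaded once per process at import time, reused for every request.
# (Stand-in for the large precomputed graph described in the question.)
GRAPH = {("A", "B"): ["A", "C", "B"]}

def app(environ, start_response):
    # Parse ?from=A&to=B from the request.
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    src, dst = qs["from"][0], qs["to"][0]
    path = GRAPH.get((src, dst), [])
    body = "->".join(path).encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

Any WSGI server (mod_wsgi, paster, etc.) can host this callable directly.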
4 | 7 | 0 | 0 | 4 | 0 | 0 | 1 | I have a Python (well, it's PHP now but we're rewriting) function that takes some parameters (A and B) and computes some results (finds the best path from A to B in a graph; the graph is read-only); in a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid: it's a simple PHP script + Apache + mod_php + APC, and every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path, and exit. I want to change that.
I want a setup with N independent workers (X per server with Y servers); each worker is a Python app running in a loop (get request -> process -> send reply -> get request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout), and feed my workers one request at a time.
How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI, or something else? HAProxy? As you can see, I'm a newbie with Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow).
BTW, the workers use read-only data, so there is no need to maintain locking or communication between them | 0 | python,nginx,load-balancing,wsgi,reverse-proxy | 2009-11-04T15:51:00.000 | 1 | 1,674,696 | You could use an nginx load balancer to proxy to a Python Paste "paster" server (which serves WSGI, for example Pylons), which launches each request as a separate thread anyway. | 0 | 2,509 | false | 0 | 1 | how to process long-running requests in python workers? | 1,676,102
3 | 4 | 0 | 1 | 4 | 0 | 0.049958 | 0 | I hear Python is very good for pentesting. It has got good modules for that. But it's not a good framework, like Metasploit. | 0 | python,penetration-tools | 2009-11-04T19:01:00.000 | 0 | 1,675,904 | Well, I think that C is more powerful than both languages and is better for pen-testing. | 0 | 7,782 | false | 0 | 1 | Is Python or Ruby good for penetration testing? | 2,305,866
3 | 4 | 0 | 1 | 4 | 0 | 0.049958 | 0 | I hear Python is very good for pentesting. It has got good modules for that. But it's not a good framework, like Metasploit. | 0 | python,penetration-tools | 2009-11-04T19:01:00.000 | 0 | 1,675,904 | But C isn't a scripting language, and there are many arguments that Python/Ruby are better for pen testing. For example, with C you can't automate as quickly as with Python/Ruby; Python/Ruby are high-level languages, and writing programs in them is a lot easier than in C. But if you want to get into pen-testing, you should learn Python or another scripting language, plus C/C++ or other languages like PHP, depending on what you are testing. You should know at least one scripting language; they make things a lot easier sometimes. | 0 | 7,782 | false | 0 | 1 | Is Python or Ruby good for penetration testing? | 19,335,182
3 | 4 | 0 | 4 | 4 | 0 | 1.2 | 0 | I hear Python is very good for pentesting. It has got good modules for that. But it's not a good framework, like Metasploit. | 0 | python,penetration-tools | 2009-11-04T19:01:00.000 | 0 | 1,675,904 | Any language that has good, easy string handling capabilities is a good match for penetration testing. This is why you see scripting languages as the most used languages in this sort of tasks.
To answer your question, they're just as good. | 0 | 7,782 | true | 0 | 1 | Is Python or Ruby good for penetration testing? | 1,675,945 |
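As a small illustration of that point about string handling, here is how compactly Python dissects a hypothetical captured packet header (the field layout is made up) with the stdlib struct module:

```python
import struct

# A made-up captured header: 2-byte source port, 2-byte destination
# port, 4-byte sequence number, all big-endian ("network order").
raw = struct.pack("!HHI", 4444, 80, 1337)

# One line to unpack, one f-string to summarize.
src_port, dst_port, seq = struct.unpack("!HHI", raw)
summary = f"{src_port} -> {dst_port} (seq={seq})"
```

The equivalent C would need explicit buffers, casts, and byte-order conversions.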
2 | 7 | 0 | 16 | 187 | 1 | 1 | 0 | How can I get a reference to a module from within that module? Also, how can I get a reference to the package containing that module? | 0 | python,self-reference | 2009-11-04T21:42:00.000 | 0 | 1,676,835 | If you have a class in that module, then the __module__ property of the class is the module name of the class. Thus you can access the module via sys.modules[klass.__module__]. This also works for functions. | 0 | 72,566 | false | 0 | 1 | How to get a reference to a module inside the module itself? | 1,676,861
2 | 7 | 0 | 0 | 187 | 1 | 0 | 0 | How can I get a reference to a module from within that module? Also, how can I get a reference to the package containing that module? | 0 | python,self-reference | 2009-11-04T21:42:00.000 | 0 | 1,676,835 | If all you need is to get access to module variable then use globals()['bzz'] (or vars()['bzz'] if it's module level). | 0 | 72,566 | false | 0 | 1 | How to get a reference to a module inside the module itself? | 70,034,466 |
1 | 4 | 0 | 1 | 1 | 1 | 0.049958 | 0 | I have made some changes in a python module in my checked out copy of a repository, and need to test them. However, when I try to run a script that uses the module, it keeps importing the module from the trunk of the repository, which is of no use to me.
I tried setting PYTHONPATH, which did nothing at all. After some searching around, I found that anything in the .pth files under site-packages directory will be put in even before PYTHONPATH (which to me defeats the purpose of having it). I believe this is the cause for my module not being picked.
Am I correct? If so, what is the way to override this (without modifying the script to have a sys.path.insert(0,path) )?
Edit: In reply to NicDumz - the original repository was under /projects/spam. The python modules were part of this in /projects/spam/sources/python/a/b/. However, these are 'built' every night using a homegrown make variant which then puts them into /projects/spam/build/lib/python/a/b/. The script is using the module under this last path only.
I have checked out the entire repository to under /home/sundar/spam, and made changes in /home/sundar/spam/sources/python/a/b/mymodule.py. I've set my PYTHONPATH to /home/sundar/spam/sources/python and tried to import a.b.mymodule with no success. | 0 | python,import,path,module | 2009-11-05T10:35:00.000 | 0 | 1,679,673 | Your current working directory is first in the sys.path. Anything there trumps anything else on the path.
Copy the "test version" to some place closer to the front of the list of directories in sys.path, like your current working directory. | 0 | 4,291 | false | 0 | 1 | How do I make Python pick the correct module without manually modifying sys.path? | 1,679,860 |
1 | 1 | 0 | 1 | 2 | 1 | 1.2 | 0 | There are lots of tutorials/instructions on how to embed python in an application, but nothing (that I've seen) on overall design for how the embedded interpreter should be used and interact with the application.
The only idea I could think of would be to simply give the user a method (menu option, etc) of executing scripts in the program. So certain classes, functions, objects, etc. would be exported to python, some script would do something, then said script could be run from the program.
Would such a design be "safe?" Meaning is it feasible for a malicious/poorly-written script to "damage" the program and/or computer? I assume its possible depending on the functions available to the script (e.g: it could try to overwrite some important files, etc.) How might one prevent such from happening? (e.g: script certification, program design, etc.)
This is implementation specific, but is it possible/feasible to have the effects of the script stay after its done running? Meaning if a script computes something, will the result be available to the program after execution of the script has finished? I think it is possible to do if the program were setup to interact with a specific script, but the program will be released before most scripts are written; and such a setup seems like a misuse of embedding a scripting language. Is there actually cases where you would want the result of a scripts execution to be available, or is this a contrived situation that doesn't really occur?
Are there any other designs for embedding python?
What about using python in a way similar to a plugin architecture?
Thanks,
Matthew A. Todd | 0 | python,embedding | 2009-11-05T19:03:00.000 | 0 | 1,682,831 | The only idea I could think of would be to simply give the user a method (menu option, etc) of executing scripts in the program.
Correct.
So certain classes, functions, objects, etc. would be exported to python, some script would do something, then said script could be run from the program.
Correct.
Would such a design be "safe?"
Yes. Unless your users are malicious, psychotic sociopaths. They want to make your program do useful things. They bought/downloaded the software in the first place. They think it has value.
They trusted your software. Why not trust them?
Meaning if a script computes something, will the result be available to the program after execution of the script has finished?
Programs like Apache do this all the time. You screw up the configuration ("script"), it crashes. Lesson learned? Don't screw up the configuration. | 0 | 282 | true | 0 | 1 | Embedding Python Design | 1,682,872 |
1 | 4 | 1 | -1 | 8 | 0 | -0.049958 | 0 | I have a lot of APIs/Classes that I have developed in Ruby and Python that I would like to use in my .NET apps. Is it possible to instantiate a Ruby or Python Object in C# and call its methods?
It seems that libraries like IronPython do the opposite of this. Meaning, they allow Python to utilize .NET objects, but not the reciprocal of this which is what I am looking for... Am I missing something here?
Any ideas? | 0 | c#,.net,python,ruby | 2009-11-05T22:37:00.000 | 0 | 1,684,145 | I have seen ways to call into Ruby / Python from C#. But it's easier the other way around. | 0 | 5,027 | false | 0 | 1 | Call Ruby or Python API in C# .NET | 1,684,168
4 | 8 | 0 | 24 | 12 | 0 | 1 | 0 | I'm a Java programmer and if there's one thing that I dislike about it, it would be speed. Java seems really slow, but a lot of the Python scripts/programs I have written so far seem really fast.
So I was just wondering if Python is faster than Java, or C# and how that compares to C/C++ (which I figure it'll be slower than)? | 0 | python,performance | 2009-11-06T08:27:00.000 | 0 | 1,686,192 | This totally depends on the use case. For long-running applications (like servers), Java has proven to be extremely fast - even faster than C. This is possible because the JVM may compile hot bytecode to machine code. While doing this, it can take full advantage of each and every feature of the CPU. This typically isn't possible for C, at least as soon as you leave your laboratory environment: just imagine distributing a dozen optimized builds to your clients - that simply won't work.
But back to your question: it really depends. E.g. if startup time is an issue (which isn't an issue for a server application, for instance), Java might not be the best choice. It may also depend on where your hot code areas are: if they are within native libraries with some Python code simply gluing them together, you will be able to get C-like performance with Python as well.
Typically, scripting languages will tend to be slower though - at least most of the time. | 0 | 29,026 | false | 0 | 1 | How fast is Python? | 1,686,232
4 | 8 | 0 | 0 | 12 | 0 | 0 | 0 | I'm a Java programmer and if there's one thing that I dislike about it, it would be speed. Java seems really slow, but a lot of the Python scripts/programs I have written so far seem really fast.
So I was just wondering if Python is faster than Java, or C# and how that compares to C/C++ (which I figure it'll be slower than)? | 0 | python,performance | 2009-11-06T08:27:00.000 | 0 | 1,686,192 | For Python, speed also depends on the interpreter implementation... I've seen that PyPy is generally faster than CPython. | 0 | 29,026 | false | 0 | 1 | How fast is Python? | 7,529,473
4 | 8 | 0 | 2 | 12 | 0 | 0.049958 | 0 | I'm a Java programmer and if there's one thing that I dislike about it, it would be speed. Java seems really slow, but a lot of the Python scripts/programs I have written so far seem really fast.
So I was just wondering if Python is faster than Java, or C# and how that compares to C/C++ (which I figure it'll be slower than)? | 0 | python,performance | 2009-11-06T08:27:00.000 | 0 | 1,686,192 | It is very hard to make a truly objective and general comparison of the runtime speed of two languages. In comparing any two languages X and Y, one often finds X is faster than Y in some respects while being slower in others. For me, this makes any benchmarks/comparisons available online largely useless. The best way is to test it yourself and see how fast each language is for the job that you are doing.
Having said that, there are certain things one should remember when testing languages like Java and Python. Code in these languages can often be speeded up significantly by using constructions more suited to the language (e.g. list comprehensions in Python, or using char[] and StringBuilder for certain String operations in Java). Moreover, for Python, using psyco can greatly boost the speed of the program. And then there is the whole issue of using appropriate data structures and keeping an eye on the runtime complexity of your code. | 0 | 29,026 | false | 0 | 1 | How fast is Python? | 1,686,811 |
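To check the effect of construct choice on your own machine, a small timeit comparison along those lines (absolute numbers vary wildly by interpreter and hardware, so only the agreement of the results is asserted here):

```python
import timeit

def manual_sum(n):
    # Explicit accumulation loop.
    total = 0
    for i in range(n):
        total += i
    return total

n = 10_000
# Time the explicit loop against the builtin, which does the
# accumulation in C on CPython.
t_loop = timeit.timeit(lambda: manual_sum(n), number=200)
t_builtin = timeit.timeit(lambda: sum(range(n)), number=200)
```

On CPython the builtin version is typically noticeably faster; compare `t_loop` and `t_builtin` yourself.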
4 | 8 | 0 | 0 | 12 | 0 | 0 | 0 | I'm a Java programmer and if there's one thing that I dislike about it, it would be speed. Java seems really slow, but a lot of the Python scripts/programs I have written so far seem really fast.
So I was just wondering if Python is faster than Java, or C# and how that compares to C/C++ (which I figure it'll be slower than)? | 0 | python,performance | 2009-11-06T08:27:00.000 | 0 | 1,686,192 | It's a question you can't answer properly, because it all depends on when it has to be fast.
Java is good for huge servers, but it's bad when you have to re-compile and test your code many times (compilation is very slow). Python doesn't even have to be compiled to test!
In a production environment, it's totally silly to say Java is faster than C... it's like saying C is faster than assembly.
Anyway, it's not possible to answer precisely: it all depends on what you want/need. | 0 | 29,026 | false | 0 | 1 | How fast is Python? | 1,686,388
2 | 7 | 0 | 7 | 50 | 1 | 1 | 0 | Python has a flag -O that you can execute the interpreter with. The option will generate "optimized" bytecode (written to .pyo files), and given twice, it will discard docstrings. From Python's man page:
-O Turn on basic optimizations. This changes the filename extension
for compiled (bytecode) files from .pyc to .pyo. Given twice,
causes docstrings to be discarded.
This option's two major features as I see it are:
Strip all assert statements. This trades defense against corrupt program state for speed. But don't you need a ton of assert statements for this to make a difference? Do you have any code where this is worthwhile (and sane?)
Strip all docstrings. In what application is the memory usage so critical, that this is a win? Why not push everything into modules written in C?
What is the use of this option?
Does it have a real-world value? | 0 | python,optimization,assert,bytecode | 2009-11-07T13:51:00.000 | 0 | 1,693,088 | I have never encountered a good reason to use -O. I have always assumed its main purpose is in case at some point in the future some meaningful optimization is added. | 0 | 12,397 | false | 0 | 1 | What is the use of Python's basic optimizations mode? (python -O) | 1,693,940 |
2 | 7 | 0 | 4 | 50 | 1 | 0.113791 | 0 | Python has a flag -O that you can execute the interpreter with. The option will generate "optimized" bytecode (written to .pyo files), and given twice, it will discard docstrings. From Python's man page:
-O Turn on basic optimizations. This changes the filename extension
for compiled (bytecode) files from .pyc to .pyo. Given twice,
causes docstrings to be discarded.
This option's two major features as I see it are:
Strip all assert statements. This trades defense against corrupt program state for speed. But don't you need a ton of assert statements for this to make a difference? Do you have any code where this is worthwhile (and sane?)
Strip all docstrings. In what application is the memory usage so critical, that this is a win? Why not push everything into modules written in C?
What is the use of this option?
Does it have a real-world value? | 0 | python,optimization,assert,bytecode | 2009-11-07T13:51:00.000 | 0 | 1,693,088 | You've pretty much figured it out: It does practically nothing at all. You're almost never going to see speed or memory gains, unless you're severely hurting for RAM. | 0 | 12,397 | false | 0 | 1 | What is the use of Python's basic optimizations mode? (python -O) | 1,693,128 |
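The assert-stripping effect described above is easy to observe by running the same two-line program with and without -O:

```python
import subprocess
import sys

# With asserts active, the first line raises; under -O it is stripped
# and execution reaches the print.
snippet = "assert False, 'stripped under -O'\nprint('ok')"

normal = subprocess.run([sys.executable, "-c", snippet],
                        capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", snippet],
                           capture_output=True, text=True)
```

With -OO, docstrings are dropped as well.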
2 | 3 | 0 | 1 | 10 | 1 | 0.066568 | 0 | I would like to give sources for what I'm saying, but I just don't have them; it's something I heard.
Once a programming professor told me that some software benchmarking done on .NET vs Python, on some particular items, gave a ratio of 5:8 in favor of .NET. That was his argument in favor of Python not being that much slower than .NET.
Here's the thing: I would like to try IronPython, since I could combine the web framework I know best (ASP.NET) with the language I like most (Python), and I was wondering about the speed of ASP.NET programs written in Python vs the speed of ASP.NET programs written in VB.NET or C#. Is there any software benchmarking on this?
Also, shouldn't the speed of IronPython compared to other .NET languages be similar, since IronPython, unlike Python, has to compile to .NET intermediate code? Can someone enlighten me on these issues?
Greetings | 0 | .net,python,performance,ironpython | 2009-11-07T14:27:00.000 | 0 | 1,693,205 | You could enable .net tracing, which outputs timing information at the bottom of the page. Make an app in C#/.Net and an app using Python and look at the differences in timing. That will give you a definitive answer.
In all honesty I think you're better off just using C#, it's "faster" to develop since the VS environment is there for you and it's going to run faster since it doesn't have to use the dynamic language runtime. | 0 | 6,879 | false | 0 | 1 | How does ironpython speed compare to other .net languages? | 1,694,288 |
2 | 3 | 0 | 0 | 10 | 1 | 0 | 0 | I would like to give sources for what I'm saying, but I just don't have them; it's something I heard.
Once a programming professor told me that some software benchmarking done on .NET vs Python, on some particular items, gave a ratio of 5:8 in favor of .NET. That was his argument in favor of Python not being that much slower than .NET.
Here's the thing: I would like to try IronPython, since I could combine the web framework I know best (ASP.NET) with the language I like most (Python), and I was wondering about the speed of ASP.NET programs written in Python vs the speed of ASP.NET programs written in VB.NET or C#. Is there any software benchmarking on this?
Also, shouldn't the speed of IronPython compared to other .NET languages be similar, since IronPython, unlike Python, has to compile to .NET intermediate code? Can someone enlighten me on these issues?
Greetings | 0 | .net,python,performance,ironpython | 2009-11-07T14:27:00.000 | 0 | 1,693,205 | IronPython will be considerably slower than C#. You could think of the comparison as very roughly between CPython and C, but with the gap somewhat smaller. | 0 | 6,879 | false | 0 | 1 | How does ironpython speed compare to other .net languages? | 2,617,858 |
1 | 1 | 0 | 3 | 3 | 0 | 1.2 | 1 | How do I get started with XML-RPC in Joomla? I've been looking around for documentation and finding nothing...
I'd like to connect to a Joomla server (after enabling the core Joomla XML-RPC plugin) and be able to do things like log in, add an article, and tweak all the parameters of the article if possible.
My xml-rpc client implementation will be in python. | 0 | python,joomla,xml-rpc | 2009-11-07T19:53:00.000 | 0 | 1,694,205 | the book "Mastering Joomla 1.5 Extension and Framework Development" has a nice explanation of that.
Joomla has a few XML-RPC plugins that let you do a few things, like the Blogger API interface (plugins/xmlrpc/blogger.php).
You should create your own XML-RPC plugin to do the custom things you want. | 0 | 2,670 | true | 0 | 1 | Joomla and XMLRPC | 1,696,183 |
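On the Python client side, the stdlib xmlrpc.client mechanics look like this. The method name `demo.addArticle` and the in-process stand-in server are invented for illustration; a real Joomla install exposes whatever methods its XML-RPC plugins register (e.g. the Blogger API) at its own endpoint URL:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Stand-in server so the sketch is self-contained; port 0 = pick a free one.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda title, text: {"title": title, "id": 1},
                         "demo.addArticle")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: dotted method names map straight onto attribute access.
client = ServerProxy(f"http://127.0.0.1:{port}/")
result = client.demo.addArticle("Hello", "First post")

server.shutdown()
```

Against a real Joomla endpoint you would point ServerProxy at the site's XML-RPC URL and call the methods its plugins expose.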
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | Currently I'm writing a crawler script. One problem is that
sometimes when I open a web page with PAMIE, the page fails to open and hangs forever.
Is there any method to close PAMIE's IE or win32com's IE,
for example if the web page doesn't respond or hasn't finished loading within 10 seconds or so?
thanks in advance | 0 | python,time,multithreading,pamie | 2009-11-08T23:40:00.000 | 0 | 1,698,362 | I think what you are looking for is somewhere to set the timeout on your request. I would suggest looking into the documentation on PAMIE. | 0 | 294 | false | 1 | 1 | win32com and PAMIE web page open timeout | 1,698,371 |
2 | 2 | 0 | 2 | 0 | 0 | 1.2 | 1 | Currently I'm writing a crawler script. One problem is that
sometimes when I open a web page with PAMIE, the page fails to open and hangs forever.
Is there any method to close PAMIE's IE or win32com's IE,
for example if the web page doesn't respond or hasn't finished loading within 10 seconds or so?
thanks in advance | 0 | python,time,multithreading,pamie | 2009-11-08T23:40:00.000 | 0 | 1,698,362 | Just use, to initialize your PAMIE instance, PAMIE(timeOut=100) or whatever. The units of measure for timeOut are "tenths of a second" (!); the default is 3000 (300 seconds, i.e., 5 minutes); with 300 as I suggested, you'd time out after 10 seconds as you request.
(You can pass the timeOut= parameter even when you're initializing with a URL, but in that case the timeout will only be active after the initial navigation). | 0 | 294 | true | 1 | 1 | win32com and PAMIE web page open timeout | 1,698,422 |
2 | 2 | 0 | 2 | 6 | 0 | 0.197375 | 0 | I want to pass data between a Python and a C# application in Windows (I want the channel to be bi-directional)
In fact I want to pass a struct containing data about a network packet that I've captured with C# (SharpPcap) to the Python app and then send back a modified packet to the C# program.
What do you propose? (I'd rather it be a fast method)
My searches so far revealed that I can use these technologies, but I don't know which:
JSON-RPC
Use WCF (run the project
under IronPython using Ironclad)
WCF (use Python for .NET) | 0 | c#,python,ipc,rpc,bidirectional | 2009-11-09T10:33:00.000 | 0 | 1,700,228 | Use JSON-RPC because the experience that you gain will have more practical use. JSON is widely used in web applications written in all of the dozen or so most popular languages. | 0 | 2,195 | false | 0 | 1 | IPC between Python and C# | 1,700,287 |
2 | 2 | 0 | 2 | 6 | 0 | 1.2 | 0 | I want to pass data between a Python and a C# application in Windows (I want the channel to be bi-directional)
In fact I want to pass a struct containing data about a network packet that I've captured with C# (SharpPcap) to the Python app and then send back a modified packet to the C# program.
What do you propose? (I'd rather it be a fast method)
My searches so far revealed that I can use these technologies, but I don't know which:
JSON-RPC
Use WCF (run the project
under IronPython using Ironclad)
WCF (use Python for .NET) | 0 | c#,python,ipc,rpc,bidirectional | 2009-11-09T10:33:00.000 | 0 | 1,700,228 | Why not use a simple socket communication, or if you wish you can start a simple http server, and/or do json-rpc over it. | 0 | 2,195 | true | 0 | 1 | IPC between Python and C# | 1,700,631 |
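Whichever RPC layer you choose, the wire format can stay as simple as newline-delimited JSON. Here is a self-contained sketch of one Python-side exchange (the field names are made up; a real setup would mirror your SharpPcap struct, with the C# side playing the client role):

```python
import json
import socket
import threading

def serve_once(listener):
    """Accept one connection, read a JSON packet, send it back modified."""
    conn, _ = listener.accept()
    with conn:
        packet = json.loads(conn.makefile().readline())
        packet["ttl"] = packet["ttl"] - 1  # the "modification" step
        conn.sendall((json.dumps(packet) + "\n").encode())

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

# The client here stands in for the C# program.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall((json.dumps({"src": "10.0.0.1", "ttl": 64}) + "\n").encode())
    reply = json.loads(c.makefile().readline())
```

On the C# side, any JSON library plus a TcpClient speaks the same protocol.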
2 | 13 | 1 | 10 | 10 | 1 | 1 | 0 | I need an IronPython/Python example that would show C#/VB.NET developers how awesome this language really is.
I'm looking for an easy to understand code snippet or application I can use to demo Python's capabilities.
Any thoughts? | 0 | python,ironpython | 2009-11-10T13:49:00.000 | 0 | 1,708,103 | Rewrite any small C# app in IronPython, and show them how many lines of code it took you. If that's not impressive, I don't know what is.
I'm referring to one of your internal apps. | 0 | 1,848 | false | 0 | 1 | How to impress developers with IronPython/Python | 1,708,382 |
2 | 13 | 1 | 3 | 10 | 1 | 0.046121 | 0 | I need an IronPython/Python example that would show C#/VB.NET developers how awesome this language really is.
I'm looking for an easy to understand code snippet or application I can use to demo Python's capabilities.
Any thoughts? | 0 | python,ironpython | 2009-11-10T13:49:00.000 | 0 | 1,708,103 | I have to agree Geo. Show a C# or VB app next to the same app written in IronPython. When I've done my IronPython talks, I've had a lot of success morphing C# code into Python. It makes for a very dramatic presentation.
I'm also a big fan of showing off how duck typing makes your code more testable. | 0 | 1,848 | false | 0 | 1 | How to impress developers with IronPython/Python | 1,711,358 |
1 | 3 | 0 | 7 | 6 | 1 | 1 | 0 | We currently run a small shared hosting service for a couple of hundred small PHP sites on our servers. We'd like to offer Python support too, but from our initial research at least, a server restart seems to be required after each source code change.
Is this really the case? If so, we're just not going to be able to offer Python hosting support. Giving our clients the ability to upload files is easy, but we can't have them restart the (shared) server process!
PHP is easy -- you upload a new version of a file, the new version is run.
I've a lot of respect for the Python language and community, so find it hard to believe that it really requires such a crazy process to update a site's code. Please tell me I'm wrong! :-) | 0 | python,web-hosting | 2009-11-10T21:50:00.000 | 0 | 1,711,483 | Python is a compiled language; the compiled byte code is cached by the Python process for later use, to improve performance. PHP, by default, is interpreted. It's a tradeoff between usability and speed.
If you're using a standard WSGI module, such as Apache's mod_wsgi, then you don't have to restart the server -- just touch the .wsgi file and the code will be reloaded. If you're using some weird server which doesn't support WSGI, you're sort of on your own usability-wise. | 0 | 1,525 | false | 0 | 1 | Python web hosting: Why are server restarts necessary? | 1,711,705 |
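That "touch the .wsgi file" step is just an mtime update (`touch app.wsgi` in a shell): mod_wsgi in daemon mode watches the script's modification time and reloads the application when it changes. A sketch of the equivalent from Python, showing the timestamp really moves:

```python
import os
import pathlib
import tempfile
import time

# A throwaway stand-in for the deployed mod_wsgi script file.
wsgi_file = pathlib.Path(tempfile.mkdtemp()) / "app.wsgi"
wsgi_file.write_text("def application(environ, start_response): ...\n")

before = wsgi_file.stat().st_mtime
# Bump atime/mtime forward -- the Python equivalent of `touch`.
os.utime(wsgi_file, (time.time() + 1, time.time() + 1))
after = wsgi_file.stat().st_mtime
```

No server process restart is needed; only the application within the daemon process is reloaded.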
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | I'm a beginner with Pylons and I've mostly developed on my localhost using the built-in web server. I think it's time to start deployment for my personal blog, I have a Debian Lenny server with apache2-mpm-prefork module and mod_wsgi - I've never really used mod_wsgi or fastcgi and I hear either of these are the way to go.
My questions:
Should I go with mod_wsgi or fastcgi and why?
Where should I be creating my web application? Should I create an entirely new user for it? Should I store it in /home/meder/web-app ? I currently have some php websites being hosted on my server and they live in /www/ which is a directory I created. Is there any sorta gotcha with static binary files such as images, as there is with django? | 0 | python,apache,deployment,apache2,pylons | 2009-11-11T03:50:00.000 | 0 | 1,712,883 | mod_wsgi. It's more efficient. FastCGI can be troublesome to setup, whereas I've never known anyone to have a problem using mod_wsgi with a supported version of Python (2.5, 2.6, 3.1 included). WSGI exists for Python (by Python, &c.) and so it makes for a more "Pythonic" experience. Prior to WSGI I used to serve small Pylons apps via paste behind mod_proxy (due to massive issues with fastcgi).
Anywhere is fine, any user is fine. If you're worried about security, you may wish to add another user. You could create a home folder in /www/ if you were so inclined :) Static binary files, images, etc., should be served separately if you can, but Pylons had (actually, I believe still does have) a method of serving these (this should be the 'public' folder). I would still use a separate mount as Apache is more efficient at serving these than passing them through Pylons. | 0 | 450 | true | 1 | 1 | Pylons deployment questions | 1,712,913 |
1 | 1 | 0 | 4 | 4 | 0 | 1.2 | 0 | Are there database testing tools for python (like sqlunit)? I want to test the DAL that is built using sqlalchemy | 1 | python,database,testing,sqlalchemy | 2009-11-12T01:27:00.000 | 0 | 1,719,279 | Follow the design pattern that Django uses.
Create a disposable copy of the database. Use SQLite3 in-memory, for example.
Create the database using the SQLAlchemy table and index definitions. This should be a fairly trivial exercise.
Load the test data fixture into the database.
Run your unit test case in a database with a known, defined state.
Dispose of the database.
If you use SQLite3 in-memory, this procedure can be reasonably fast. | 0 | 601 | true | 0 | 1 | Are there database testing tools for python (like sqlunit)? | 1,719,347 |
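The five steps above can be sketched with the stdlib `sqlite3` module to keep the example dependency-free; with SQLAlchemy the shape is the same (step 2 becomes `metadata.create_all(engine)`). Table names and fixture rows here are made up:

```python
import sqlite3
import unittest

class UserDALTest(unittest.TestCase):
    def setUp(self):
        # 1. Disposable copy of the database: SQLite in memory.
        self.conn = sqlite3.connect(":memory:")
        # 2. Create the schema.
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        # 3. Load a known test fixture.
        self.conn.executemany("INSERT INTO users (name) VALUES (?)",
                              [("alice",), ("bob",)])
        self.conn.commit()

    def tearDown(self):
        # 5. Dispose of the database -- closing drops the in-memory DB entirely.
        self.conn.close()

    def test_user_count(self):
        # 4. Run the test against a known, defined state.
        (count,) = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()
        self.assertEqual(count, 2)
```

Because the database lives only in memory, each test method starts from the same defined state and teardown is effectively free.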
1 | 2 | 0 | 6 | 5 | 0 | 1 | 0 | I am developing a twisted server. I need to control the memory usage. It is not a good idea to modify code, insert some memory logging command and restart the server. I think it is better to use a "remote console", so that I can type heapy command and see the response from the server directly. All I need is a remote console, I can build one by myself, but I don't like to rebuild a wheel. My question is: is there already any remote console for twisted?
Thanks. | 0 | python,console,twisted | 2009-11-12T11:55:00.000 | 1 | 1,721,699 | Take a look at twisted.manhole | 0 | 1,027 | false | 0 | 1 | Is there any "remote console" for twisted server? | 1,721,715 |
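To make the "remote console" idea concrete: the eval core that such a console wraps can be sketched with the stdlib `code` module (shown here in modern Python 3 for brevity). This is not Twisted's manhole itself - manhole additionally ships the network wiring (e.g. over telnet/SSH) that feeds lines into an interpreter like this:

```python
import code
import contextlib
import io

class ConsoleSession:
    """Minimal eval core of a remote console: feed it source, get back output.

    A real deployment would wrap this in a network protocol (for instance a
    Twisted LineReceiver); twisted.manhole provides that wiring ready-made.
    """
    def __init__(self, namespace=None):
        self.interp = code.InteractiveInterpreter(namespace or {})

    def run(self, source):
        buf = io.StringIO()
        # Capture stdout so the result can be sent back over the wire.
        with contextlib.redirect_stdout(buf):
            self.interp.runsource(source)
        return buf.getvalue()

# Hypothetical server-side object exposed to the console's namespace.
session = ConsoleSession({"cache_size": 1234})
reply = session.run("print(cache_size * 2)")
```

With this in place, a heapy command typed at the console is just another `source` string evaluated inside the running server's namespace.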
3 | 4 | 1 | 0 | 3 | 0 | 0 | 0 | I need to load large models and other structured binary data on an older CD-based game console as efficiently as possible. What's the best way to do it? The data will be exported from a Python application. This is a pretty elaborate hobby project.
Requirements:
no reliance on fully standard compliant STL - i might use uSTL though.
as little overhead as possible. Aim for a solution so good that it could be used on the original Playstation, and yet as modern and elegant as possible.
no backward/forward compatibility necessary.
no copying of large chunks around - preferably files get loaded into RAM in background, and all large chunks accessed directly from there later.
should not rely on the target having the same endianness and alignment, i.e. a C plugin in Python which dumps its structs to disc would not be a very good idea.
should allow to move the loaded data around, as with individual files 1/3 the RAM size, fragmentation might be an issue. No MMU to abuse.
robustness is a great bonus, as my attention span is very short, i.e. i'd change saving part of the code and forget the loading one or vice versa, so at least a dumb safeguard would be nice.
exchangeability between loaded data and runtime-generated data without runtime overhead and without severe memory management issues would be a nice bonus.
I kind of have a semi-plan of parsing in Python trivial, limited-syntax C headers which would use structs with offsets instead of pointers, and convenience wrapper structs/classes in the main app with getters which would convert offsets to properly typed pointers/references, but i'd like to hear your suggestions.
Clarification: the request is primarily about data loading framework and memory management issues. | 0 | c++,python,embedded,playstation | 2009-11-13T06:56:00.000 | 0 | 1,727,594 | Consider storing your data as BLOBs in a SQLite DB. SQLite is extremely portable and lightweight, ANSI C, has both C++ and Python interfaces. This will take care of large files, no fragmentation, variable-length records with fast access, and so on. The rest is just serialization of structs to these BLOBs. | 0 | 270 | false | 0 | 1 | Optimal datafile format loading on a game console | 1,727,732
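The "serialization of structs to BLOBs" step - done host-side with a fixed endianness so the console never has to byte-swap - can be sketched with Python's `struct` module. The record layout here (counts, an offset instead of a pointer, a float) is a made-up example, not the asker's actual format:

```python
import struct

# Hypothetical record layout, forced to little-endian ("<") regardless of the
# host CPU, so the console side can rely on one fixed wire format:
#   u32 vertex_count, u32 offset_to_vertices (offset, not pointer), f32 scale
RECORD = struct.Struct("<IIf")

def pack_record(vertex_count, vertex_offset, scale):
    """Host-side: serialize one record to a bytes blob."""
    return RECORD.pack(vertex_count, vertex_offset, scale)

def unpack_record(blob):
    """Reference reader, mirroring what the target-side C would do."""
    return RECORD.unpack_from(blob, 0)

blob = pack_record(vertex_count=512, vertex_offset=16, scale=1.0)
```

Storing offsets rather than pointers means the blob stays valid wherever the loader places it in RAM, which also makes defragmentation by moving blocks possible.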
3 | 4 | 1 | 3 | 3 | 0 | 0.148885 | 0 | I need to load large models and other structured binary data on an older CD-based game console as efficiently as possible. What's the best way to do it? The data will be exported from a Python application. This is a pretty elaborate hobby project.
Requirements:
no reliance on fully standard compliant STL - i might use uSTL though.
as little overhead as possible. Aim for a solution so good that it could be used on the original Playstation, and yet as modern and elegant as possible.
no backward/forward compatibility necessary.
no copying of large chunks around - preferably files get loaded into RAM in background, and all large chunks accessed directly from there later.
should not rely on the target having the same endianness and alignment, i.e. a C plugin in Python which dumps its structs to disc would not be a very good idea.
should allow to move the loaded data around, as with individual files 1/3 the RAM size, fragmentation might be an issue. No MMU to abuse.
robustness is a great bonus, as my attention span is very short, i.e. i'd change saving part of the code and forget the loading one or vice versa, so at least a dumb safeguard would be nice.
exchangeability between loaded data and runtime-generated data without runtime overhead and without severe memory management issues would be a nice bonus.
I kind of have a semi-plan of parsing in Python trivial, limited-syntax C headers which would use structs with offsets instead of pointers, and convenience wrapper structs/classes in the main app with getters which would convert offsets to properly typed pointers/references, but i'd like to hear your suggestions.
Clarification: the request is primarily about data loading framework and memory management issues. | 0 | c++,python,embedded,playstation | 2009-11-13T06:56:00.000 | 0 | 1,727,594 | I note that nowhere in your description do you ask for "ease of programming". :-)
Thus, here's what comes to mind for me as a way of creating this:
The data should be in the same on-disk format as it would be in the target's memory, such that it can simply pull blobs from disk into memory with no reformatting it. Depending on how much freedom you want in putting things into memory, the "blobs" could be the whole file, or could be smaller bits within it; I don't understand your data well enough to suggest how to subdivide it but presumably you can. Because we can't rely on the same endianness and alignment on the host, you'll need to be somewhat clever about translating things when writing the files on the host-side, but at least this way you only need the cleverness on one side of the transfer rather than on both.
In order to provide a bit of assurance that the target-side and host-side code matches, you should write this in a form where you provide a single data description and have some generation code that will generate both the target-side C code and the host-side Python code from it. You could even have your generator generate a small random "version" number in the process, and have the host-side code write this into the file header and the target-side check it, and give you an error if they don't match. (The point of using a random value is that the only information bit you care about is whether they match, and you don't want to have to increment it manually.) | 0 | 270 | false | 0 | 1 | Optimal datafile format loading on a game console | 1,728,074 |
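The single-description / dual-generator idea with a baked-in random version number might be sketched like this; the field list, type names, and output shapes are assumptions for illustration:

```python
import random

def generate_bindings(fields, seed=None):
    """Emit matching target-side C and host-side Python from one description.

    `fields` is a list of (name, c_type) pairs.  A random FORMAT_VERSION is
    baked into both outputs, so a mismatched generator pair fails loudly at
    load time instead of silently misreading data.
    """
    rng = random.Random(seed)
    version = rng.randrange(1 << 16)
    c_lines = ["#define FORMAT_VERSION 0x%04X" % version, "typedef struct {"]
    c_lines += ["    %s %s;" % (ctype, name) for name, ctype in fields]
    c_lines.append("} Record;")
    py_lines = ["FORMAT_VERSION = 0x%04X" % version]
    return "\n".join(c_lines), "\n".join(py_lines), version

c_src, py_src, version = generate_bindings(
    [("vertex_count", "uint32_t"), ("scale", "float")], seed=42)
```

The host writes `FORMAT_VERSION` into each file header; the target checks it against its compiled-in constant, giving the "dumb safeguard" the question asks for with no manual bookkeeping.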
3 | 4 | 1 | 4 | 3 | 0 | 0.197375 | 0 | I need to load large models and other structured binary data on an older CD-based game console as efficiently as possible. What's the best way to do it? The data will be exported from a Python application. This is a pretty elaborate hobby project.
Requirements:
no reliance on fully standard compliant STL - i might use uSTL though.
as little overhead as possible. Aim for a solution so good that it could be used on the original Playstation, and yet as modern and elegant as possible.
no backward/forward compatibility necessary.
no copying of large chunks around - preferably files get loaded into RAM in background, and all large chunks accessed directly from there later.
should not rely on the target having the same endianness and alignment, i.e. a C plugin in Python which dumps its structs to disc would not be a very good idea.
should allow to move the loaded data around, as with individual files 1/3 the RAM size, fragmentation might be an issue. No MMU to abuse.
robustness is a great bonus, as my attention span is very short, i.e. i'd change saving part of the code and forget the loading one or vice versa, so at least a dumb safeguard would be nice.
exchangeability between loaded data and runtime-generated data without runtime overhead and without severe memory management issues would be a nice bonus.
I kind of have a semi-plan of parsing in Python trivial, limited-syntax C headers which would use structs with offsets instead of pointers, and convenience wrapper structs/classes in the main app with getters which would convert offsets to properly typed pointers/references, but i'd like to hear your suggestions.
Clarification: the request is primarily about data loading framework and memory management issues. | 0 | c++,python,embedded,playstation | 2009-11-13T06:56:00.000 | 0 | 1,727,594 | On platforms like the Nintendo GameCube and DS, 3D models are usually stored in a very simple custom format:
A brief header, containing a magic number identifying the file, the number of vertices, normals, etc., and optionally a checksum of the data following the header (Adler-32, CRC-16, etc).
A possibly compressed list of 32-bit floating-point 3-tuples for each vector and normal.
A possibly compressed list of edges or faces.
All of the data is in the native endian format of the target platform.
The compression format is often trivial (Huffman), simple (Arithmetic), or standard (gzip). All of these require very little memory or computational power.
You could take formats like that as a cue: it's quite a compact representation.
My suggestion is to use a format most similar to your in-memory data structures, to minimize post-processing and copying. If that means you create the format yourself, so be it. You have extreme needs, so extreme measures are needed. | 0 | 270 | false | 0 | 1 | Optimal datafile format loading on a game console | 1,728,071 |
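A header-plus-payload format of the kind described (magic number, counts, checksum, then packed little-endian float32 triples) can be exported from Python with `struct` and `zlib.adler32`. The exact field order and magic value below are illustrative, not a real console format:

```python
import struct
import zlib

MAGIC = b"MDL0"  # hypothetical magic number identifying the file

def write_model(vertices):
    """Host-side exporter: header + packed little-endian float32 3-tuples."""
    payload = b"".join(struct.pack("<3f", *v) for v in vertices)
    header = struct.pack("<4sII", MAGIC, len(vertices), zlib.adler32(payload))
    return header + payload

def read_model(blob):
    """Reference reader, mirroring the target-side checks."""
    magic, count, checksum = struct.unpack_from("<4sII", blob, 0)
    payload = blob[12:]
    if magic != MAGIC:
        raise ValueError("bad magic")
    if zlib.adler32(payload) != checksum:
        raise ValueError("corrupt payload")
    return [struct.unpack_from("<3f", payload, i * 12) for i in range(count)]

blob = write_model([(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)])
```

Adler-32 is cheap enough to verify at load time even on weak hardware, which is why formats like these favor it over heavier checksums.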
9 | 10 | 0 | 1 | 10 | 1 | 0.019997 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | I agree with the other answers that there are disadvantages, but I'm surprised that so few have talked about advantages.
TDD'ing across languages is a great way to learn new languages: write the tests in a language you know well, and the implementation in the language you are learning. As you are learning, you will discover better ways of writing the code than you first did, but refactoring is easy because you have a test suite.
Having to master multiple languages keeps you (and your team) sharp.
You get better verification that your API is interoperable across languages. | 0 | 1,494 | false | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 1,731,244 |
9 | 10 | 0 | 2 | 10 | 1 | 0.039979 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | I think that this is an excellent question and an idea worthy of consideration - particularly in an environment like Visual Studio/.NET where this is easily supported
The plus side - as you suggest - is that you can choose to use a language/tool to create tests that is more suited to creating tests than perhaps the code you are using to create code and for this reason alone its worth a thought.
The down side is, as suggested, that your developers - those creating the tests (and we must remember not to confuse Unit Testing with Test Driven Design) probably ought to be fluent in more than one language (I'd suggest that the ability to be so is fairly important to a good developer but I'm biased!) - and more importantly that you may have to worry about structural differences between the two (though again, if you're talking about .NET languages that should be covered for you).
It gets even more interesting if you go beyond "unit" tests to tests at all levels where the specific capabilities of particular languages may give advantages in building up a test case.
The ultimate question is whether the advantages outweigh the disadvantages... and that's probably somewhat case specific. | 0 | 1,494 | false | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 1,730,103 |
9 | 10 | 0 | 1 | 10 | 1 | 0.019997 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | The biggest disadvantage is that you are also testing compiler dependencies in your unit tests. In a sense, this also makes them integration tests. That might make them preferable if you expect your code to be usable from multiple languages, but it's adding one level of complexity that you may not need if your code will only be used in production with the language that it's developed in.
One advantage that I can see is that it further isolates that code being developed from the test itself. By separating the act of writing the test even further from the actual code under development, it forces you to really consider how the code should be written to pass the test rather than simply moving the initial development of the code into the test itself. | 0 | 1,494 | false | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 1,729,896 |
9 | 10 | 0 | 1 | 10 | 1 | 0.019997 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | When building an API or library, I often deliver unit tests as a form of documentation on how best to use the API or library. Most of the time I build a C# library, I'm delivering to a client who will be writing C# code to use the library.
For documentation sake, at least some of my tests will always be written in the target language. | 0 | 1,494 | false | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 1,729,874 |
9 | 10 | 0 | 1 | 10 | 1 | 0.019997 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | The biggest tradeoff would be if someone was looking at using Unit Tests to figure out how a certain action may be performed, writing it in a different language would make it harder to use. Also you would have to make your C# code CLS compliant. | 0 | 1,494 | false | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 1,729,838 |
9 | 10 | 0 | 0 | 10 | 1 | 0 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | Experience 1
In the past year I worked on a project that had two main pieces:
A Grails web application
and a Java Swing application
we wrote our Java unit tests using Groovy and it worked out well. Unit testing with groovy took a lot less time verbosity wise and also made it possible to fake static methods and so forth. Although there were a couple places where we ran into unexpected results due to Groovy's dynamic typing, on the whole it was a positive experience.
Experience 2
Another recent project I worked on was a C# ASP.NET MVC 3 web application where I wrote all the unit tests with F#. I chose C# for the web app because I felt it worked better with MVC 3. I chose F# for unit tests because it is my favorite language. The only issues I ran into were minor annoyances with types like null vs. option.
Conclusion
When we have two languages targeting the same runtime (well, C# / F# and Java / Groovy on CLR and JRE respectively) where one is used for production code and one for unit tests, there isn't too much to worry about regarding compatibility at least. Then I think it's really a questions of whether you and your team feel comfortable enough in both languages (and I suppose you might be kind enough to consider future maintainers as well). Indeed, I think the times you'd be compelled to use a different language for unit testing is when the unit testing language is actually your language of comfort or choice over your production language.
There are some exceptions I'd admit to this liberal attitude. If you're are designing a library to be consumed by language X then it may be a smart idea to write your unit tests in language X (some at least). But for application code, I've never found writing unit tests in the same language as your production code particularly advantageous. | 0 | 1,494 | false | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 9,560,983 |
9 | 10 | 0 | 7 | 10 | 1 | 1 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | One obvious potential problem is a confusing debugging experience. I haven't personally tried to debug across languages in .NET - what happens if you step into C# code from IronPython?
The other problem is that anyone wanting to develop on your code base has to know both languages. | 0 | 1,494 | false | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 1,729,848 |
9 | 10 | 0 | 11 | 10 | 1 | 1.2 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | Disadvantages that come to my mind:
Depending on the language, you need another development environment (additional dependency of the project, additional effort to setup a development machine, additional licenses, additional training ...)
Refactoring is sometimes supported by the IDE - but most probably not for this other language. So you have to refactor them manually.
Unit tests can also be used as programming examples. Tests show how the tested classes are intended to be used. This does not work so well if the tests are written in a different language. | 0 | 1,494 | true | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 1,729,849 |
9 | 10 | 0 | 2 | 10 | 1 | 0.039979 | 0 | Unit tests have different requirements than production code. For example, unit tests may not have to be as performant as the production code.
Perhaps it sometimes makes sense to write your unit tests in a language that is better suited to writing unit tests? The specific example I have in mind is writing an application in C# but using IronRuby or IronPython to write the tests.
As I see it, using IronPython and IronRuby have several advantages over C# code as a testing language:
Mocking can be simpler in dynamically typed languages
IronPython has less verbose type annotations that are not needed in unit tests
Experimental invocation of tests without recompilation by typing commands at the interpreter
What are the tradeoffs in using two different languages for tests and production code? | 0 | c#,.net,unit-testing,ironpython | 2009-11-13T15:09:00.000 | 0 | 1,729,791 | Main disadvantage as I see it is maintainablity. If you code in C#, your development team are competent in that language, as will new hires be. You need a multi-functional dev team.
I think that it's also worth noting that you probably don't want people writing their tests in a language that is maybe not their strongest. Your test code needs to be robust.
You also need to be switching between syntaxes whilst writing codes/tests - this is a bit of a nuisance. | 0 | 1,494 | false | 0 | 1 | What are the (dis)advantages of writing unit tests in a different language to the code? | 1,729,845 |
1 | 2 | 1 | 2 | 1 | 1 | 1.2 | 0 | I am having an issue with an embedded 64-bit Python instance not liking PIL. Before I start exhausting more methods to get a compiled image editor to read the pixels for me (such as ImageMagick), I am hoping someone here can think of a purely Python solution that will be comparable in speed to the compiled counterparts.
Now I am aware that the compiled friends will always be much faster, but I am hoping that because I "just" want to read the alpha of a group of pixels, a fast enough pure Python solution can be conjured up. Anyone have any bright ideas?
Though I have tried PyPNG and that is far too slow, so I'm not expecting any magic solutions. Nonetheless, I had to ask.
Thanks for any replies!
And just for reference, the images i'll be reading will be on average around 512*512 to 2048*2048, and i'll be reading anywhere from one to all of the pixels alpha (multiplied by a few million times, but the values can be stored so reading twice isn't done). | 0 | python,png | 2009-11-14T00:30:00.000 | 0 | 1,732,761 | Getting data out of a PNG requires unpacking data and decompressing it. These are likely going to be too slow in Python for your application. One possibility is to start with PyPNG and get rid of anything in it that you don't need. For example, it is probably storing all of the data it reads from the PNG, and some of the slow speed you see may be due to the memory allocations. | 0 | 955 | true | 0 | 1 | Reading Alpha of a PNG Pixel. Fast way via pure Python? | 1,732,962 |
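For context on where the cost actually lies: walking a PNG's chunk structure is cheap in pure Python; the expensive part is inflating the IDAT stream and undoing per-scanline filters to reach the alpha bytes (the work PyPNG does). The sketch below shows only the cheap chunk-walking layer, exercised against a tiny headers-only PNG built in memory - it does not decode pixels:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def iter_chunks(data):
    """Yield (type, payload) for each chunk in a PNG byte string."""
    assert data[:8] == PNG_SIG, "not a PNG"
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack_from(">I", data, pos)
        ctype = data[pos + 4:pos + 8]
        payload = data[pos + 8:pos + 8 + length]
        # Last 4 bytes of each chunk are a CRC-32 over type + payload.
        (crc,) = struct.unpack_from(">I", data, pos + 8 + length)
        assert crc == zlib.crc32(ctype + payload) & 0xFFFFFFFF, "bad CRC"
        yield ctype, payload
        pos += 12 + length

def parse_ihdr(data):
    """Return (width, height, bit_depth, color_type) from the IHDR chunk."""
    for ctype, payload in iter_chunks(data):
        if ctype == b"IHDR":
            return struct.unpack_from(">IIBB", payload, 0)
    raise ValueError("no IHDR chunk")

def _chunk(ctype, payload):
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload) & 0xFFFFFFFF))

# Minimal synthetic 512x512 8-bit RGBA PNG (headers only, no pixel data).
ihdr = struct.pack(">IIBBBBB", 512, 512, 8, 6, 0, 0, 0)
fake_png = PNG_SIG + _chunk(b"IHDR", ihdr) + _chunk(b"IEND", b"")
```

Since the filter-reconstruction step is what dominates, trimming PyPNG down (as the answer suggests) or caching decoded alpha planes is more promising than rewriting this layer.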
1 | 1 | 0 | 1 | 4 | 0 | 0.197375 | 0 | Can anyone suggest a good Python 3 library for sending / receiving realtime MIDI? | 0 | python,python-3.x,midi | 2009-11-16T14:06:00.000 | 0 | 1,742,382 | Why Python 3? It generally doesn't have many libraries yet. Generally you want to look into high-level C-libraries with Python wrappers. I doubt many of these work under Python 3 at the moment. | 0 | 618 | false | 0 | 1 | Python 3 Library for Realtime Midi Communication | 2,089,977
2 | 2 | 1 | 27 | 39 | 0 | 1 | 0 | From the web I've gleaned that WSGI is a CGI for python web development/frameworks. FCGI seems to be a more generalised gateway for a variety of languages. Don't know the performance difference between the two in reference to the languages python and C/++. | 0 | python,wsgi,fastcgi | 2009-11-17T07:59:00.000 | 0 | 1,747,266 | They are two different things. WSGI is a Python specific interface for writing web applications. There are wrappers for about any web server protocol to provide the WSGI interface. FastCGI (FCGI) is one of such web server protocols. So, WSGI is an abstraction layer, while CGI / FastCGI / mod_python are how the actual web servers talk to the application. Some code has to translate the native interface to WSGI (there is a CGI module in wsgiref, there is flup for FastCGI, etc.). There is also mod_wsgi for Apache, which does the translation directly in an Apache module, so you don't need any Python wrapper. | 0 | 25,057 | false | 1 | 1 | Is there a speed difference between WSGI and FCGI? | 1,747,336 |
2 | 2 | 1 | 80 | 39 | 0 | 1.2 | 0 | From the web I've gleaned that WSGI is a CGI for python web development/frameworks. FCGI seems to be a more generalised gateway for a variety of languages. Don't know the performance difference between the two in reference to the languages python and C/++. | 0 | python,wsgi,fastcgi | 2009-11-17T07:59:00.000 | 0 | 1,747,266 | Correct, WSGI is a Python programmatic API definition and FASTCGI is a language agnostic socket wire protocol definition. Effectively they are at different layers with WSGI being a higher layer. In other words, one can implement WSGI on top of something that so happened to use FASTCGI to communicate with a web server, but not the other way around.
In general, FASTCGI being a socket wire protocol means that you always need some type of programmatic interface on top to use it. For Python one such option is WSGI. As FASTCGI is just a means to an end, one can't really compare its performance to WSGI in that case because WSGI isn't a comparable socket wire protocol, but a user of FASTCGI itself.
One could try and compare performance of different language interfaces on top of FASTCGI, but in general that is quite meaningless in itself as the lower network layer and server request handling aren't the bottleneck. Instead your application code and database will be. | 0 | 25,057 | true | 1 | 1 | Is there a speed difference between WSGI and FCGI? | 1,748,161 |
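The "WSGI is a Python API, not a wire protocol" distinction can be made concrete: a WSGI application is just a callable, so a test can invoke it directly with a fabricated `environ` - no web server, FastCGI socket, or network involved. The app below is a minimal sketch:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A WSGI "application" is a callable: environ dict in, body iterable out.
    body = ("Hello from " + environ["PATH_INFO"]).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Because WSGI is an in-process API, we can skip the server entirely:
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/demo"

captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(app(environ, start_response))
```

The same callable could then be mounted behind mod_wsgi, or behind a FastCGI bridge like flup, without changing a line - which is exactly the layering the answer describes.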
1 | 9 | 0 | 1 | 10 | 0 | 0.022219 | 0 | I want to do full integration testing for a web application. I want to test many things like AJAX, positioning and presence of certain phrases and HTML elements using several browsers. I'm seeking a tool to do such automated testing.
On the other hand; this is my first time using integration testing. Are there any specific recommendations when doing such testing? Any tutorial as well?
(As a note: My backend code is done using Perl, Python and Django.)
Thanks! | 0 | python,ruby,perl,automated-tests,integration-testing | 2009-11-17T09:59:00.000 | 0 | 1,747,772 | I would also recommend Selenium. It has a really nice Firefox plugin that you can use to create your integration tests. | 0 | 5,279 | false | 1 | 1 | Integration Testing for a Web App | 1,747,828