Dataset columns:

- Available Count: int64 (1 to 31)
- AnswerCount: int64 (1 to 35)
- GUI and Desktop Applications: int64 (0 to 1)
- Users Score: int64 (-17 to 588)
- Q_Score: int64 (0 to 6.79k)
- Python Basics and Environment: int64 (0 to 1)
- Score: float64 (-1 to 1.2)
- Networking and APIs: int64 (0 to 1)
- Question: string (lengths 15 to 7.24k)
- Database and SQL: int64 (0 to 1)
- Tags: string (lengths 6 to 76)
- CreationDate: string (length 23)
- System Administration and DevOps: int64 (0 to 1)
- Q_Id: int64 (469 to 38.2M)
- Answer: string (lengths 15 to 7k)
- Data Science and Machine Learning: int64 (0 to 1)
- ViewCount: int64 (13 to 1.88M)
- is_accepted: bool (2 classes)
- Web Development: int64 (0 to 1)
- Other: int64 (1 to 1)
- Title: string (lengths 15 to 142)
- A_Id: int64 (518 to 72.2M)
1 | 2 | 0 | 3 | 2 | 0 | 0.291313 | 0 |
I've just installed a base Gentoo stage 3 and I get the following error when I try to call time.time():
sbx / # python
Python 2.7.1 (r271:86832, May 22 2011, 14:53:09)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.time()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 0] Error
I found this because when I try to run emerge I get:
sbx / # emerge
Traceback (most recent call last):
File "/usr/bin/emerge", line 32, in <module>
from _emerge.main import emerge_main
File "/usr/lib/portage/pym/_emerge/main.py", line 6, in <module>
import logging
File "/usr/lib/python2.7/logging/__init__.py", line 94, in <module>
_startTime = time.time()
IOError: [Errno 11] Resource temporarily unavailable
This is a custom kernel and I just made sure I compiled in RTC support, but still no luck. Any ideas on why this is happening?
| 0 |
python,linux,gentoo
|
2011-05-25T18:26:00.000
| 1 | 6,129,054 |
Did it work before your custom kernel? Boot into a rescue CD, chroot into your gentoo env, and run your script. If it works, it's your kernel. That's about as specific as I can be.
| 0 | 757 | false | 0 | 1 |
Python time.time() -> IOError
| 6,129,347 |
1 | 3 | 1 | 1 | 1 | 1 | 0.066568 | 0 |
I am reading a little about Objective-C, and since it descends from Smalltalk, as Ruby does, I wonder: if Ruby were used to develop an iPhone or Mac app (supposing Apple changed Xcode to support it), would it really be unsuitable because of speed? For example, Angry Birds probably wouldn't be as smooth as it is now if written in Ruby rather than Objective-C, which is compiled and runs as machine code.
But currently, is there any way of compiling Ruby or Python code into machine code so that it runs in a similar speed zone to Objective-C programs?
| 0 |
python,objective-c,ruby,ios
|
2011-05-25T18:28:00.000
| 0 | 6,129,086 |
Python and Ruby themselves are not too fast, but many programs today are written so that they do almost none of the heavy lifting alone. For instance, if you want to save a PNG file as a JPG, you are certainly going to use the built-in system calls to do it. In that case it does not really matter whether it takes 0.00001 seconds in Obj-C or 0.001 seconds in Python to process the mouse click, when the optimized system code that converts the image is the exact same code in both programs and takes, say, half a second to run. On the other hand, if you are making a low-level data munger of your own design, then you might want some C. Basically every language you would use on a Mac lets you write something in C and call it from that language, so even that does not slow you down.
Usually, pick the language based on what everyone else is doing for that problem, intersected with your own abilities.
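The "write it in C and call it" escape hatch mentioned above can be sketched with the standard-library ctypes module, here borrowing sqrt from the C math library:

```python
import ctypes
import ctypes.util

# Locate the C math library; find_library resolves the platform-specific
# name (falling back to the common glibc soname if the lookup fails).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the signature of sqrt() so ctypes converts arguments correctly.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951 -- computed in compiled C code
```

The same pattern works for your own compiled code: build it as a shared library and load it the same way.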
| 0 | 1,121 | false | 0 | 1 |
Is Ruby or Python not suitable for iPhone or Mac app development because of speed, and any compiler that can help boost the speed?
| 6,129,509 |
1 | 1 | 1 | 2 | 2 | 0 | 1.2 | 0 |
My Python backend (Django) has to call into a C++ library to get a result (with the help of the ctypes module).
Is it normal to call a C++ method directly? Or do I need an intermediate thread manager that starts a new thread when the Python script wants a result?
| 0 |
c++,python,django,ctypes,backend
|
2011-05-25T18:54:00.000
| 0 | 6,129,383 |
Basically you have to decide what kind of flow you want. If you prefer synchronous processing you can call your method directly; if you favor asynchronous processing you will need an intermediate solution.
However, be aware that when you call the C++ routine directly from your Django app, the call runs in the execution path triggered by the web request. If the processing takes more time than you want the request to wait, a job management system is the better choice.
In that case I would recommend an asynchronous solution; you could then use polling, or e.g. WebSockets, to wait until the result is ready.
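A minimal sketch of the synchronous route with ctypes, assuming a C++ library compiled with extern "C" wrappers. Note that libcompute.so and its process symbol are illustrative names, not a real library, and a pure-Python fallback keeps the sketch runnable without the compiled library:

```python
import ctypes

def load_backend(path="./libcompute.so"):
    """Try the hypothetical C++ library; fall back to pure Python.

    `libcompute.so` and its `process` symbol are illustrative names
    only -- substitute your own compiled library and exported function.
    """
    try:
        lib = ctypes.CDLL(path)
        lib.process.argtypes = [ctypes.c_char_p]
        lib.process.restype = ctypes.c_int
        return lambda raw: lib.process(raw)
    except OSError:
        # Fallback keeps the sketch runnable when the library is absent.
        return lambda raw: len(raw)

process = load_backend()
print(process(b"payload"))  # 7 with the pure-Python fallback
```

In a Django view, calling process() like this blocks the request until the C++ routine returns, which is exactly the synchronous trade-off described above.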
| 0 | 339 | true | 0 | 1 |
High performance: call C++ methods directly from a Python backend?
| 6,129,485 |
1 | 5 | 0 | 3 | 5 | 1 | 0.119427 | 0 |
I added a print line to a Python script while the script was executing, and now all the text is highlighted in red when I open the file. Opening and closing the file doesn't get rid of it. Opening a second Python file momentarily fixed the problem, but then closing and reopening the file brought the problem back. Now it won't go away at all. Anybody know what could cause this?
| 0 |
python,vim,vi
|
2011-05-25T19:30:00.000
| 0 | 6,129,789 |
Old thread, but hope this helps.
By mistake I did a "/." in my vim screen, which highlighted all lines in red. If I open any other file, the red highlighting stays.
Try searching for some other keyword, say "/word" (it doesn't matter whether the word exists or not). It restores the color.
| 0 | 10,751 | false | 0 | 1 |
vim highlighting everything in red
| 42,373,091 |
1 | 1 | 0 | 4 | 3 | 0 | 1.2 | 0 |
Why would a Comet Server like Tornado be especially prone to memory leaks if written in PHP?
Are there genuine weaknesses particular to PHP for implementing a long polling framework/service like Tornado?
Thanks
| 0 |
php,python,comet,tornado,long-polling
|
2011-05-25T22:13:00.000
| 1 | 6,131,620 |
The gist of it is that PHP was originally written with the intent of having a brand-new process for every request that could simply be thrown away once the request ended, at a time when things like Comet and long polling weren't really on the table.
As such there are quite a few areas, notably the garbage collector, where PHP originally just wasn't made for running over a long period of time, and that didn't matter much because every HTTP request got a brand-new PHP instance.
It has clearly gotten better in recent years, but I still wouldn't use it for creating that sort of long-lived application.
| 0 | 254 | true | 0 | 1 |
Memory Leaks Comet Server in PHP
| 6,132,278 |
1 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 0 |
First off, this isn't a homework assignment! :p What I want to do is this:
Given a data file (be it text or numbers) saved on, say, the desktop, I want to be able to search that file, pull out only the data I want, and print it to the screen. I may want to do other stuff with it too, but I have no idea what the options are.
Also, would Python or C++ be more appropriate? I'm not very familiar with Python and it's been years since I've picked up C++, but I've heard that Python is more efficient, and although this program's efficiency may or may not be a big deal, I've also heard Python is much easier to understand.
Examples,Code, Templates(<-- would be awesome)
Thanks all!
| 0 |
c++,python,search
|
2011-05-26T23:50:00.000
| 0 | 6,146,359 |
C++ will be faster (maybe, if you write it well) but harder, though easier to start with since you already know it.
Python will take some time to get used to, and it will probably run a wee bit slower, but it will be easier once you learn the language.
This is a very easy problem that has been solved numerous times, so the language you pick really doesn't matter.
If you want a GUI, then look at GUI libraries.
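For the Python route, a minimal sketch of the search itself might look like this (the file contents and pattern here are made up for illustration):

```python
import re
import tempfile

def grep_file(path, pattern):
    """Yield the lines of `path` that match `pattern` (a regex)."""
    regex = re.compile(pattern)
    with open(path) as handle:
        for line in handle:
            if regex.search(line):
                yield line.rstrip("\n")

# Demonstration with a small throwaway file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("alpha\nvalue 42\nbeta\nvalue 7\n")
    path = f.name

print(list(grep_file(path, r"\d+")))  # ['value 42', 'value 7']
```

Swapping the print for further processing (totals, plots, reformatting) is where the "other stuff" would go.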
| 0 | 407 | false | 0 | 1 |
Searching a data file: coding in Python vs C++
| 6,146,444 |
1 | 1 | 0 | 1 | 4 | 1 | 1.2 | 0 |
Unfortunately, I'm working with an extremely large corpus which is spread across hundreds of .gz files -- 24 gigabytes (packed) worth, in fact. Python is really my native language (hah), but I was wondering if I haven't run up against a problem that will necessitate learning a "faster" language?
Each .gz file contains a single document in plain text, is about 56MB gzipped, and about 210MB unzipped.
On each line is an n-gram (bigram, trigram, quadrigram, etc.) and, to the right, a frequency count. I need to basically create a file that stores the substring frequencies for each quadrigram alongside its whole-string frequency count (i.e., 4 unigram frequencies, 3 bigram frequencies, and 2 trigram frequencies for a total of 10 data points). Each type of n-gram has its own directory (e.g., all bigrams appear in their own set of 33 .gz files).
I know an easy, brute force solution, and which module to import to work with gzipped files in Python, but I was wondering if there was something that wouldn't take me weeks of CPU time? Any advice on speeding this process up, however slightly, would be much appreciated!
| 0 |
python,gzip,large-files,large-data-volumes,corpus
|
2011-05-27T03:19:00.000
| 0 | 6,147,504 |
It would help to have an example of a few lines and expected output. But from what I understand, here are some ideas.
You certainly don't want to process all files every time you process a single file or, worse, a single 4-gram. Ideally you'd go through each file once. So my first suggestion is to maintain an intermediate list of frequencies (these sets of 10 data points), where they first only take into account one file. Then when you process the second file, you'll update all the frequencies for items that you encounter (and presumably add new items). Then you'll keep going like this, increasing frequencies as you find more matching n-grams. At the end write everything out.
More specifically, at each iteration I would read a new input file into memory as a map of string to number, where the string is, say, a space-separated n-gram, and the number is its frequency. I would then process the intermediate file from the last iteration, which would contain your expected output (with incomplete values), e.g. "a b c d : 10 20 30 40 5 4 3 2 1 1" (kind of guessing the output you are looking for here). For each line, I'd look up in the map all the sub-grams in my map, update the count, and write out the updated line to the new output file. That one will be used in the next iteration, until I've processed all input files.
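A rough sketch of that per-file merge step in Python, assuming tab-separated "n-gram<TAB>count" lines (adjust the parsing to the real corpus format, which may differ):

```python
import gzip
import os
import tempfile
from collections import defaultdict

def update_counts(counts, path):
    """Merge the n-gram counts from one gzipped file into `counts`.

    Each line is assumed to look like "w1 w2<TAB>frequency"; this
    format is an assumption for illustration.
    """
    with gzip.open(path, "rt", encoding="utf-8") as handle:
        for line in handle:
            ngram, freq = line.rsplit("\t", 1)
            counts[ngram] += int(freq)

# Tiny demonstration with a synthetic file instead of the real corpus.
path = os.path.join(tempfile.mkdtemp(), "bigrams.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write("ice cream\t3\nice cream\t2\nhot dog\t5\n")

counts = defaultdict(int)
update_counts(counts, path)
print(counts["ice cream"])  # 5
```

Calling update_counts once per input file gives the single-pass-per-file behaviour described above; only the accumulated dict stays in memory between files.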
| 0 | 433 | true | 0 | 1 |
Python - Search for items in hundreds of large, gzipped files
| 6,148,016 |
1 | 2 | 1 | 1 | 2 | 0 | 1.2 | 0 |
I need to send a binary file (or a Bitmap object) from Android to a PC, which runs a Python script to receive it.
Has anybody been in the same situation, or any hints on what best practice could be here? I guess the options are sockets or a web service (besides workarounds with samba etc.), but what is the easiest and fastest to implement?
Cheers,
Marc
| 0 |
java,android,python,networking,binaryfiles
|
2011-05-27T11:03:00.000
| 0 | 6,151,335 |
Just doing an HTTP POST containing the data to a web server should do the job. This way you have a myriad of frameworks to choose from, which saves you from doing the dirty work of pushing bits back and forth. Sure, there is some overhead, but unless you have specific reasons to avoid that (which were not mentioned in the question), I think this is the most straightforward approach.
Additionally, when the application grows, you can extend this to a full REST style interface later on.
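On the PC side, a minimal sketch of such a receiver using only the standard library might look like this (the in-process urllib call stands in for the Android client, which would POST to http://<pc-ip>:<port>/):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class UploadHandler(BaseHTTPRequestHandler):
    """Accept a binary body via POST and keep it in memory."""
    received = b""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        UploadHandler.received = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve on an ephemeral port in a background thread for demonstration.
server = HTTPServer(("127.0.0.1", 0), UploadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the Android side POSTing the file bytes.
url = "http://127.0.0.1:%d/" % server.server_port
reply = urlopen(url, data=b"binary-file-bytes").read()
print(reply)  # b'ok'
server.shutdown()
```

A real deployment would write the body to disk and likely use a framework, but the wire-level idea is the same.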
| 0 | 490 | true | 0 | 1 |
How to best send binary file from Android to Python script on PC?
| 6,151,433 |
1 | 1 | 0 | 2 | 1 | 0 | 0.379949 | 0 |
I am trying to send SMS using Python and a GSM modem connected to my local machine. I have successfully sent the SMS using AT commands, but now I have a technical problem I need help with: the server for my website is located in the United States while I live in Australia, so if I want to use the SMS feature on the actual site I would have to fly all the way to the USA and attach the modem to my server. I just want to know if there is a simple solution to my problem, something like passing a request from the server to my machine and sending the SMS from my local machine.
Thanks in Advance.
| 0 |
python,gsm,at-command
|
2011-05-27T15:26:00.000
| 0 | 6,154,409 |
A practical solution would be to connect to an SMS gateway service instead of implementing your own. Nowadays they are really cheap or even free.
| 0 | 1,660 | false | 0 | 1 |
Sending SMS using Python
| 6,154,435 |
1 | 1 | 0 | 8 | 9 | 0 | 1.2 | 0 |
What is the difference in how they are handled?
Specifically, why is it common to find Python used in production-level, long-lived applications like web servers while PHP isn't, given their similar efficiency levels?
| 0 |
php,python,memory-management,garbage-collection,webserver
|
2011-05-27T21:46:00.000
| 0 | 6,158,033 |
PHP was designed as a hypertext scripting language. Every process was designed to end after a very short time, so memory management and GC basically didn't matter.
However, the ease and popularity of PHP have led to its use in long-lived programs such as daemons, extensive calculations, socket servers, etc.
PHP 5.3 introduced a lot of features and fixes that made it suitable for such applications; however, in my opinion memory management was of lower significance in that matter.
PHP's memory management is quite good now, but as in every programming language that I know of, you can produce memory leaks.
You still cannot code in the same style in which you can code Java or Python applications. A lot of PHP programs will probably show severe problems where Java/Python do not.
You can characterize this as "worse", but I would not. PHP is just a different set of tools that you have to handle differently.
The company I work at has a lot of system programs and daemons written in PHP that run like a charm.
I think the biggest caveat for PHP when it comes to, as you describe, "production-level long-lived applications" is its multi-processing and threading ability (the second is basically nonexistent).
Of course there is the possibility to fork processes, access shared memory, do inter-process communication, and have message queues and such. But Python is far ahead on that matter, because it was designed from the bottom up for jobs like that.
| 0 | 1,270 | true | 0 | 1 |
How is memory management in PHP different from that in Python?
| 6,208,958 |
1 | 4 | 0 | 1 | 7 | 1 | 0.049958 | 0 |
Are there any methods in (C)Python to inspect the process' current memory usage? In particular, I'd like to determine the high-water mark of memory usage in a testing script, but if necessary I don't mind periodically checking memory usage and calculating the high water mark for myself.
EDIT: I'm looking for either a pure-python solution, or something which works on OS X.
| 0 |
python,macos,memory-management
|
2011-05-28T01:13:00.000
| 0 | 6,159,053 |
You can use os.getpid() to get your current PID and then use that PID to find the process in the output of a subprocess calling top/free/ps etc.
I'm not an OS X/BSD expert, so I'm unsure which flags to pass to which command to get per-process memory usage.
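For the high-water mark specifically, there is also a pure-stdlib route on Unix-like systems (including OS X) that avoids shelling out to ps: the resource module reports the peak resident set size. Note the unit differs by platform:

```python
import resource

# ru_maxrss is the peak ("high water mark") resident set size of this
# process: kilobytes on Linux, bytes on OS X.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(peak > 0)  # True
```

Sampling this periodically during a test run is unnecessary, since the kernel already tracks the maximum for you.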
| 0 | 7,123 | false | 0 | 1 |
Is it possible to get a "high water mark" of memory usage from Python?
| 6,159,207 |
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 1 |
I've been thinking about how to implement mirror picking in Python. When I call the service API I get a response with an IP address. Now I want to take that address and check whether it's close to me or not; if not, retry. I thought about pinging, as I have only ~1 ms ping to IP addresses hosted in the same data center, but a much higher ping across the world. I looked up some examples of how to implement pinging in Python, but it seems fairly complicated and feels a bit hackish (like checking whether the ping to the target IP is under 10 ms). There may be better ways to tackle this issue that I am not aware of.
What are your ideas? I can't download a test file each time to test speed. GeoIP or ping? Or something else?
| 0 |
python,ping,geo,geoip
|
2011-05-28T01:47:00.000
| 0 | 6,159,173 |
Call all the service API instances and use whichever responds quickest.
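One hedged way to implement "whichever responds quickest" without downloading a test file is to time a bare TCP connect to each candidate; this approximates latency only, not bandwidth, and the mirror list below is hypothetical:

```python
import socket
import time

def connect_time(host, port=80, timeout=3.0):
    """Seconds to open a TCP connection; a rough proximity proxy."""
    start = time.monotonic()
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable mirrors sort last

# Hypothetical list of addresses returned by the service API.
mirrors = ["127.0.0.1"]
fastest = min(mirrors, key=connect_time)
print(fastest)  # 127.0.0.1
```

Running the probes concurrently (threads or asyncio) keeps the total wait near the slowest single timeout rather than the sum.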
| 0 | 881 | false | 0 | 1 |
How to choose closest/fastest mirror in Python?
| 6,159,184 |
2 | 4 | 0 | -2 | 15 | 1 | -0.099668 | 0 |
I have a pure C module for Python and I'd like to be able to invoke it using the python -m modulename approach. This works fine with modules implemented in Python and one obvious workaround is to add an extra file for that purpose. However I really want to keep things to my one single distributed binary and not add a second file just for this workaround.
I don't care how hacky the solution is.
If you do try to use a C module with -m then you get an error message No code object available for <modulename>.
| 0 |
python
|
2011-05-29T03:43:00.000
| 0 | 6,165,824 |
I think you need to start by making a separate file in Python and getting the -m option to work. Then turn that Python file into a code object and incorporate it into your binary in such a way that it continues to work.
Look up setuptools on PyPI, download the .egg, and take a look at the file. You will see that the first few bytes contain a Python script, followed by a .zip file bytestream. Something similar may work for you.
| 0 | 3,104 | false | 0 | 1 |
Getting python -m module to work for a module implemented in C
| 6,196,366 |
2 | 4 | 0 | 0 | 15 | 1 | 0 | 0 |
I have a pure C module for Python and I'd like to be able to invoke it using the python -m modulename approach. This works fine with modules implemented in Python and one obvious workaround is to add an extra file for that purpose. However I really want to keep things to my one single distributed binary and not add a second file just for this workaround.
I don't care how hacky the solution is.
If you do try to use a C module with -m then you get an error message No code object available for <modulename>.
| 0 |
python
|
2011-05-29T03:43:00.000
| 0 | 6,165,824 |
Does your requirement of single distributed binary allow for the use of an egg? If so, you could package your module with a __main__.py with your calling code and the usual __init__.py...
If you're really adamant, maybe you could extend pkgutil.ImpLoader.get_code to return something for C modules (e.g., maybe a special __code__ function). To do that, I think you're going to have to actually change it in the Python source. Even then, pkgutil uses exec to execute the code block, so it would have to be Python code anyway.
TL;DR: I think you're euchred. While Python modules have code at the global level that runs at import time, C modules don't; they're mostly just a dict namespace. Thus, running a C module doesn't really make sense from a conceptual standpoint. You need some real Python code to direct the action.
| 0 | 3,104 | false | 0 | 1 |
Getting python -m module to work for a module implemented in C
| 7,943,332 |
1 | 1 | 0 | 3 | 4 | 1 | 1.2 | 0 |
What interpreters have been made using the PyPy Translator Toolchain besides PyPy itself?
| 0 |
interpreter,pypy,rpython
|
2011-05-29T18:54:00.000
| 0 | 6,169,726 |
The two most complete (besides the Python one) are Javascript and Prolog, but there are also Squeak, Scheme, Brainfuck, and Haskell in various levels of completeness.
| 0 | 205 | true | 0 | 1 |
What interpreters have been made using the PyPy Translator Toolchain?
| 6,170,275 |
1 | 2 | 0 | 9 | 13 | 1 | 1.2 | 0 |
I am trying to work with PIL in my project but PyDev can't seem to find it. First of all, I can see it when I enter the Python shell: I can import it and I see it in sys.path. Second, I added it to the PYTHONPATH in Eclipse.
I restarted Eclipse, but still, when I try to do "from PIL import Image" I get "unresolved import".
Can anyone please help me here? All other packages I've used until now worked great the same way, and I really need to use PIL.
| 0 |
python,django,eclipse,python-imaging-library,pydev
|
2011-05-30T02:20:00.000
| 0 | 6,171,749 |
Had the same problem here.
Got it resolved by adding /usr/share/pyshared to the Libraries tab in window->preferences->pydev->Interpreter - Python.
A lot of /usr/lib/python* paths with the compiled libraries (the C stuff with Python bindings) were included already, but not the /usr/share... parts with the source.
| 0 | 7,037 | true | 0 | 1 |
How do I add PIL to PyDev in Eclipse, so I can import it and use it in my project?
| 6,486,750 |
1 | 2 | 0 | 0 | 4 | 1 | 0 | 0 |
Does anyone know what the complexity of the os.path.exists function is in python with a ext4 filesystem?
| 0 |
python,linux,complexity-theory,ext4
|
2011-05-30T12:55:00.000
| 1 | 6,176,547 |
Chances are good that the complexity is O(n) with n being the depth in the filesystem (e.g. / would have n=1, /something n=2, ...)
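A quick, informal way to see the depth dependence is to time os.path.exists on paths of different depths; results vary with dentry caching and hardware, so treat this as a sketch, not a benchmark:

```python
import os
import tempfile
import timeit

# Build a nested directory tree and compare os.path.exists at two
# depths; each extra path component is another lookup to resolve.
root = tempfile.mkdtemp()
deep = os.path.join(root, *(["d"] * 20))
os.makedirs(deep)

shallow_t = timeit.timeit(lambda: os.path.exists(root), number=10000)
deep_t = timeit.timeit(lambda: os.path.exists(deep), number=10000)
print(shallow_t < deep_t)  # usually True: deeper paths cost more
```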
| 0 | 810 | false | 0 | 1 |
python: complexity of os.path.exists with a ext4 filesystem?
| 6,176,569 |
1 | 3 | 0 | 1 | 11 | 0 | 0.066568 | 0 |
After uploading a binary distribution of my Python C extension with python setup.py bdist upload, easy_install [my-package-name] fails on "error: Couldn't find a setup script in /tmp/easy_install/package-name-etc-etc".
What am I doing wrong?
| 0 |
python,binary,easy-install,python-c-extension
|
2011-05-30T16:29:00.000
| 1 | 6,178,664 |
Sometimes you don't actually intend to easy_install the directory, which makes easy_install look for a setup.py file inside it.
In simple words, you may be doing easy_install xyz/
while what you really want is easy_install xyz
| 0 | 23,071 | false | 0 | 1 |
easy_install fails on error "Couldn't find setup script" after binary upload?
| 39,579,939 |
1 | 3 | 0 | 3 | 26 | 1 | 0.197375 | 0 |
In the Python unittest framework, is there a way to pass a unit test if an exception wasn't raised, and fail with an AssertRaise otherwise?
| 0 |
python,unit-testing
|
2011-05-30T23:23:00.000
| 0 | 6,181,555 |
Simply call your functionality, e.g. do_something(). If an unhandled exception gets raised, the test automatically fails! There is really no reason to do anything else. This is also the reason why assertDoesNotRaise() does not exist.
Credit: comment by Sven
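A minimal sketch of that idea (do_something is a hypothetical stand-in for the code under test):

```python
import unittest

def do_something():
    """Stand-in for the code under test; it raises nothing."""
    return 42

class TestNoException(unittest.TestCase):
    def test_runs_cleanly(self):
        # No assertion needed: any unhandled exception fails the test.
        do_something()

# Run the case programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNoException)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

If do_something() raised, the same test would be reported as an error, which is exactly the pass/fail behaviour asked for.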
| 0 | 31,748 | false | 0 | 1 |
Pass a Python unittest if an exception isn't raised
| 45,474,275 |
1 | 6 | 0 | 2 | 9 | 0 | 0.066568 | 0 |
I want to choose an embedded scripting language to use with C++. It should be able to connect to a database such as Oracle. My host application is a server application that will pass raw data to the script; the script will parse it, apply some specific logic, and update the database. Then the script returns raw data as the result.
Can you help me choose?
Thanks
| 0 |
c++,python,ruby,scripting,lua
|
2011-05-31T14:05:00.000
| 0 | 6,188,798 |
Tcl would be another option for an easy-to-embed scripting language.
Personally, I'd go with the scripting language that you, and/or whoever will be using it, are most familiar with already. Especially if end users will be able to run custom scripts, you will need to know what languages, if any, they are familiar with in their business domain, e.g. CAD/CAM people may know Tcl, gaming people may know Lua, etc.
| 0 | 10,847 | false | 0 | 1 |
Choosing embedded scripting language for C++
| 6,190,078 |
2 | 5 | 0 | 2 | 6 | 1 | 0.07983 | 0 |
The simple case study is:
Ant life simulation
I'm creating an OO structure that see a Class for the Anthill, a Class for the Ant and a Class for the whole simulator.
Now I'm brainstorming on "how to" make Ants 'live'...
I know that there are projects like this just started but I'm brainstorming, I'm not looking for a just-ready-to-eat-dish.
Honestly, I have to run some tests to understand what is better; AFAIK threads in Python use less memory than processes.
What the ants have to do when you start the simulation is just: move around in random directions; if they find food, eat it or bring it to the anthill; if they find another ant from another anthill that is transporting food, attack it, collect the food, and so on. That means I have to share information across ants and across the whole environment.
so I rewrite:
It's better to create a Process/Thread for each Ant or something else?
EDIT:
Because of my question "what is better", I've upvoted all the smart answers I received, and I also put a comment on them.
After my tests, I'll accept the best answer.
| 0 |
python,multithreading,resources,simulation,multiprocess
|
2011-05-31T14:49:00.000
| 0 | 6,189,398 |
I wrote an ant simulation (for finding a good TSP solution) and I wouldn't recommend a thread solution. I use a loop to calculate the next step for each ant, so my ants do not really behave concurrently (but they synchronize after each step).
I don't see any reason to model those ants with threads. It's no advantage in terms of run-time behavior, nor is it an advantage in terms of elegance of the code!
It might, admittedly, be slightly more realistic to use threads since real ants are concurrent, but for simulation purposes this is IMHO negligible.
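A sketch of that loop-per-tick approach, with a deliberately trivial Ant class standing in for the real behaviour:

```python
import random

class Ant:
    """Minimal ant: a grid position advanced one step per tick."""
    def __init__(self):
        self.x = self.y = 0

    def step(self):
        # Random walk; each coordinate moves by at most 1 per tick.
        self.x += random.choice((-1, 0, 1))
        self.y += random.choice((-1, 0, 1))

# One loop updates every ant, and the colony implicitly synchronizes
# after each tick -- no thread or process per ant required.
ants = [Ant() for _ in range(1000)]
for tick in range(100):
    for ant in ants:
        ant.step()

print(max(abs(a.x) for a in ants) <= 100)  # True: 100 ticks, step size 1
```

Food, anthills, and fighting would become extra state checked inside step(), still all in one thread.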
| 0 | 1,030 | false | 1 | 1 |
Ant simulation: it's better to create a Process/Thread for each Ant or something else?
| 6,189,789 |
2 | 5 | 0 | 1 | 6 | 1 | 0.039979 | 0 |
The simple case study is:
Ant life simulation
I'm creating an OO structure that see a Class for the Anthill, a Class for the Ant and a Class for the whole simulator.
Now I'm brainstorming on "how to" make Ants 'live'...
I know that there are projects like this just started but I'm brainstorming, I'm not looking for a just-ready-to-eat-dish.
Honestly, I have to run some tests to understand what is better; AFAIK threads in Python use less memory than processes.
What the ants have to do when you start the simulation is just: move around in random directions; if they find food, eat it or bring it to the anthill; if they find another ant from another anthill that is transporting food, attack it, collect the food, and so on. That means I have to share information across ants and across the whole environment.
so I rewrite:
It's better to create a Process/Thread for each Ant or something else?
EDIT:
Because of my question "what is better", I've upvoted all the smart answers I received, and I also put a comment on them.
After my tests, I'll accept the best answer.
| 0 |
python,multithreading,resources,simulation,multiprocess
|
2011-05-31T14:49:00.000
| 0 | 6,189,398 |
I agree with @delan - it seems like overkill to allocate a whole thread per Ant, especially if you are looking to scale this to a whole anthill with thousands of the critters running around.
Instead you might consider using a thread to update many ants in a single "cycle". Depending on how you write it - you need to carefully consider what data needs to be shared - you might even be able to use a pool of these threads to scale up your simulation.
Also keep in mind that in CPython the GIL prevents multiple native threads from executing Python bytecode at the same time.
| 0 | 1,030 | false | 1 | 1 |
Ant simulation: it's better to create a Process/Thread for each Ant or something else?
| 6,189,548 |
2 | 4 | 0 | 1 | 3 | 0 | 0.049958 | 0 |
I coded a Python application which was running OK as a cron job. Later I added some libraries (e.g. pynotify and others) because I wanted to be notified with a message describing what is happening, but it seems that cron can't run such an application.
Do you know some alternative way to run this application every five minutes? I'm using Xubuntu.
import gtk, pygtk, os, os.path, pynotify
I can run the application without cron without problems.
Cron seems to run the application, but it won't show the notification message. In /var/log/cron.log there are no errors. The application is executed every minute without problems.
my crontab:
*/1 * * * * /home/xralf/pythonsrc/app
thank you
| 0 |
python,cron
|
2011-05-31T18:06:00.000
| 1 | 6,191,624 |
If the cron job runs as "you", and if you set the DISPLAY var (export DISPLAY=:0) you should have no issues.
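A hedged sketch of the same idea done inside the script itself, so the crontab line stays unchanged (the pynotify calls are commented out since the library's presence is an assumption here, and ":0" is only the usual default display):

```python
import os

# Cron starts jobs without an X session environment; point the job at
# the desktop display before any GUI library is imported. ":0" is the
# usual first display -- adjust if your session differs.
os.environ.setdefault("DISPLAY", ":0")

# Assumption: pynotify is installed; import it only after DISPLAY is set.
# import pynotify
# pynotify.init("cron-notifier")
# pynotify.Notification("Job done", "the cron task finished").show()

print("DISPLAY" in os.environ)  # True
```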
| 0 | 728 | false | 0 | 1 |
Notification as a cron job
| 6,192,123 |
2 | 4 | 0 | 0 | 3 | 0 | 0 | 0 |
I coded a python application which was running OK as a cron job. Later I added some libraries (e.g. pynotify and other *) because I wanted to be notified with the message describing what is happening, but it seems that cron can't run such an application.
Do you know some alternative how to run this application every five minutes? I'm using Xubuntu.
import gtk, pygtk, os, os.path, pynotify
I can run the application without cron without problems.
Cron seems to run the application but it won't show the notification message. In /var/log/cron.log there are no errors. The application executed every minute without problems.
my crontab:
*/1 * * * * /home/xralf/pythonsrc/app
thank you
| 0 |
python,cron
|
2011-05-31T18:06:00.000
| 1 | 6,191,624 |
I don't see any problem with a cron job using pynotify. What is the error you are getting?
Can you run your Python code separately, to check whether it really works on its own and only fails under cron?
Celery is a distributed job queue & task manager written in Python, but it may be too much for your needs.
Supervisord can also do some sort of cron task if you know that your program will finish within 5 minutes, so you can configure supervisord to restart the task soon after. Neither of them is as easy as a cron job.
| 0 | 728 | false | 0 | 1 |
Notification as a cron job
| 6,191,715 |
2 | 3 | 1 | 0 | 1 | 0 | 1.2 | 0 |
The engine I've been wanting to remake is from a PlayStation 1 game called Final Fantasy Tactics, and the game is basically a 2.5D game I guess you could say. Low-resolution sprites and textures, and 3D maps for battlefields. The plan is to mainly load the graphics from a disc, or .iso (I already know the sectors the graphics need to be read from) and fill in the rest with game logic and graphics routines, and probably load other things from the disc like the map data.
I want this to be a multiplatform project, because I use Linux and would like for more people to join in on the project once I have enough done (and it's easy to get more people through platforms like Windows). I'll be making a website to host the project. Also, none of the graphics will be distributed, they'll have to be loaded from your own disc. I'd rather not have to deal with any legal issues.. At least not soon after the project is hosted on my site.
But anyway, here's my dilemma- I know quite a bit of Java, and some Python, but I'm worried about performance/feature issues if I make this engine using one of these two languages. I chose them due to familiarity and platform independence, but I haven't gotten too into graphics programming yet. I'm very much willing to learn, however, and I've done quite a bit of ASM work on the game- looking at graphics routines and whatnot. What would be the best route to take for a project like this? Oh, and keep in mind I'll eventually want to add higher-resolution textures in an .iso restructuring patch or something.
I'm assuming based on my results on Google that I could go with something like Pygame + OpenGL, JOGL, Pyglet, etc. Any suggestions on an API? Which has plenty of documentation/support for game or graphics programming? Do they have any serious performance hits?
Thank you for your time.
| 0 |
java,python
|
2011-06-01T02:04:00.000
| 0 | 6,195,508 |
I'd recommend going with PySFML and, of course, Python.
If you do your Python programming correctly, and if you really are willing to fiddle with C or ASM Python plugins for faster computations, you shouldn't really have too many performance hits.
| 0 | 213 | true | 1 | 1 |
Game Engine Remake - Trouble Choosing a Language/API (Java or Python)
| 6,195,610 |
2 | 3 | 1 | 0 | 1 | 0 | 0 | 0 |
The engine I've been wanting to remake is from a PlayStation 1 game called Final Fantasy Tactics, and the game is basically a 2.5D game I guess you could say. Low-resolution sprites and textures, and 3D maps for battlefields. The plan is to mainly load the graphics from a disc, or .iso (I already know the sectors the graphics need to be read from) and fill in the rest with game logic and graphics routines, and probably load other things from the disc like the map data.
I want this to be a multiplatform project, because I use Linux and would like for more people to join in on the project once I have enough done (and it's easy to get more people through platforms like Windows). I'll be making a website to host the project. Also, none of the graphics will be distributed, they'll have to be loaded from your own disc. I'd rather not have to deal with any legal issues.. At least not soon after the project is hosted on my site.
But anyway, here's my dilemma- I know quite a bit of Java, and some Python, but I'm worried about performance/feature issues if I make this engine using one of these two languages. I chose them due to familiarity and platform independence, but I haven't gotten too into graphics programming yet. I'm very much willing to learn, however, and I've done quite a bit of ASM work on the game- looking at graphics routines and whatnot. What would be the best route to take for a project like this? Oh, and keep in mind I'll eventually want to add higher-resolution textures in an .iso restructuring patch or something.
I'm assuming based on my results on Google that I could go with something like Pygame + OpenGL, JOGL, Pyglet, etc. Any suggestions on an API? Which has plenty of documentation/support for game or graphics programming? Do they have any serious performance hits?
Thank you for your time.
| 0 |
java,python
|
2011-06-01T02:04:00.000
| 0 | 6,195,508 |
At the end of the day, if you're passionate about the project and committed to getting the most out of the language you choose, the performance difference between Java and Python will be minimal, if not non-existent.
Personally speaking, the biggest challenge is finishing the project once it loses its novelty and initial momentum. I suggest you go with whichever language you're more passionate about and interested in plumbing the depths of, or one that could boost your resume.
Secondly, as you mention you're hoping to attract contributors, you may want to factor that into your decision. I can't comment much here, but have a look at similar projects with lots of activity.
Good luck!
| 0 | 213 | false | 1 | 1 |
Game Engine Remake - Trouble Choosing a Language /API(Java or Python)
| 6,195,870 |
2 | 3 | 0 | 1 | 3 | 1 | 0.066568 | 0 |
I will separate my business logic into a scripting language; it would be either Lua or Python.
My concern is that business code written in a script file is viewable by others.
Because script files are not compiled, the source is open; anyone can see it.
How can I hide it?
I think if I use Python, it would be compiled (.pyo), but Lua looks more suitable for me.
| 0 |
python,lua,compilation,obfuscation
|
2011-06-01T07:36:00.000
| 0 | 6,197,760 |
(For Lua)
Depends on how safe it should be. For keeping out dumb edits you can just change the extension, and configure the path to recognize it anyhow.
For keeping out people who know how to change extensions, you can ship files compiled with luac. Deciphering that already takes considerable effort.
But to be really safe, I guess the only way is to encrypt/sign the code, and perhaps modify the core such that it will only run files whose signature checks out OK, or which can be decrypted.
| 0 | 1,177 | false | 0 | 1 |
Compiling script files Lua or Python
| 6,198,605 |
2 | 3 | 0 | 1 | 3 | 1 | 0.066568 | 0 |
I will separate my business logic into a scripting language; it would be either Lua or Python.
My concern is that business code written in a script file is viewable by others.
Because script files are not compiled, the source is open; anyone can see it.
How can I hide it?
I think if I use Python, it would be compiled (.pyo), but Lua looks more suitable for me.
| 0 |
python,lua,compilation,obfuscation
|
2011-06-01T07:36:00.000
| 0 | 6,197,760 |
You will not be able to hide this easily. You can encrypt and decipher on the fly, but the problem is that people will be able to look at your process memory and see the code clear as day. If you want to prevent people from changing the Lua code, you can create a hash against which the text file is checked on each run.
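A minimal sketch of the tamper-check idea from this answer, using Python's hashlib (the file name and digest handling are illustrative assumptions, not a hardened scheme):

```python
import hashlib

def file_sha256(path):
    # Hash the script file so tampering can be detected at startup.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify(path, expected_digest):
    # Compare the current hash against the digest shipped with the app.
    return file_sha256(path) == expected_digest
```

Note that this only detects edits; since the expected digest lives somewhere in the program too, a determined attacker can still patch the check itself.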
| 0 | 1,177 | false | 0 | 1 |
Compiling script files Lua or Python
| 6,197,862 |
1 | 3 | 0 | 0 | 1 | 1 | 0 | 0 |
I am doing various projects across different computers and servers, in different languages like PHP, Python, Java, etc.
On every computer I have to install/download various supporting libraries, like JavaScript libraries for PHP, JAR files for Java, and many Python modules.
Is there a way to make an online folder on a server containing only the libraries and then automatically sync them across different computers? There may be some solution out there for this, but I don't know it.
For Java and PHP there is no need to install them, but I don't know whether Python modules or libraries work this way (like South, PIL, matplotlib, etc.).
Is there anything that can help me with this?
| 0 |
java,php,python,linux,sync
|
2011-06-01T09:23:00.000
| 0 | 6,198,941 |
For Java projects, you could give Maven a try, and configure your own repository on your server. I don't know if it can be used for the other languages, however.
| 0 | 229 | false | 1 | 1 |
How can keep my libraries , modules or jar files Synced across diff projects
| 6,199,290 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 |
Well, I do want to make a web app in PHP where the user could create an account and log in, then download a desktop app made in Python, log in there too with the username and password from the web app, and then have it run in the tray. The purpose of this project is none other than fun and practice, but I do have some problems. How could I link a web app to a desktop application? The desktop app should gather information about the user's system and hardware/memory usage (like the Windows rating) and then send it to the web app to be displayed in the user's panel. Any ideas? Thanks
| 0 |
php,python
|
2011-06-01T11:55:00.000
| 0 | 6,200,707 |
There are a million ways to do this. I suggest you write the local data gathering first, to know the amount and format of the data (and because getting Windows hardware information via Python seems to be the hardest part).
Then write the web page login, to see if you can get HTTPS or have to take care of security yourself. With these constraints it is much easier to make a recommendation.
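As a starting point for the "gather and send" half, here is a rough sketch of a Python client; the endpoint URL and JSON payload shape are assumptions (the real PHP app would define its own API), and real hardware/memory stats would need extra help such as psutil or WMI:

```python
import json
import platform
import urllib.request

def gather_info():
    # Minimal, portable system facts; only a sketch of what could be sent.
    return {
        "os": platform.system(),
        "machine": platform.machine(),
        "python": platform.python_version(),
    }

def post_info(url, info):
    # POST the gathered facts to the (hypothetical) web app as JSON.
    data = json.dumps(info).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    return urllib.request.urlopen(req)
```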
| 0 | 227 | false | 1 | 1 |
Python desktop app linked to PHP web app?
| 6,200,853 |
1 | 3 | 0 | 1 | 16 | 1 | 0.066568 | 0 |
I have a package I am developing. This package is already installed as an egg file parked in the site-packages directory, egg path added to easy-install.pth.
I now realized I have a bug in the package, so I invoked python setup.py develop to hook up the development dir. The path of the source dir is correctly added to easy-install.pth, but it's added last, meaning that the already installed egg will be chosen and imported first when I issue import mypackage.
How can I have the development hook override the already installed package ?
Alternatively, if I am doing it wrong, please explain the proper strategy to solve this use case.
| 0 |
python,distutils
|
2011-06-01T12:57:00.000
| 0 | 6,201,503 |
I would use a virtual environment, that is, an isolated Python installation that is not affected by distributions installed system-wide. See virtualenv and virtualenvwrapper on PyPI.
| 0 | 6,964 | false | 0 | 1 |
python setup.py develop to override installed version
| 9,918,482 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 |
If I schedule print "Hello World!"; to run every hour with crontab, where will Hello World! be printed? Is there a log file?
If I do it with Java or C instead of Python, will it make any difference?
Thanks!
| 0 |
python,crontab
|
2011-06-01T22:02:00.000
| 1 | 6,208,274 |
They will be sent to the email address defined at the top of the crontab, or to the crontab's owner by default. See the crontab(5) man page for more details.
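If you'd rather not rely on cron's mail delivery, a common alternative is to have the script write its own output to a log file; a minimal sketch (the log path is a hypothetical choice, not anything cron requires):

```python
import logging

def setup_cron_logging(path):
    # Send this script's output to a file instead of relying on cron's mail.
    logging.basicConfig(
        filename=path,
        level=logging.INFO,
        format="%(asctime)s %(message)s",
        force=True,  # reset any handlers configured earlier
    )

setup_cron_logging("/tmp/hello_cron.log")  # hypothetical log path
logging.info("Hello World!")
```

The same applies to Java or C programs run from cron: anything they print goes to the mail pipe unless redirected in the crontab entry or handled by the program itself.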
| 0 | 109 | true | 0 | 1 |
Where do scheduled python programs "print"?
| 6,208,294 |
1 | 1 | 0 | 1 | 5 | 0 | 1.2 | 0 |
My (rather small) company develops a popular Windows application, but one thing we've always struggled with is testing - it frequently is only tested by the developers on a system similar to the one they developed it on, and when an update is pushed out to customers, there is a segment of our base that experiences issues due to some weird functionality with a Windows patch, or in the case of certain paranoid antivirus applications (I'm looking at you, Comodo and Kaspersky!), they will false-positive on our app.
We do manual testing on what 70% of our users use, but it's slow and painful, and sometimes isn't as complete as it should be. Management keeps insisting that we need to do better, but they keep punting on the issue when it comes time to release (testing will take HOW LONG? Just push it out and we'll issue a patch to customers who experience issues!).
I'd like to design a better system of automated testing using VMs, but could use some ideas on how to implement it, or if there's a COTS product out there, any suggestions would be great. I'm hacking a Python script together that "runs" every feature of our product, but I'm not sure how to go about testing if we get a Windows crash (besides just checking to see if it's still in the process list), or worse yet, if Comodo has flagged it for some stupid reason.
To best simulate the test environment, I'm trying to keep the VM as "pure" as possible and not load a lot of crap on it outside of the OS and the antivirus, and some common apps (Acrobat Reader, Firefox etc).
Any ideas would be most appreciated!
| 0 |
python,testing,automation,functional-testing,cots
|
2011-06-01T22:14:00.000
| 1 | 6,208,385 |
Interesting problem. One thing to avoid is using the antivirus APIs to check to see if your application triggers them. You want a real live deployment of your application, on the expected operating system, with a real live AV install monitoring it. That way you'll trigger the heuristics monitoring as well as the simple "does this code match that checksum" that the API works with.
You haven't told us what your application is written in, but if your test suite for your application actually exercises portions of the application, rather than testing single code paths, that may be a good start. Ideally, your integration test suite is the same test suite you use to check for problems on your deploy targets. Your integration testing should verify the input AND the output for each test in a live environment, which SHOULD catch crashes and the like. Also, don't forget to check for things that take much longer than they should, that's an unfortunately common failure mode. Most importantly, your test suite needs to be easy enough to write, change, and improve that it actually stays in sync with the product. Tests that don't test everything are useless, and tests that aren't run are even worse. If we had more information about how your program works, we could give better advice about how to automate that.
You'll probably want a suite of VM images across your intended deploy targets, in various states of patch (and unpatch). For some applications, you'll need a separate VM for each variant of IE, since that changes other aspects of the system. Be very careful about which combination of things you have in each VM. Don't test more than one AV at a time. Update the AVs in your snapshots before running your tests. If you have a large enough combination software in your images, you might need to automate image creation - get a base system build, update to the latest patch level, then script the installation of AV and other application combinations.
Yes, maintaining this farm of VMs will be a pain, but if you script the deploy of your application, and have good snapshots and a plan for patching and updating the snapshots, the actual test suite itself shouldn't take all that long to run given appropriate hardware. You'll need to investigate the VM solutions, but I'd probably start with VMWare.
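One small building block for the harness described above — catching hangs as well as crashes when the script "runs" each feature — is to drive each scripted step under a timeout and treat both nonzero exit codes and timeouts as failures. A sketch (the command lines themselves are whatever your feature script already invokes):

```python
import subprocess

def run_feature(cmd, timeout_s):
    # Run one scripted feature. A nonzero exit code means a crash/failure;
    # a timeout (the hang case) is reported separately.
    try:
        proc = subprocess.run(cmd, timeout=timeout_s)
        return proc.returncode, False
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child before re-raising on timeout.
        return None, True
```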
| 0 | 1,381 | true | 0 | 1 |
How can I automate antivirus/WSUS patch testing of my Windows driver and binary?
| 6,208,551 |
1 | 1 | 0 | 2 | 4 | 0 | 1.2 | 0 |
I have Jenkins running a python script that makes some SVN calls, my problem is that Jenkins tries to run this script as SYSTEM user which doesn't seem to have permission to access the SVN. It prompts me for a password for 'SYSTEM' upon my svn call.
If I run the python script by itself, I have no problems accessing the repository. Is there a way to have Jenkins run its Windows batch command as a non-SYSTEM user? I would rather not hardcode the SVN username and password in my script.
Edit: I found a way to change the user Jenkins runs under, it is accessed through:
Start > Control Panel > Administrative Tools > Services > Right Click, Properties for jenkins > Log On.
| 0 |
python,svn,windows-7,credentials,jenkins
|
2011-06-01T23:34:00.000
| 1 | 6,208,922 |
Create a new Jenkins job, and use Subversion as the revision control system. Put in the URL of the Subversion repository you want to manipulate in your Python script. Under the URL will appear a link to let you set the login. Click the link and log in.
Once you're done, you can delete the job. The whole purpose was to allow Jenkins to set up Subversion so that user can log in for that repository URL.
| 0 | 1,833 | true | 0 | 1 |
SVN having credential problems with Jenkins running as SYSTEM
| 6,209,784 |
1 | 4 | 1 | 0 | 3 | 0 | 0 | 0 |
I want to develop an application targeting both android and iphone. I guess they use Java and Objective-C. Can I use single language like Python? Is it the best route? Will I lose performance, features, etc. by using Python. Are there any limitations that I will run into?
| 0 |
python,android,iphone,beeware
|
2011-06-02T15:47:00.000
| 0 | 6,216,890 |
PhoneGap is one of the ways you can target both iPhone and Android, through JavaScript and HTML.
| 0 | 1,256 | false | 1 | 1 |
target both android and iphone with Python
| 6,217,111 |
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 |
I have a ipy script that can be called either from an embedded console in a larger application or directly from the command line and I'm looking for a quick way to determine at run time which has occurred without having to pass an argument to differentiate the events.
Additionally the script has to run on both mono/linux and .net/windows.
Thanks in advance for any assistance.
| 0 |
.net,mono,ironpython
|
2011-06-02T16:49:00.000
| 1 | 6,217,545 |
You could use System.AppDomain.CurrentDomain.GetAssemblies() (assuming you don't use AppDomain isolation, of course) and see if it contains an assembly that would only be present when your application is running.
| 0 | 112 | true | 0 | 1 |
How to detect in Iron Python what the script is being called from?
| 6,218,295 |
1 | 3 | 0 | 40 | 23 | 0 | 1.2 | 1 |
I've setup an Amazon EC2 server. I have a Python script that is supposed to download large amounts of data from the web onto the server. I can run the script from the terminal through ssh, however very often I loose the ssh connection. When I loose the connection, the script stops.
Is there a method where I tell the script to run from terminal and when I disconnect, the script is still running on the server?
| 0 |
python,amazon-ec2
|
2011-06-03T20:53:00.000
| 1 | 6,232,564 |
You have a few options.
You can add your script to cron to be run regularly.
You can run your script manually, and detach+background it using nohup.
You can run a tool such as GNU Screen, and detach your terminal and log out, only to continue where you left off later. I use this a lot.
For example:
Log in to your machine, run: screen.
Start your script and either just close your terminal or properly detach your session with: Ctrl+A, D, D.
Disconnect from your terminal.
Reconnect at some later time, and run screen -rD. You should see your stuff just as you left it.
You can also add your script to /etc/rc.d/ to be invoked on boot and always be running.
| 0 | 15,157 | true | 0 | 1 |
How to continuously run a Python script on an EC2 server?
| 6,232,612 |
1 | 1 | 0 | 4 | 0 | 0 | 1.2 | 1 |
I tried using mechanize to see the URL of the image, but it's a dynamic page generating a different image each time. I was wondering if there was any way to "capture" this image for viewing/saving.
Thanks!
| 0 |
python
|
2011-06-03T21:17:00.000
| 0 | 6,232,780 |
The only way to save the image would be to make a single call to the CAPTCHA URL programmatically, save the result, and then present that saved result to the user. The whole point of CAPTCHA is that each request generates a unique/different response/image.
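A sketch of that fetch-once-and-save step in modern Python (the question's era used mechanize/urllib2, but the idea is identical; the URL and destination path are placeholders):

```python
import urllib.request

def fetch_captcha(url, dest_path):
    # Fetch the dynamically generated image exactly once and persist it;
    # every later view must use this saved copy, not the live URL.
    with urllib.request.urlopen(url) as resp, open(dest_path, "wb") as out:
        out.write(resp.read())
    return dest_path
```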
| 0 | 540 | true | 1 | 1 |
Capturing CAPTCHA image with Python
| 6,232,816 |
4 | 5 | 0 | 0 | 6 | 1 | 0 | 0 |
I have a list of objects in python that I would regularly check and destroy some of them - those which haven't been accessed lately (i.e. no method was called).
I can maintain the last time accessed and update it in every method, but is there any more elegant way to achieve this?
| 0 |
python,class,object,tracking
|
2011-06-05T06:55:00.000
| 0 | 6,241,448 |
What is the scope of your objects? Can you lock down where they are stored and accessed? If so, you should consider creating some kind of special container that will timestamp when the object was last used or accessed. Access to the objects would be tightly controlled by a function which could time-stamp last access time.
| 0 | 565 | false | 0 | 1 |
What's the most elegant way of keeping track of the last time a python object is accessed?
| 6,241,493 |
4 | 5 | 0 | 2 | 6 | 1 | 0.07983 | 0 |
I have a list of objects in python that I would regularly check and destroy some of them - those which haven't been accessed lately (i.e. no method was called).
I can maintain the last time accessed and update it in every method, but is there any more elegant way to achieve this?
| 0 |
python,class,object,tracking
|
2011-06-05T06:55:00.000
| 0 | 6,241,448 |
If you are using Python 3.2, have a look at functools.lru_cache() and see if it does what you want. It won't give you a last-modified time, but it will do the cleanup of unused objects.
For older versions it might provide the pattern you want to use, but you'll have to provide the code.
| 0 | 565 | false | 0 | 1 |
What's the most elegant way of keeping track of the last time a python object is accessed?
| 6,241,583 |
4 | 5 | 0 | 0 | 6 | 1 | 0 | 0 |
I have a list of objects in python that I would regularly check and destroy some of them - those which haven't been accessed lately (i.e. no method was called).
I can maintain the last time accessed and update it in every method, but is there any more elegant way to achieve this?
| 0 |
python,class,object,tracking
|
2011-06-05T06:55:00.000
| 0 | 6,241,448 |
Keep a dict of the IMAP connections, keyed by the user ID. Write a function, that given a user ID, returns an IMAP connection. The function will look up the user ID in the dict, and if the user ID is there, get the corresponding IMAP connection, and check that it's still alive. If alive, the function returns that connection. If not alive, or if the user ID was not in the dict, it creates a new connection, adds it to the dict, and returns the new connection. No timestamps required.
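A sketch of that connection-cache pattern; the factory and liveness check are placeholders standing in for real IMAP connect/NOOP calls:

```python
connections = {}

def get_connection(user_id, factory, is_alive):
    # Return the cached connection for user_id if it is still alive,
    # otherwise create a fresh one and cache it. No timestamps needed.
    conn = connections.get(user_id)
    if conn is None or not is_alive(conn):
        conn = factory()
        connections[user_id] = conn
    return conn
```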
| 0 | 565 | false | 0 | 1 |
What's the most elegant way of keeping track of the last time a python object is accessed?
| 6,243,773 |
4 | 5 | 0 | 1 | 6 | 1 | 0.039979 | 0 |
I have a list of objects in python that I would regularly check and destroy some of them - those which haven't been accessed lately (i.e. no method was called).
I can maintain the last time accessed and update it in every method, but is there any more elegant way to achieve this?
| 0 |
python,class,object,tracking
|
2011-06-05T06:55:00.000
| 0 | 6,241,448 |
Python's highly dynamic nature lets you write proxies that wrap objects in interesting ways. In this case, you could write a proxy that replaces the methods of an object (or an entire class) with wrapper methods that update a timestamp and then delegate the call to the original method.
This is somewhat involved. A simpler mechanism is to use Python decorators to augment specific methods. This also lets you exempt some functions that don't constitute an "access" when they are called (if you need this).
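A sketch of the decorator approach; the class and method names are illustrative:

```python
import functools
import time

def touches(method):
    # Record the time of the call before delegating to the real method.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        self.last_access = time.time()
        return method(self, *args, **kwargs)
    return wrapper

class Resource:
    last_access = None  # objects never queried keep None

    @touches
    def query(self):
        return "result"
```

A sweeper can then periodically destroy objects whose last_access is older than some threshold; methods left undecorated simply don't count as an "access".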
| 0 | 565 | false | 0 | 1 |
What's the most elegant way of keeping track of the last time a python object is accessed?
| 6,241,480 |
1 | 2 | 0 | 2 | 3 | 0 | 0.197375 | 0 |
I need to use a fabfile to remotely start a program on remote boxes from time to time and get the results. Since the program takes a long while to finish, I wish to make it run in the background so I don't need to wait, so I tried os.fork() to make it work. The problem is that when I ssh to the remote box and run the program with os.fork() there, it works in the background fine, but when I try to use fabfile's run or sudo to start the program remotely, os.fork() doesn't work - the program just dies silently. So I switched to python-daemon to daemonize the program. For a great while, it worked perfectly. But now that I've started making my program read some Python shelve dicts, python-daemon no longer works. It seems that if you use python-daemon, the shelve dicts cannot be loaded correctly, and I don't know why. Besides os.fork() and python-daemon, does anyone have an idea what else I can try to solve my problem?
| 0 |
python,fork,shelve,python-daemon
|
2011-06-05T15:42:00.000
| 1 | 6,243,933 |
For those who come across this post in the future: python-daemon can still work; just be sure to open the shelve dicts within the same (daemonized) process. Previously the shelve dict was opened in the parent process, and when python-daemon spawned the child process the open handle was not carried over correctly. Once we fixed this, everything worked again.
Thanks to those who offered valuable comments on this thread!
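The key point from this answer - open the shelf inside the process that uses it, after daemonization, not in the parent - can be sketched like this (the daemonization itself is omitted, and the shelf contents are illustrative):

```python
import shelve

def run_daemon_body(shelf_path):
    # Open the shelf *inside* the daemonized process: open file handles
    # generally do not survive daemonization/forking cleanly.
    with shelve.open(shelf_path) as db:
        db["status"] = "running"
        return dict(db)
```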
| 0 | 2,313 | false | 0 | 1 |
remotely start Python program in background
| 6,258,988 |
1 | 3 | 0 | 2 | 3 | 0 | 1.2 | 0 |
I am using python struct module to create custom binary files.
The file itself has the following format:
4 bytes (integer)
1 byte (unsigned char)
4 bytes (float)
4 bytes (integer)
1 byte (unsigned char)
4 bytes (float)
.......................... (100000 such lines)
4 bytes (integer)
1 byte (unsigned char)
4 bytes (float)
Currently, I am using a 32bit machine to create these custom binary files. I am soon planning on switching to a 64bit machine.
Will I be able to read/write the same files using both 32-bit and 64-bit machines? Or should I expect compatibility issues?
(I will be using Ubuntu Linux for both)
| 0 |
python,serialization,32bit-64bit
|
2011-06-05T18:06:00.000
| 1 | 6,244,799 |
As long as your struct format string uses "standard size and alignment" (< or >) rather than "native size and alignment" (@), your files can be used cross-platform.
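For the record layout in the question (int, unsigned char, float), a little-endian standard-size format string looks like this:

```python
import struct

# '<' = little-endian with standard sizes and no padding:
# i (int) = 4 bytes, B (unsigned char) = 1 byte, f (float) = 4 bytes.
RECORD = struct.Struct("<iBf")

packed = RECORD.pack(1234, 7, 2.5)
unpacked = RECORD.unpack(packed)
```

With native mode ('@'), the sizes and the padding between fields could change between builds; with '<' (or '>'), every record is exactly 9 bytes on both 32-bit and 64-bit machines.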
| 0 | 1,348 | true | 0 | 1 |
Binary Files on 32bit / 64bit systems?
| 6,244,838 |
2 | 6 | 0 | 0 | 3 | 0 | 0 | 0 |
There's not really anything on my planned site that would require a whole lot of customization but I'm looking for something that has built in functionality for forums, comments, reviews, a blog, a database that can queried by users, and some social networking features.
I have a decent amount of experience using python so I was thinking of using Django and also learning it in the process. I realize though that this would be much more time consuming than using a CMS.
So, part of me is inclined to use a PHP based CMS like worpress or drupal. I don't have any prior experience with PHP but since all the features I'm looking for are built in, do you think this would be my fastest route to get up and running?
| 0 |
python,django,wordpress,content-management-system
|
2011-06-05T18:25:00.000
| 0 | 6,244,906 |
Drupal or Joomla are your best bet. Joomla lets you basically drop in the features you're asking for in an install-and-go manner. This is the easiest way to go.
Now if you want LOTS and LOTS of customization and don't mind getting into a little code, then Drupal will be perfect. The great thing is that the customization possibilities are almost endless! The bad thing is that Drupal has a NOTORIOUSLY crazy templating system. It's not hard to understand, but even simple things can become a real pain. Like with Joomla, though, you can avoid all this with install-and-go plugins; you'll have the choice.
I don't know too much about WordPress, but having looked at the developer API, it seems to assume volumes about what you intend to build on top of it, which makes it a lot less flexible than Drupal and Django.
Django, well, according to your question... it's everything you don't want. Also, if you have visited any of the Django CMS sites, you'll see how painful it is to get them up and running. That said, I'm personally a Django fanatic, but I'd rather you didn't run into a bad experience with it and come away with a horrible impression. So given your question, I'd say Drupal!
| 0 | 1,735 | false | 1 | 1 |
CMS or web framework a simple project
| 6,245,184 |
2 | 6 | 0 | 1 | 3 | 0 | 0.033321 | 0 |
There's not really anything on my planned site that would require a whole lot of customization but I'm looking for something that has built in functionality for forums, comments, reviews, a blog, a database that can queried by users, and some social networking features.
I have a decent amount of experience using python so I was thinking of using Django and also learning it in the process. I realize though that this would be much more time consuming than using a CMS.
So, part of me is inclined to use a PHP based CMS like worpress or drupal. I don't have any prior experience with PHP but since all the features I'm looking for are built in, do you think this would be my fastest route to get up and running?
| 0 |
python,django,wordpress,content-management-system
|
2011-06-05T18:25:00.000
| 0 | 6,244,906 |
Use a CMS; Drupal is very flexible, and you can install another CMS like Vanilla for the forum features via a plugin.
It's all you need. But if you want full control of your site, use a framework like Django and you get it all.
Remember, a CMS is the fastest way to build a site.
Sorry for my English mistakes.
| 0 | 1,735 | false | 1 | 1 |
CMS or web framework a simple project
| 6,244,936 |
1 | 2 | 0 | 0 | 2 | 0 | 1.2 | 1 |
I have a method in my script that pulls a Twitter RSS feed, parses it with feedparser, wraps it in TwiML (Twilio-flavored XML) using the twilio module, and returns the resulting response from a CherryPy method via str(). This works fine in my development environment (Kubuntu 10.10); I have had mixed results on my server (Ubuntu Server 10.10 on Linode).
For the first few months, all was well. Then, the method described above began to fail with something like:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 259: ordinal not in range(128)
But, when I run the exact same code on the same feed, with the same Python version, on the same OS, on my development box, the code executes fine. However, I should note that even when it does work properly, some characters aren't output right. For example:
’
rather than
'
To solve this anomaly, I simply rebuilt my VPS from scratch, which worked for a few more months, and then the error came back.
The server automatically installs updated Ubuntu packages, but so does my development box. I can't think of anything that could cause this. Any help is appreciated.
| 0 |
python,cherrypy,feedparser
|
2011-06-06T00:22:00.000
| 0 | 6,246,850 |
A few reboots later (for unrelated reasons) and it's working again. How odd....
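For reference, the two symptoms described in the question are easy to reproduce: the stray ’ is the classic signature of UTF-8 bytes being decoded as Windows-1252, and the exception comes from forcing a non-ASCII string through the ASCII codec:

```python
# U+2019 RIGHT SINGLE QUOTATION MARK, the character from the traceback.
s = "\u2019"

# Its UTF-8 bytes misread as Windows-1252 produce the mojibake in question.
mojibake = s.encode("utf-8").decode("cp1252")

# And encoding it as ASCII raises the UnicodeEncodeError seen on the server.
try:
    s.encode("ascii")
    failed = False
except UnicodeEncodeError:
    failed = True
```

So an environment where the default encoding or locale silently changed (e.g. after package updates) can make the same code fail on one box and pass on another.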
| 0 | 238 | true | 0 | 1 |
What could cause a UnicodeEncodeError exception to creep into a working Python environment?
| 6,279,095 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 |
I'm copying files from the Temporary Internet Files cache into a folder, in bulk, using a Python script. Using shutil to copy each full path into the current working directory (os.getcwd()), it comes up with this error:
builtins.IOError: [Errno 22] Invalid argument:
'C:\\Users\\NICK\\AppData\\(no whitespace in path; only for readability)
Local\\Microsoft\\Windows\\(no whitespace in path; only for readability)
Temporary Internet Files\\CONTENT.IE5\\04HT8Z5C\\024MS[1].png\\'
Is it because these files are hidden or something?
| 0 |
python,shutil
|
2011-06-06T21:10:00.000
| 0 | 6,258,064 |
There is a backslash at the end of your file name, so it may be treated as a directory path rather than a file.
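A quick defensive fix before calling shutil.copy is to strip the stray trailing separator; the example path below is a shortened, hypothetical stand-in for the cache path in the traceback:

```python
def clean_cache_path(path):
    # Drop a stray trailing backslash/slash that makes copy calls fail
    # with "Invalid argument" on Windows.
    return path.rstrip("\\/")

src = "C:\\cache\\024MS[1].png\\"  # hypothetical example path
cleaned = clean_cache_path(src)
```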
| 0 | 347 | true | 0 | 1 |
Copying files from temporary internet cache in python
| 6,258,122 |
2 | 2 | 0 | 6 | 2 | 1 | 1 | 0 |
I'm fairly new to Python and was hoping I could get some advice before moving forward. I have a group of integers and I want to check whether or not a given element is contained within that group as fast as possible (speed does matter here).
With Python, should I be looking at custom data structures tailored for these operations (BST, etc), python trickery like wrapping with any(), or are there any well known Python/C libraries that are standard for this sort of thing. I don't want to reinvent the wheel here, so I'm interested to hear the common way to approach this in Python.
A little more background, elements are all inserted into the group up front, and none occur thereafter, so insertion time doesn't matter. This seems to imply that maintaining a sorted group and doing something like binary search will be the best approach, but I'm sure this has already been implemented much more efficiently than I could implement and is available in a Python/C lib. Interested to hear what you guys think.
Thanks!
| 0 |
python,c,performance,comparison,binary-search
|
2011-06-07T14:20:00.000
| 0 | 6,266,641 |
As DMS says in the comment, there's a built-in set (and the immutable variant, frozenset, which is very useful when you don't need to mutate and can fit the generation of the values into a single generator expression). It's hash-based and therefore sacrifices order for amortized O(1) membership testing. It's written in C, and more time went into making it fast than you could reasonably spend on it yourself. If memory serves right, it's based on the dictionary implementation, which is probably among the fastest hash tables (for common usage) in existence.
Note that the "hash" part will be O(1) too, as integers hash to themselves. The algorithms are tailored to handling "non-random" (e.g. somewhat consecutive) hashes very well.
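Since all elements are inserted up front, the approach from this answer fits in two lines: build an immutable frozenset once from a single generator expression, then rely on its average O(1) membership tests:

```python
# Build the immutable group once, up front; lookups are O(1) on average.
group = frozenset(x * x for x in range(1000))

hit = 625 in group   # 25 * 25 is in the group
miss = 626 in group  # not a perfect square
```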
| 0 | 116 | false | 0 | 1 |
Improving Python Comparison and Existence Operations
| 6,266,843 |
2 | 2 | 0 | 6 | 2 | 1 | 1 | 0 |
I'm fairly new to Python and was hoping I could get some advice before moving forward. I have a group of integers and I want to check whether or not a given element is contained within that group as fast as possible (speed does matter here).
With Python, should I be looking at custom data structures tailored for these operations (BST, etc), python trickery like wrapping with any(), or are there any well known Python/C libraries that are standard for this sort of thing. I don't want to reinvent the wheel here, so I'm interested to hear the common way to approach this in Python.
A little more background, elements are all inserted into the group up front, and none occur thereafter, so insertion time doesn't matter. This seems to imply that maintaining a sorted group and doing something like binary search will be the best approach, but I'm sure this has already been implemented much more efficiently than I could implement and is available in a Python/C lib. Interested to hear what you guys think.
Thanks!
| 0 |
python,c,performance,comparison,binary-search
|
2011-06-07T14:20:00.000
| 0 | 6,266,641 |
The most Pythonic way would be to not store them in a sorted container, but to use a set (or the immutable variant frozenset). These are hash-based containers, so lookups are O(1). More importantly, the hashing machinery is one of the core operations in Python (used for dictionaries and attribute lookups), so it's written in C, and written to be fast.
And that's usually the case with Python. Using the standard containers will be faster than rolling your own at the Python level, so try to use them as much as possible.
If you do want to store them in a sorted list, then look at the bisect module in the standard library. It has standard functions for binary searches. (Well, not exactly: it actually returns the index of where the searched-for item would be, so you'll have to do the final comparison yourself.) And it may be implemented in C (depending on your configuration), so it'll be faster than what you write on your own.
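If you did go the sorted-list route, a membership test via bisect might look like this (a sketch; the data is made up):

```python
import bisect

sorted_group = [3, 7, 42, 1001, 99999]  # must be kept sorted

def contains(sorted_seq, x):
    """Binary search: bisect_left returns where x *would* be inserted,
    so we still do the final equality check ourselves."""
    i = bisect.bisect_left(sorted_seq, x)
    return i < len(sorted_seq) and sorted_seq[i] == x

print(contains(sorted_group, 42))   # True
print(contains(sorted_group, 8))    # False
```

For pure membership testing a set will still usually win; bisect pays off when you also need ordered traversal or range queries.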
| 0 | 116 | false | 0 | 1 |
Improving Python Comparison and Existence Operations
| 6,266,828 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
I am trying to work out a solution for detecting traceability between source code and documentation. The most important use case is that the user needs to see the a collection of source code tokens (sorted by relevance to the documentation) that can be traced back to the documentation. She is wont be bothered about the code format, but somehow needs to see an "identifier- documentation" mapping to get the idea of traceability.
I take the tokens from source code files - somehow split the concatenated identifiers (SimpleMAXAnalyzer becomes "simple max analyzer"), which then act as search terms on the documentation. Search frameworks are best for doing this specific task - drilling down documents to locate stuff using powerful information retrieval algorithms. Whoosh looked like a really great Python search framework... with a number of analyzers and filters.
Though the problem is similar to search - it differs in that the user is not physically doing any search. So am I solving the problem the right way? Given that everything is static and needs to computed only once - am I using a wrong tool(a search framework) for the job?
| 0 |
python
|
2011-06-07T14:36:00.000
| 0 | 6,266,899 |
I'm not sure, if I understand your use case. The user sees the source code and has some ways of jumping from a token to the appropriate part or a listing of the possible parts of the documentation, right?
Then a search tool seems to be the right tool for the job, although you could precompile every possible search (there is only a limited number of identifiers in the source, so you can calculate all possible references to the docs in advance).
Or are there any "canonical" parts of the documentation for every identifier? Then maybe some kind of index would be a better choice.
Maybe you could clarify your use case a bit further.
Edit: Maybe an alphabetical index of the documentation could be a step to the solution. Then you can look up the pages/chapters/sections for every token of the source, where all or most of its components are mentioned.
| 0 | 44 | false | 0 | 1 |
Design help for static content with fixed keywords search framework
| 6,271,510 |
1 | 3 | 0 | 0 | 4 | 1 | 0 | 0 |
I would like to be able to set breakpoints to every method of a C++ class in gdb.
I think the easiest way to do this is probably python, since now python has complete access to gdb. I know very little python, and with gdb on top of it, it's even harder. I am wondering if anyone knows how to write a class python code that sets breakpoints to every method of a named class in gdb.
| 0 |
c++,python,debugging,gdb
|
2011-06-07T15:06:00.000
| 0 | 6,267,308 |
You can generate (for example using python) a .gdbrc file with a line containing
'break C::foo'
for every function of your class C and then start gdb.
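A minimal sketch of that generation step in Python. The method names here are hypothetical; in practice you might collect them by parsing the output of `nm -C libfoo.so` or gdb's own symbol listing:

```python
def gdbrc_lines(method_names):
    """One gdb 'break' command per fully qualified method name."""
    return ["break %s" % name for name in method_names]

# Hypothetical method list for a class C (would normally come from nm/gdb)
methods = ["C::foo", "C::bar", "C::baz"]

with open(".gdbrc", "w") as f:
    f.write("\n".join(gdbrc_lines(methods)) + "\n")
```

Starting gdb in that directory then picks up the generated breakpoints.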
| 0 | 605 | false | 0 | 1 |
gdb python programming: how to write code that will set breakpoints to every method of a C++ class?
| 6,658,184 |
1 | 3 | 0 | 0 | 1 | 0 | 0 | 0 |
I have a set of Python tests that run on TeamCity. I am able to get the test to run, however I cannot get TeamCity to produce a test report. How can I get TeamCity to produce a report of my tests?
Thanks
| 0 |
python,unit-testing,teamcity
|
2011-06-07T18:16:00.000
| 0 | 6,269,795 |
The test reports are to be generated by the test runner, not TeamCity. TeamCity will only look at the test report generated and use it for purposes like displaying info on the tests passed etc.
| 0 | 7,336 | false | 0 | 1 |
Python Integration Testing on TeamCity
| 6,269,838 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
Now I use lxml to parse HTML in Python, but I haven't found any API to get the font information of a text node. Is there any library to do that?
Thanks very much!
| 0 |
python,html
|
2011-06-08T02:38:00.000
| 0 | 6,273,635 |
You can't get this information from the text nodes in the HTML, because it isn't there.
| 0 | 203 | false | 0 | 1 |
How to get font of a HTML node with python?
| 6,274,197 |
1 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
How do I get Tornado (or in general another server) to handle the .py files on my host, while Apache still handles the php files?
| 0 |
python,apache,webserver,tornado
|
2011-06-08T09:39:00.000
| 1 | 6,276,805 |
So you have Apache as the web head and Tornado running behind it? Why not just use ProxyPass from port 80 to whatever port Tornado is running on?
You can't get Tornado to serve the .py files like PHP can do with .php files.
| 0 | 847 | false | 0 | 1 |
Installed Tornado and Python but Apache is still handling .py files
| 6,289,549 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
I have a CentOS 5.5 server running a local telnet daemon (which is only accessible from localhost) which prints out a list of active users on our accounting system when sent the correct commands through telnet. I need to make this information available on our intranet website which uses PHP.
I've written a Python script using the telnetlib modules to run locally on the CentOS server which simply captures the output of the telnet command and prints them to stdout. I've setup key based ssh between the web server and the CentOS server to run the python script remotely from the web server. I can execute the script successfully from the web server using ssh and it prints the list of users to stdout on the webserver. I was hoping to be able to execute this SSH command and capture the results into a variable to display on the website.
However, I can't get exec(), shell_exec(), passthru() or any other PHP exec function to display the data. I'm well aware that I may be approaching this from the totally wrong angle! What would be the best way to capture the output from ssh?
| 0 |
php,python,ssh,telnet
|
2011-06-08T17:37:00.000
| 1 | 6,282,891 |
Why don't you redirect your stdout to a file? Then use your PHP website framework to read the file and display the results.
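A sketch of the Python side of that idea. The file path and the user list are made up; the real data would come from the telnetlib script, and the PHP site would simply read the file:

```python
OUTPUT_PATH = "active_users.txt"  # hypothetical location readable by the web server

def save_report(lines, path=OUTPUT_PATH):
    """Write the captured telnet output where the PHP site can read it."""
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Stand-in for the real telnetlib output
save_report(["alice", "bob", "carol"])
```

Run via cron or the existing SSH job, this sidesteps the exec()/shell_exec() capture problem entirely.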
| 0 | 899 | false | 0 | 1 |
Using PHP/Python to access data from a remote telnet server over SSH
| 6,283,332 |
1 | 2 | 0 | 5 | 1 | 0 | 0.462117 | 0 |
I have an urgent problem because my time is running out: I let my calculations process on a server with 8 cores therefore I'm using openMP in my c++ code and it works fine. Of course I'm not the only one who is using the server, so my capacity is not always 800%CPU.
But it happened now several times that someone who started his python prog on the machine paralyzed mine and his prog completely: Although I was still using around 500%CPU the code was running approx. 100x slower - for me and the other guy. Do you have an idea what the reason could be, how to prevent it?
| 0 |
c++,python,openmp
|
2011-06-09T07:50:00.000
| 1 | 6,289,668 |
There can be a number of reasons for this, for example:
Increased failure rate in the branch prediction
Exhausted CPU cache
Filled up the memory bus
Too much context switching (this has an effect on many things, including all the previous points)
| 0 | 290 | false | 0 | 1 |
programs paralyzing each other on the server (c++ with openMP and python)
| 6,289,692 |
1 | 2 | 0 | 10 | 4 | 0 | 1 | 0 |
I have a libfoo.so library built from C++ code (compiled with gcc), and I would like to quickly test some of its exported classes (basically, instantiating a class then calling its methods to check the output).
While I could do that in C/C++ with a main file that links to the library in question and build my tests, but I think it would be much easier if it was possible to simply call Python from the command line and call the methods from there.
I know I can use CDLL from ctypes to load C-style libraries, but is there a similar functionality for C++ libraries and objects?
EDIT : Ideally, I do not want to modify the C++ code, I would need to use it as-is.
| 0 |
c++,python,shared-libraries
|
2011-06-10T15:33:00.000
| 0 | 6,308,649 |
Honestly C++ is a bit messy. You could do something like create a pure C function which wraps the C++ functionality (which you then call from python) but at that point you might as well write your tests in C++. Unfortunately the only tool out there for this (that I know of) is SWIG.
It's sad that it's called the "simplified" wrapper and interface generator, because there's nothing simple about it. If you have VERY primitive data types in your signatures (like, JUST ints or maybe char*) it'll be a pretty quick job. Otherwise you have to tell swig how to marshal your data types between languages, and that gets REALLY ugly very quickly. Furthermore, after short time you realize you have to learn the CPython API in order to write your marshalling code.
And by that point you may as well write your own CPython wrapper without involving SWIG. You suddenly realize you've spent a good month learning a new API and feel horribly frustrated. If you're going to be doing this a lot, it's definitely worth your time. However if this is a one-time thing, just write your tests in C / C++.
(I'm speaking from experience here)
| 0 | 2,846 | false | 0 | 1 |
Testing a C++ library with Python
| 6,308,733 |
1 | 3 | 0 | 0 | 1 | 0 | 0 | 0 |
I'm in the process of launching a Django app on EC2, but have hit a wall trying to install my code on my AMI instance. This is my situation: I have a Bitnami AMI up and running that has Django, Apache, PostgreSQL, and nearly all my dependencies pre-installed, and I have my fully functional Django app running on my local machine that I have been testing thus far with the Django dev server. After quite a bit of googling, the most common methods of installing an app on an EC2 instance seem to be either using ssh/sftp/scp to drop a tarball in the instance, or creating a repository and importing the code from there. If anyone can tell me the method they prefer, and guide me through the process, or provide a link to a good tutorial, it would be hugely appreciated!
| 0 |
python,django,amazon-ec2,amazon-web-services,cloud-hosting
|
2011-06-10T18:34:00.000
| 0 | 6,310,624 |
I usually simply scp -r my whole site directory into /home/bitnami of my AMI. I'm using Apache/nginx/Django with mod_wsgi, so the directory (for example /home/bitnami/djangosites/) gets referred to by my mod_wsgi path in my Apache config file.
In other words, why not just move the whole directory recursively (scp -r) instead of making a tarball etc.?
| 0 | 618 | false | 1 | 1 |
Installing a my Django app on ec2
| 6,311,394 |
1 | 5 | 0 | 4 | 1 | 0 | 1.2 | 0 |
In PHP, it was extremely easy to start hacking away and figuring out what was going on on a page. Just throw in a bunch of echos and print_r's and that was about it. It appears that this technique is not working for me in python. I am getting practice by hacking around in a python photo upload module, and when a photo is uploaded, it creates 3 different size photos. I found the code that does this, but I want to see the state at that particular moment. I tried doing a "print" on the size variable, but it did not show up in my browser.
I guess a more straightforward question would be, is it "pythonic" do debug using the browser ( equivalent to echo's and print_r's in php ), or is this what the python console is for? Thanks!
| 0 |
python,debugging
|
2011-06-10T19:43:00.000
| 0 | 6,311,322 |
Use the logging module rather than printing stuff to stdout.
Using the interpreter in interactive mode is a great way to try out code, and pdb is very useful for real debugging.
| 0 | 147 | true | 0 | 1 |
Experienced Python programmers (ESPECIALLY former php programmers) : How do I debug python?
| 6,311,351 |
1 | 1 | 0 | 3 | 3 | 1 | 1.2 | 0 |
I am in the process of building a system in python that centralizes the compilation of our code to a set of machines. I have all three programs written, running and working; however I'm still trying to weed out some of the more elusive bugs. I have been mostly testing over the localhost interface and therefore run all of the components on my machine.
Is there a way to run all the components simultaneously in one Eclipse session so that I can flip between them and terminate if needed?
I have been using multiple terminal windows, but since the code is still immature, it's not always possible to exit cleanly from the program.
| 0 |
python,eclipse,pydev
|
2011-06-10T20:11:00.000
| 0 | 6,311,577 |
Yes - just run them as normal and use the Console menu to flip between them. If you run them under the debugger, you can also use the Debug view in the Debug perspective to terminate them - in either case, using the red square icon to do the terminating.
| 0 | 2,548 | true | 0 | 1 |
Is there a way to run two or more python modules simultaneously from Eclipse(pyDev)?
| 6,317,735 |
1 | 3 | 0 | 0 | 6 | 0 | 0 | 0 |
A bit of background:
I have been developing apps for the past 2 years for Mac and iOS. I really like Objective-C and the Cocoa/Cocoa Touch frameworks. I did Java and C++ before I started programming for iOS, and now when I look at these languages I literally get a headache (mainly the syntax, but also the lack of the classes provided by the Cocoa framework). I think I have become too used to the Objective-C [] syntax and the rich Cocoa framework (things like NSDictionary, NSPredicate, NSString...)
Now:
I need to do some server side programming. I was wondering what's my best option. I certainly don't want to go with Java, but is there a language that is closely like Objective-C that I can use which has a framework like Cocoa with classes similar to NSString, NSDictionary and such...? or better yet, can I even use Objective-C itself in server side programming?
Edit: I took a look at python, and as far as syntax goes, i like it. But of course, that's just syntax, there's ALOT more to a language than just syntax...
Thanks.
| 0 |
python,objective-c,cocoa,server-side
|
2011-06-12T01:09:00.000
| 0 | 6,319,507 |
I concur: try doing it in Objective-C.
But if you are looking for a similar language that also has rich, widely used web development frameworks, take a look at Ruby.
The syntax is quite different but the object model is fairly similar and won't actually feel that far away. The framework Ruby on Rails is also a very rich one with a nice MVC approach and good documentation.
But still, objective-c would be awesome.
| 0 | 3,015 | false | 0 | 1 |
Objective-c Server Side
| 18,298,127 |
2 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 |
I'm a newbie to developing with Python and I'm piecing together the information I need to make intelligent choices in two other open questions. (This isn't a duplicate.)
I'm not developing using a framework but building a web app from scratch using the gevent library. As far as front-end web servers go, it seems I have three choices: nginx, apache, and lighttpd.
From all accounts that I've read, nginx's mod_wsgi isn't suitable.
That leaves two choices - lighttpd and Apache. Under heavy load, am I going to see major differences in performance and memory consumption characteristics? I'm under the impression Apache tends to be memory hungry even when not using prefork, but I don't know how suitable lighttpd is for Python apps.
Are there any caveats or benefits to using lighttpd over apache? I really want to hear all the information you can possibly bore me with!
| 0 |
python,apache,lighttpd
|
2011-06-12T01:31:00.000
| 0 | 6,319,575 |
That you have mentioned gevent is important. Does that mean you are specifically trying to implement a long polling application? If you are, and that functionality is the bulk of the application, then you will need to put your gevent server behind a front end web server that is implemented using async techniques rather than a processes/threading model. Lighttpd is an async server and fits that bill, whereas Apache isn't. So use of Apache isn't good as a front end proxy for a long polling application. If that is the criterion though, I would actually suggest you use nginx rather than Lighttpd.
Now if you are not doing long polling or anything else that needs high concurrency for long running requests, then you aren't necessarily going to gain too much by using gevent, especially if intention is to use a WSGI layer on top. For WSGI applications, ultimately the performance difference between different servers is minimal because your application is unlikely to be a hello world program that the benchmarks all use. The real bottlenecks are not the server but your application code, database, external callouts, lack of caching etc etc. In light of that, you should just use whatever WSGI hosting mechanism you find easier to use initially and when you properly work out what the hosting requirements are for your application, based on having an actual real application to test, then you can switch to something more appropriate if necessary.
In summary, you are just wasting your time trying to prematurely optimize by trying to find what may be the theoretically best server when in practice your application is what you should be concentrating on initially. After that, you also should be looking at application monitoring tools, because without monitoring tools how are you even going to determine if one hosting solution is better than another.
| 0 | 2,472 | false | 1 | 1 |
Apache + mod_wsgi / Lighttpd + wsgi - am I going to see differences in performance?
| 6,319,726 |
2 | 2 | 0 | 5 | 3 | 0 | 1.2 | 0 |
I'm a newbie to developing with Python and I'm piecing together the information I need to make intelligent choices in two other open questions. (This isn't a duplicate.)
I'm not developing using a framework but building a web app from scratch using the gevent library. As far as front-end web servers go, it seems I have three choices: nginx, apache, and lighttpd.
From all accounts that I've read, nginx's mod_wsgi isn't suitable.
That leaves two choices - lighttpd and Apache. Under heavy load, am I going to see major differences in performance and memory consumption characteristics? I'm under the impression Apache tends to be memory hungry even when not using prefork, but I don't know how suitable lighttp is for Python apps.
Are there any caveats or benefits to using lighttpd over apache? I really want to hear all the information you can possibly bore me with!
| 0 |
python,apache,lighttpd
|
2011-06-12T01:31:00.000
| 0 | 6,319,575 |
Apache...
Apache is by far the most widely used web server out there. Which is a good thing. There is so much more information on how to do stuff with it, and when something goes wrong there are a lot of people who know how to fix it. But, it is also the slowest out of the box, requiring a lot of tweaking and a beefier server than Lighttpd. In your case, it will be a lot easier to get off the ground using Apache and Python. There are countless AMP packages out there, and many guides on how to set up Python and make your application work. Just a quick Google search will get you on your way. Under heavy load, Lighttpd will outshine Apache, but Apache is like a train. It just keeps chugging along.
Pros
Wide User Base
Universal support
A lot of plugins
Cons
Slow out of the box
Requires performance tweaking
Memory whore (No way you could get it working on a 64MB VPS)
Lighttpd...
Lighttpd is the new kid on the block. It is fast, powerful, and kicks ass performance-wise (not to mention it uses like no memory). Out of the box, Lighttpd wipes the floor with Apache. But, not as many people know Lighttpd, so getting it to work is harder. Yes, it is the second most used web server, but it does not have as much community support behind it. If you look here, on Stack Overflow, there is this dude who keeps asking about how to get his Python app working but nobody has helped him. Under heavy load, if configured correctly, Lighttpd will outperform Apache (I did some tests a while back, and you might see a 200-300% performance increase in requests per second).
Pros
Fast out of the box
Uses very little memory
Cons
Not as much support as Apache
Sometimes just does not work
Nginx
If you were running a static website, then you would use nginx. you are correct in saying nginx's mod_wsgi isn't suitable.
Conclusion
Benefits? There are both web servers; designed to be able to replace one another. If both web servers are tuned correctly and you have ample hardware, then there is no real benefit of using one over another. You should try and see which web server meets your need, but asking me; I would say go with Lighttpd. It is, in my opinion, easier to configure and just works.
Also, you should look at the Cherokee web server. Mad easy to set up, and the performance ain't half bad. And you should ask this on Server Fault as well.
| 0 | 2,472 | true | 1 | 1 |
Apache + mod_wsgi / Lighttpd + wsgi - am I going to see differences in performance?
| 6,319,667 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 |
We all know that working with S3 is a pain: deleting virtual directories requires to delete all the objects from within the path, etc. At least with RESTful API this is the case.
I was wondering whether there would be any performance improvement if I would use PHP to call GSUtil rather than using my own PHP class. Is there anything special the way GSUtil handles requests or is it the same REST wrapper?
The main issues I am having:
deleting big folders
uploading many small files
reading hierarchical data steps (e.g. only files and folders under /foo path, but not their children-children)
| 0 |
php,python,rest,google-cloud-storage
|
2011-06-13T07:18:00.000
| 0 | 6,327,592 |
Fundamentally, your PHP code and gsutil are both using the RESTful interface (gsutil is actually layered atop an open source Python library called boto which implements the bulk of the REST interface), however, there are several reasons to consider using gsutil:
Gsutil takes care of OAuth 2.0 authentication/authorization for you.
Gsutil does wildcard expansion, which, for example would enable you to remove all objects in a bucket by specifying, simply, 'gsutil rm gs://bucket/*'
Gsutil has lots of other features (get/set ACLs and associated XML parse/build, listing bucket contents, dump object contents, etc.) which you would have to implement yourself (or find in some other PHP library) if you bypass gsutil.
Gsutil has some nice performance capabilities for your "uploading many small files" use case. In particular, the -m option runs your uploads in parallel processes and threads, which provides a substantial performance boost.
In summary, you can roll your own PHP code but I think you'll get your job done faster and have access to more functionality if you leverage gsutil.
| 0 | 766 | true | 1 | 1 |
GSUtil vs PHP RESTful class
| 9,041,894 |
1 | 4 | 0 | 0 | 17 | 0 | 0 | 0 |
I need to call "/usr/bin/pdf2txt.py" with few arguments from my Perl script. How should i do this ?
| 0 |
python,perl
|
2011-06-14T07:37:00.000
| 1 | 6,340,479 |
If you want to see the output in "real time", and not only when the script has finished running, add -u after python. Example:
my $ret = system("python -u pdf2txt.py arg1 arg2");
| 0 | 31,279 | false | 0 | 1 |
How to call a python script from Perl?
| 70,644,949 |
3 | 4 | 0 | 1 | 4 | 1 | 0.049958 | 0 |
I have a code written in C# I would like to use as the back-end of a site I'm building.
I would prefer not to build the site front-end in ASP.NET (which integrates nicely with C#), and to use PHP or Python instead.
Is that reasonable? Should I re-consider using ASP.NET?
How can I achieve that?
| 0 |
c#,php,asp.net,python,web-services
|
2011-06-14T08:28:00.000
| 0 | 6,340,972 |
You can do whatever you like. Personally I wouldn't use PHP because I don't know very much PHP.
But you can do it: you could expose a SOAP web service, and there are libraries that will let PHP talk to it.
No one here will be able to tell you what you haven't already told us. ASP.NET will probably be easier because of how everything integrates and you can share classes etc. - but that does not mean you HAVE to use it.
Both of them are fairly passive server-side technologies that present HTML to browsers, though. Why do you need two servers?
You have to ask why you are doing it... If you are playing and want to learn, then of course you can do it just to see how it all works. But if you are on a commercial project, then I'd suggest that you don't need both a PHP and a C# server... Or if you do, perhaps you want to go ASP.NET for your web server, and if you need another layer of services behind it, then use WCF if you want to go the Microsoft route. However, it is usually possible to host all services in the same IIS instance.
| 0 | 3,827 | false | 1 | 1 |
Connecting C# Back-end with PHP frontend
| 6,341,033 |
3 | 4 | 0 | 0 | 4 | 1 | 0 | 0 |
I have a code written in C# I would like to use as the back-end of a site I'm building.
I would prefer not to build the site front-end in ASP.NET (which integrates nicely with C#), and to use PHP or Python instead.
Is that reasonable? Should I re-consider using ASP.NET?
How can I achieve that?
| 0 |
c#,php,asp.net,python,web-services
|
2011-06-14T08:28:00.000
| 0 | 6,340,972 |
Ugh, in normal instances, reading data with C#, writing it to files, and loading it up with PHP sounds slow, inefficient, and downright crazy. I believe these terms are being used wrongly.
Client Server - user machine - database great for private networks where you connect to the DB without going over the internet
vs n-Tier
Client - Browser programming html, css, javascript connects to middleware over the internet
Middleware - inside your firewall, connects browser to database could be called part of backend - php and C# are middleware languages
Database final (generally 3rd) tier
With PHP and C# you are creating multiple middleware layers.
Why, why, why would you do this for a web app? Pick one.
Now, if you have a web app with PHP and sneakerware in-house client-server apps that are controlled, i.e. shipping, accounting, that are not exposed - maybe. But you have added complexity that you would not need (generally).
Gary
| 0 | 3,827 | false | 1 | 1 |
Connecting C# Back-end with PHP frontend
| 43,901,958 |
3 | 4 | 0 | 3 | 4 | 1 | 0.148885 | 0 |
I have a code written in C# I would like to use as the back-end of a site I'm building.
I would prefer not to build the site front-end in ASP.NET (which integrates nicely with C#), and to use PHP or Python instead.
Is that reasonable? Should I re-consider using ASP.NET?
How can I achieve that?
| 0 |
c#,php,asp.net,python,web-services
|
2011-06-14T08:28:00.000
| 0 | 6,340,972 |
Just use the ASP.NET MVC framework for the frontend instead of plain ASP.NET. It's easy to learn, and if you know PHP it will be easy for you to understand ASP.NET MVC.
I don't see a reason to use a PHP frontend if you are using a C# backend. For sure you can create a service layer in C# and communicate with PHP through it, but it does not make sense to me.
| 0 | 3,827 | false | 1 | 1 |
Connecting C# Back-end with PHP frontend
| 6,341,059 |
1 | 2 | 0 | 3 | 1 | 0 | 0.291313 | 0 |
I wrote a little testing framework that uses 'nm' to inspect shared libraries and look for test functions. I then use Python's ctypes library to dynamically load the shared object and execute the test functions. Is there a way to do this with an executable? When I tried the same trick on an executable module Python reported that it could not dynamically load an executable.
| 0 |
python,c,linux,dynamic-linking
|
2011-06-14T16:22:00.000
| 0 | 6,346,757 |
If this is your own application you could rearrange the build so your executable is only main() { real_main(); } and real_main() is in libapp.so. Then you could test libapp.so with your existing code.
If it's possible to load another executable it probably involves loading ld.so and getting it to do the work. If you run /lib/ld-linux.so (on Linux) it will print a stanza with information.
| 0 | 5,178 | false | 0 | 1 |
Can I dynamically load an executable on linux?
| 6,346,820 |
1 | 2 | 0 | 9 | 11 | 1 | 1.2 | 0 |
I am trying to import a module from inside a function and have it be available to my whole file the same way it would be if I imported outside any functions and before all the other code. The reason it is in a function is because I don't have much control over the structure of the script. Is this possible without resorting to things like hacking __builtin__ or passing what I need all around my code?
| 0 |
python,import,global
|
2011-06-14T17:30:00.000
| 0 | 6,347,588 |
How about something like globals()["os"] = __import__("os")?
I guess this could be wrapped in a generic function if you wanted since the module name is a string.
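A sketch of that wrapped in a function. Once called from anywhere in the file, the module becomes visible at module scope, just like a top-level import (note that globals() here refers to the module where the helper is defined):

```python
def import_globally(name):
    """Import a module from inside a function and expose it at module scope."""
    globals()[name] = __import__(name)

def setup():
    # Runs inside a function, but 'os' becomes usable module-wide.
    import_globally("os")

setup()
print(os.getcwd())  # works: os was injected into the module's globals
```

This keeps the import logic inside the function you control while making the module available to the rest of the file.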
| 0 | 5,294 | true | 0 | 1 |
Is it possible to import to the global scope from inside a function (Python)?
| 6,347,667 |
3 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 0 |
I want to know if it is possible not to hard-code my username and password in a script that copies a file in the operating system and sends it via SMTP [Gmail].
Thank you!
| 0 |
python,passwords,smtp,hardcoded
|
2011-06-14T19:31:00.000
| 0 | 6,348,988 |
Store your data as environment variables or inside a configuration file...
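For example, reading the credentials from the environment. The variable names SMTP_USER and SMTP_PASS are made up for this sketch; pick your own and set them in the shell or crontab rather than in the source:

```python
import os

def get_credentials():
    """Fetch SMTP credentials from the environment instead of the source."""
    try:
        return os.environ["SMTP_USER"], os.environ["SMTP_PASS"]
    except KeyError as missing:
        raise RuntimeError("missing environment variable: %s" % missing)

# Demo only: set the variables the way your shell or crontab would.
os.environ["SMTP_USER"] = "me@example.com"
os.environ["SMTP_PASS"] = "s3cret"
print(get_credentials())
```

The same get_credentials() call then works unchanged on any machine where the variables are exported.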
| 0 | 1,677 | false | 0 | 1 |
There is a way not to hard code username & pass when sending mail [python]
| 6,349,049 |
3 | 3 | 0 | 1 | 2 | 0 | 1.2 | 0 |
I want to know if it is possible not to hard-code my username and password in a script that copies a file in the operating system and sends it via SMTP [Gmail].
Thank you!
| 0 |
python,passwords,smtp,hardcoded
|
2011-06-14T19:31:00.000
| 0 | 6,348,988 |
I bet the best way will be to keep the password encrypted; the algorithm is your choice. Then when the user gives you credentials, you check whether they match the stored encrypted data, and then send it. This takes away the need to keep a plaintext password in any file/database. Anyway, you didn't say whether you really need to keep the password in plain text (but it seems like the case because of remote use); if so, then you should use 2-way encryption to avoid plaintext passwords - it can be broken easily, but it still needs one more step than just reading the config file.
| 0 | 1,677 | true | 0 | 1 |
There is a way not to hard code username & pass when sending mail [python]
| 6,349,407 |
3 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 0 |
I want to know if it is possible not to hard-code my username and password in a script that copies a file in the operating system and sends it via SMTP [Gmail].
Thank you!
| 0 |
python,passwords,smtp,hardcoded
|
2011-06-14T19:31:00.000
| 0 | 6,348,988 |
Make the script take the username and password as command-line arguments.
If you import sys, then sys.argv is the list of all command-line arguments, where the first is the name of the Python script itself.
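A minimal sketch of that (the script name and argument order are assumptions for the example):

```python
import sys

def parse_credentials(argv):
    """argv[0] is the script name; username and password follow."""
    if len(argv) < 3:
        raise SystemExit("usage: send_mail.py USERNAME PASSWORD")
    return argv[1], argv[2]

# In the real script you would pass sys.argv; here we simulate a call.
user, password = parse_credentials(["send_mail.py", "me@example.com", "s3cret"])
```

Note that arguments are visible in the process list (ps), so for a shared machine the environment-variable or config-file approach is safer for the password.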
| 0 | 1,677 | false | 0 | 1 |
There is a way not to hard code username & pass when sending mail [python]
| 6,349,055 |
1 | 4 | 0 | 0 | 5 | 0 | 0 | 0 |
What template engines / template languages are turing complete? I heard about these so far:
FreeMarker (implemented in java)
MovableTypes template language (in perl)
xslt :-(
Cheetah (in Python)
Smarty (in PHP)
Any others (especially implemented with perl)?
PS: Don't waste time explaining MVC to me, why Turing-complete templates are bad, or why this is not a useful comparison point :)
| 0 |
python,ruby,perl,templates,programming-languages
|
2011-06-15T00:38:00.000
| 0 | 6,351,765 |
Virtually anything that allows procedural code to compute the template result.
| 0 | 630 | false | 1 | 1 |
Turing complete template engines
| 6,351,798 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
I wish to convert a bash script that's currently using "curl -b 'cookie'" into a Python script. I've looked at PycURL, but I couldn't find a -b equivalent. There are also urllib and urllib2, but I couldn't see an easy way to replicate the line.
Any help would be great.
| 0 |
python
|
2011-06-15T03:15:00.000
| 1 | 6,352,644 |
You can use CookieJar.add_cookie_header to add your cookie to an HTTP request header.
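For example, a Python 3 sketch using http.cookiejar (the successor of the cookielib/urllib2 modules); the cookie name, value, and domain here are made up:

```python
import urllib.request
from http.cookiejar import Cookie, CookieJar

def make_cookie(name, value, domain):
    """Build a plain version-0 cookie by hand, like curl -b 'name=value'."""
    return Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path="/", path_specified=True,
        secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={}, rfc2109=False,
    )

jar = CookieJar()
jar.set_cookie(make_cookie("session", "abc123", "example.com"))

# add_cookie_header() writes the Cookie: header onto the request for us.
req = urllib.request.Request("http://example.com/")
jar.add_cookie_header(req)
print(req.get_header("Cookie"))
```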
| 0 | 262 | false | 0 | 1 |
Python equivalent for curl -b (--cookie)
| 6,353,057 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 |
I have a Python function registered as a View in Plone. I need to be able to call another function from within this registered function. I'm not sure if it would be best to register this other function as a view as well and try to call that (don't know how to call other views), or if there is a better way to handle this.
Basically I'm creating a function in Python that needs to be callable from other Python functions (that are registered as Views).
Edit -
I have tried calling it like any other function:
(pytest.py)
def Test(self):
return "TEST"
And in my Python script registered as a view:
import pytest
def PageFunction(self):
return pytest.Test()
However, this always seems to crash. If I leave the pytest.Test() out and return a simple string, it seems to work fine (so I don't think the import pytest line is causing any problems...)
| 0 |
python,views,plone
|
2011-06-15T14:44:00.000
| 0 | 6,359,581 |
Just import it and call it as any other function. You don't want to make it a view - that requires you to do a MultiAdapter lookup which is a real pain, and completely unnecessary.
[Edit - strictly using a view is a MultiAdapter lookup, but you can shortcut it via traversal, but that still isn't worth the effort]
| 0 | 327 | true | 1 | 1 |
Python Plone views call others
| 6,360,580 |
1 | 2 | 0 | 6 | 12 | 1 | 1 | 0 |
I've been wondering lately how various operations I perform on basic types like strings and integers work in terms of performance, and I figure I could get a much better idea of this if I knew how those basic types were implemented (i.e. I've heard strings and integers are immutable in Python. Does that mean any operation that modifies one character in a string is O(n) because a completely new string has to be created? How about adding numbers?)
I'm curious about this in both Python and Perl, and felt silly asking basically the same question twice, so I'm just wrapping it into one.
If you can include some example operation costs with your answer, that would make it even more helpful.
| 0 |
python,perl
|
2011-06-15T21:10:00.000
| 0 | 6,364,430 |
Perl strings definitely are not immutable. Each string has a buffer, the initial offset of the string in the buffer, the length of buffer, and the amount of the buffer used. Additionally, for utf8 strings, the character length is cached when it needs to be calculated. At one point, there was some caching of additional character offset to byte offset information too, but I'm not certain that's still in place.
If the buffer needs to be increased, it reallocs it. Perl on many platforms knows the granularity of the system malloc, so it can allocate a, say, 14 byte buffer for a 11 byte string, knowing that that won't actually take any additional memory.
The initial offset allows O(1) removal of data from the beginning of the string.
| 0 | 1,412 | false | 0 | 1 |
How are basic data types (strings and integers) implemented in Python and Perl
| 6,368,606 |
2 | 2 | 0 | 2 | 1 | 0 | 1.2 | 1 |
I have a web page that uses a Python cgi script to store requested information for later retrieval by me. As an example, the web page has a text box that asks "What is your name?" When the user inputs his name and hits the submit button, the web page calls the Python cgi script which writes the user's name to mytextfile.txt on the web site. The problem is that if anyone goes to www.mydomain.com/mytextfile.txt, they can see all of the information written to the text file. Is there a solution to this? Or am I using the wrong tool? Thanks for your time.
| 0 |
python,cgi
|
2011-06-16T11:30:00.000
| 0 | 6,371,097 |
Definitely the wrong tool. Multiple times.
Store the file outside of the document root.
Store a key to the file in the user's session.
Use a web framework.
Use WSGI.
| 0 | 104 | true | 0 | 1 |
Python CGI how to save requested information securely?
| 6,371,127 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 1 |
I have a web page that uses a Python cgi script to store requested information for later retrieval by me. As an example, the web page has a text box that asks "What is your name?" When the user inputs his name and hits the submit button, the web page calls the Python cgi script which writes the user's name to mytextfile.txt on the web site. The problem is that if anyone goes to www.mydomain.com/mytextfile.txt, they can see all of the information written to the text file. Is there a solution to this? Or am I using the wrong tool? Thanks for your time.
| 0 |
python,cgi
|
2011-06-16T11:30:00.000
| 0 | 6,371,097 |
Store it outside the document root.
| 0 | 104 | false | 0 | 1 |
Python CGI how to save requested information securely?
| 6,371,124 |
1 | 2 | 0 | 4 | 0 | 0 | 1.2 | 1 |
I want to access my LinkedIn account from the command prompt and then send mails from my account using a command.
Also, I need the delivery reports of the mails.
Does anyone know how I can do that?
| 0 |
python,command,linkedin
|
2011-06-16T14:43:00.000
| 0 | 6,373,779 |
The Member to Member API will return a 2xx status code if your message is accepted by LinkedIn. And a 4xx status code if there's an error.
This means the message was put into the LinkedIn system, not that it has been opened, read, emailed, etc. You cannot get that via the API.
| 0 | 1,130 | true | 0 | 1 |
How to access linkedin from python command
| 6,393,078 |
2 | 3 | 0 | 2 | 9 | 1 | 0.132549 | 0 |
I have found most Python modules in the Python source directory, under Python/Lib or Python/Modules, but where is the sys (import sys) module? I didn't find it.
| 0 |
python,module
|
2011-06-20T10:39:00.000
| 0 | 6,409,935 |
It's in Python/Python/sysmodule.c.
| 0 | 18,447 | false | 0 | 1 |
where is the sys module in python source code?
| 6,409,965 |
2 | 3 | 0 | 11 | 9 | 1 | 1.2 | 0 |
I have found most Python modules in the Python source directory, under Python/Lib or Python/Modules, but where is the sys (import sys) module? I didn't find it.
| 0 |
python,module
|
2011-06-20T10:39:00.000
| 0 | 6,409,935 |
The Answer
I found it here: ./Python/sysmodule.c
If you're on Linux or Mac OS X, and in doubt, just try find . -name 'sysmodule.c' in the Python directory.
Other Stuff
The way I found it was by searching for the string "platform" throughout the Python directory (using TextMate), as I've used e.g. sys.platform before from the sys module... something similar can be done with grep and xargs.
Another suggestion could be: for i in ./**/*.c ; do grep -H platform $i ; done
This will loop through all *.c files present anywhere below the directory you're currently in, and search each file for "platform". The -H flag ensures we get a filename path so we can trace the matches back to the files they are found in.
| 0 | 18,447 | true | 0 | 1 |
where is the sys module in python source code?
| 6,409,975 |
1 | 1 | 0 | 4 | 3 | 1 | 0.664037 | 0 |
I have a background python script that gets ran several thousand times a day. I'm simply running it with python foo.py. The script itself does some imports (a parsing library and sqlalchemy) and then makes a database connection, makes the parsing and saves the data to db.
I'm wondering if it adds a lot of overhead to load the python environment each time the script is run?
I could make it so that the script is started once and has a polling loop to see if it should do something, but I want to confirm that it's worth doing this.
Any input?
| 0 |
python,environment,overhead
|
2011-06-21T21:46:00.000
| 0 | 6,432,377 |
Of course it adds a lot of overhead, and it would be (however negligibly) more eco-friendly to use a built-in poll or select(); but then you'd have to have a watchdog to see if it crashed, or use respawn from inittab. As long as the server load is fine, it might not be worth the effort.
Forgot to mention: memory leaks that would be unnoticeable in a cron job can become server-eating monsters when your script runs as a daemon. You'd want to watch it carefully for the first hour or two, to see if it's growing.
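One way to put a number on the startup overhead is to time a cold interpreter start plus imports; sqlite3 and json stand in here for sqlalchemy and the parsing library:

```python
import subprocess
import sys
import time

# Time a fresh interpreter doing nothing but a couple of imports;
# this is roughly the fixed cost each cron-style run pays.
start = time.time()
subprocess.check_call([sys.executable, "-c", "import sqlite3, json"])
cold_start = time.time() - start
print("one cold start: %.3f s" % cold_start)
```

Multiplied by several thousand runs a day, this number tells you whether the daemon-with-polling-loop design is worth the watchdog hassle.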
| 0 | 112 | false | 0 | 1 |
How expensive it is to load the environment to run a Python script?
| 6,432,467 |
1 | 3 | 0 | 6 | 8 | 0 | 1.2 | 0 |
I'm looking for an IDE that will allow me to edit remote Python projects and also has decent Django support, remote command execution, and maybe remote debugging. I've tried PyCharm and Aptana with PyDev but I'm not having much luck configuring them for remote editing. Thanks for your help!
| 0 |
python,django,ide
|
2011-06-22T04:45:00.000
| 0 | 6,435,000 |
I have PyCharm set up on Ubuntu 10.10. The key is to use "sshfs" - it maps to my web host via SSH. Those are the prerequisites: SSH access, sshfs (unless you can figure out a way to map SSH to a Windows shared drive).
So once ssh and sshfs are set up, I create a Linux mount locally - so my web host's directory appears locally as "/webhostx". From then on PyCharm (or WingIDE or any editor) does not care that "/webhostx" is really a remote folder mounted locally.
If all else fails there's always Emacs (everything included :-) ).
Pycharm also has a remote debugging feature - I am in the process of testing it with my host (webfaction).
| 0 | 5,204 | true | 1 | 1 |
Python and Django IDE with remote editing?
| 6,714,141 |
4 | 4 | 0 | 2 | 3 | 0 | 0.099668 | 0 |
I'm a Java web developer that knows a bit of Python (but haven't done any Python web development), and I am curious what exactly is meant by a LAMP stack.
I understand this to be Linux-Apache-MySQL-(PHP, Perl, or Python), but I don't understand what unites these three languages other than the letter P.
Is a LAMP stack fundamentally different if Ruby was used? Using Ruby would typically mean using Rails, but Python web apps usually use Django or Pylons. Or does LAMP signify that no framework is used? Is Java web development essentially different because of Tomcat in place of Apache?
| 0 |
java,python,ruby,lamp
|
2011-06-22T20:50:00.000
| 0 | 6,446,385 |
I think you're trying to read too much into what it means. The acronym became popular because they were often used together and it was easy to pronounce. It doesn't have any meaning or implication beyond the literal one. There's also WAMP (Windows), LAPP (PostgreSql) and whatever else you want to make up.
| 0 | 6,337 | false | 1 | 1 |
What is the significance of the 'P' in LAMP? Why is it PHP, Perl, or Python?
| 6,446,503 |
4 | 4 | 0 | 1 | 3 | 0 | 0.049958 | 0 |
I'm a Java web developer that knows a bit of Python (but haven't done any Python web development), and I am curious what exactly is meant by a LAMP stack.
I understand this to be Linux-Apache-MySQL-(PHP, Perl, or Python), but I don't understand what unites these three languages other than the letter P.
Is a LAMP stack fundamentally different if Ruby was used? Using Ruby would typically mean using Rails, but Python web apps usually use Django or Pylons. Or does LAMP signify that no framework is used? Is Java web development essentially different because of Tomcat in place of Apache?
| 0 |
java,python,ruby,lamp
|
2011-06-22T20:50:00.000
| 0 | 6,446,385 |
Besides being popular Web development languages, Perl, PHP, and Python share something else: They are all dynamically typed languages, and notoriously fast to develop in. I believe this is part of the "spirit" of LAMP.
So, while it's true you could substitute any other language in for the 'P', some languages fit the dynamic, agile spirit better than others. Ruby, for example, would fit very nicely. You could also use Scheme, if that's what you're good at. Java doesn't fit as nicely into LAMP because it is a statically typed language, and to many feels subjectively "heavier" than the so-called scripting languages.
| 0 | 6,337 | false | 1 | 1 |
What is the significance of the 'P' in LAMP? Why is it PHP, Perl, or Python?
| 6,447,007 |
4 | 4 | 0 | 8 | 3 | 0 | 1.2 | 0 |
I'm a Java web developer that knows a bit of Python (but haven't done any Python web development), and I am curious what exactly is meant by a LAMP stack.
I understand this to be Linux-Apache-MySQL-(PHP, Perl, or Python), but I don't understand what unites these three languages other than the letter P.
Is a LAMP stack fundamentally different if Ruby was used? Using Ruby would typically mean using Rails, but Python web apps usually use Django or Pylons. Or does LAMP signify that no framework is used? Is Java web development essentially different because of Tomcat in place of Apache?
| 0 |
java,python,ruby,lamp
|
2011-06-22T20:50:00.000
| 0 | 6,446,385 |
It just so happens that the most commonly used components in that part of the stack all happened to begin with a P. It's nothing more than a coincidence. The LAMP acronym was coined before Ruby gained its current popularity levels, and there's no reason why you couldn't stick Ruby in the P slot.
| 0 | 6,337 | true | 1 | 1 |
What is the significance of the 'P' in LAMP? Why is it PHP, Perl, or Python?
| 6,446,413 |
4 | 4 | 0 | 7 | 3 | 0 | 1 | 0 |
I'm a Java web developer that knows a bit of Python (but haven't done any Python web development), and I am curious what exactly is meant by a LAMP stack.
I understand this to be Linux-Apache-MySQL-(PHP, Perl, or Python), but I don't understand what unites these three languages other than the letter P.
Is a LAMP stack fundamentally different if Ruby was used? Using Ruby would typically mean using Rails, but Python web apps usually use Django or Pylons. Or does LAMP signify that no framework is used? Is Java web development essentially different because of Tomcat in place of Apache?
| 0 |
java,python,ruby,lamp
|
2011-06-22T20:50:00.000
| 0 | 6,446,385 |
I believe the P originally stood mainly for PHP, as that particular combination was extremely widely used. It got expanded to include Python and Perl as non-PHP languages became more popular for web development, and never expanded further because it would have broken the acronym.
LAMP is a de facto standard way of doing things, but not a formal standard. Changing out the P for Ruby+Rails, or Apache/PHP for Tomcat/Java changes some things about your development process, but not other things.
One aspect of LAMP that's significant is that all the components are open-source.
| 0 | 6,337 | false | 1 | 1 |
What is the significance of the 'P' in LAMP? Why is it PHP, Perl, or Python?
| 6,446,566 |
6 | 6 | 0 | 1 | 7 | 1 | 0.033321 | 0 |
OK, so we are developing a network-related application where the user can upload their own Python scripts to decide on an algorithm. Our code contains C, Cython, and Python modules.
Since minimal latency, memory footprint, and processing are critical for us, we were wondering whether it is wise and effective (performance-wise) to turn off garbage collection and handle memory deallocation ourselves.
| 0 |
python,memory-management,memory-leaks,garbage-collection,cython
|
2011-06-23T02:35:00.000
| 0 | 6,448,742 |
CPython (the original and most used Python) uses a ref counting approach for garbage collection: objects which are no longer referenced are immediately freed. Therefore if you don't create any cycles then the garbage collector, which only exists to detect cycles, shouldn't be getting invoked much.
| 0 | 2,303 | false | 0 | 1 |
Does garbage collection make python slower?
| 6,448,987 |
6 | 6 | 0 | 18 | 7 | 1 | 1 | 0 |
OK, so we are developing a network-related application where the user can upload their own Python scripts to decide on an algorithm. Our code contains C, Cython, and Python modules.
Since minimal latency, memory footprint, and processing are critical for us, we were wondering whether it is wise and effective (performance-wise) to turn off garbage collection and handle memory deallocation ourselves.
| 0 |
python,memory-management,memory-leaks,garbage-collection,cython
|
2011-06-23T02:35:00.000
| 0 | 6,448,742 |
Just let the language do what it wants to do, and if you find you have an actual problem, come on back and post about it. Otherwise it's premature optimization.
| 0 | 2,303 | false | 0 | 1 |
Does garbage collection make python slower?
| 6,448,754 |
6 | 6 | 0 | 11 | 7 | 1 | 1.2 | 0 |
OK, so we are developing a network-related application where the user can upload their own Python scripts to decide on an algorithm. Our code contains C, Cython, and Python modules.
Since minimal latency, memory footprint, and processing are critical for us, we were wondering whether it is wise and effective (performance-wise) to turn off garbage collection and handle memory deallocation ourselves.
| 0 |
python,memory-management,memory-leaks,garbage-collection,cython
|
2011-06-23T02:35:00.000
| 0 | 6,448,742 |
gc.disable only turns off the cyclic garbage collector. Objects will still be collected when the refcount drops to zero anyway. So unless you have a lot of cyclic references, it will make no difference.
Are you talking about doing a customised Python build and disabling the ref-counting GC?
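This is easy to demonstrate on CPython: with gc disabled, refcounted objects still die immediately, while reference cycles linger until the cyclic collector runs. (Tracked is a throwaway class invented here just to count live objects; the cycle-with-__del__ part assumes Python 3.4+, where such cycles are collectable.)

```python
import gc

class Tracked:
    live = 0                      # number of instances currently alive
    def __init__(self):
        Tracked.live += 1
    def __del__(self):
        Tracked.live -= 1

gc.disable()                      # switches off only the cyclic collector

t = Tracked()
del t                             # refcount hits zero -> freed immediately
after_plain_del = Tracked.live

a, b = Tracked(), Tracked()
a.other, b.other = b, a           # create a reference cycle
del a, b
after_cycle_del = Tracked.live    # the cycle keeps both objects alive

gc.collect()                      # a manual run of the cyclic collector
after_collect = Tracked.live
gc.enable()
```

So unless user scripts create lots of cycles, gc.disable() changes little; it never turns off refcount-based deallocation.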
| 0 | 2,303 | true | 0 | 1 |
Does garbage collection make python slower?
| 6,448,994 |
6 | 6 | 0 | 5 | 7 | 1 | 0.16514 | 0 |
OK, so we are developing a network-related application where the user can upload their own Python scripts to decide on an algorithm. Our code contains C, Cython, and Python modules.
Since minimal latency, memory footprint, and processing are critical for us, we were wondering whether it is wise and effective (performance-wise) to turn off garbage collection and handle memory deallocation ourselves.
| 0 |
python,memory-management,memory-leaks,garbage-collection,cython
|
2011-06-23T02:35:00.000
| 0 | 6,448,742 |
Just develop the application so that it is functionally correct. Once you've got a correct application fire up the profiler and determine where the slow bits are. If you're worried about user script performance, you're probably focussing on the wrong thing.
A shorter answer is that it is probably unwise and ineffective to try to optimize before initial development is complete.
| 0 | 2,303 | false | 0 | 1 |
Does garbage collection make python slower?
| 6,448,778 |
6 | 6 | 0 | 4 | 7 | 1 | 0.132549 | 0 |
OK, so we are developing a network-related application where the user can upload their own Python scripts to decide on an algorithm. Our code contains C, Cython, and Python modules.
Since minimal latency, memory footprint, and processing are critical for us, we were wondering whether it is wise and effective (performance-wise) to turn off garbage collection and handle memory deallocation ourselves.
| 0 |
python,memory-management,memory-leaks,garbage-collection,cython
|
2011-06-23T02:35:00.000
| 0 | 6,448,742 |
Garbage collection makes everything slower. It also makes everything much less error-prone. Especially if the point is to run user-uploaded scripts, I have a hard time believing the trade-off will work out well; if you have any leaks or double frees, you now have a DoS vulnerability if someone can figure out how to trigger it.
| 0 | 2,303 | false | 0 | 1 |
Does garbage collection make python slower?
| 6,448,786 |
6 | 6 | 0 | 2 | 7 | 1 | 0.066568 | 0 |
OK, so we are developing a network-related application where the user can upload their own Python scripts to decide on an algorithm. Our code contains C, Cython, and Python modules.
Since minimal latency, memory footprint, and processing are critical for us, we were wondering whether it is wise and effective (performance-wise) to turn off garbage collection and handle memory deallocation ourselves.
| 0 |
python,memory-management,memory-leaks,garbage-collection,cython
|
2011-06-23T02:35:00.000
| 0 | 6,448,742 |
I've spent quite a bit of time working in languages with automatic garbage collection and I can say almost unilaterally that trusting the native garbage collection will be faster and more reliable than a custom solution.
| 0 | 2,303 | false | 0 | 1 |
Does garbage collection make python slower?
| 6,449,666 |
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 |
I'm running fedora 32 bit on one machine and have installed several eggs using easy_install.
I've installed the same eggs using easy_install on a 64-bit centos 5 machine. The site-packages directories are different - on my fedora machine, some of the eggs have been inflated so there are directories ending .egg-info as well as the main code directories. On Centos there are no .egg-info directories. Why is this?
Thanks
| 0 |
python,egg
|
2011-06-26T20:39:00.000
| 1 | 6,486,598 |
A package can specify, via the zip_safe flag inside its setup.py file, whether it should be unarchived or not. In addition, most installers like easy_install provide options to control the unpacking explicitly (e.g. easy_install --zip-ok ...), so it may depend on how the packages are installed under Fedora or CentOS.
| 0 | 133 | true | 0 | 1 |
Python eggs - sometimes inflated, sometimes not
| 6,488,633 |
1 | 1 | 0 | 2 | 1 | 0 | 0.379949 | 0 |
I need to write a cgi page which will act like a reverse proxy between the user and another page (mbean). The issue is that each mbean uses different port and I do not know ahead of time which port user will want to hit.
Therefore want I need to do is following:
A) Give user a page which will allow him to choose which application he wants to hit
B) spawn a reverse proxy based on the information above (which gives me port, server, etc.)
C) the user connects to the remote mbean page via the reverse proxy and therefore never "leaves" the original page.
The reason for C is that user does not have direct access to any of the internal apps only has access to initial port 80.
I looked at Twisted and it appears to me that it can do the job. What I don't know is how to spawn a Twisted process from within CGI so that it can establish the connection and keep further connections within the reverse proxy framework.
BTW I am not married to twisted, if there is another tool that would do the job better, I am all ears. I can't do things like mod_proxy (for instance) since the wide range of ports would make configuration rather silly (at around 1000 different proxy settings).
| 0 |
python
|
2011-06-27T05:10:00.000
| 1 | 6,488,806 |
You don't need to spawn another process; that would complicate things a lot. Here's how I would do it, based on something similar in my current project:
Create a WSGI application, which can live behind a web server.
Create a request handler (or "view") that is accessible from any URL mapping as long as the user doesn't have a session ID cookie.
In the request handler, the user can choose the target application and with it, the hostname, port number, etc. This request handler creates a connection to the target application, for example using httplib and assigns a session ID to it. It sets the session ID cookie and redirects the user back to the same page.
Now when your user hits the application, you can use the already open http connection to redirect the query. Note that WSGI supports passing back an open file-like object as response, including those provided by httplib, for increased performance.
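A rough sketch of that shape, written against Python 3's http.client (httplib's successor). The cookie name proxy_session and the SESSIONS registry are inventions for illustration; the chooser form, connection setup, and error handling are omitted:

```python
import http.client
from http.cookies import SimpleCookie

# Hypothetical registry mapping session IDs to already-open backend
# connections (http.client.HTTPConnection objects).
SESSIONS = {}

def get_session_id(environ):
    """Extract our session cookie from the WSGI environ, if present."""
    jar = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    morsel = jar.get("proxy_session")
    return morsel.value if morsel else None

def application(environ, start_response):
    sid = get_session_id(environ)
    if sid is None or sid not in SESSIONS:
        # No session yet: serve the page where the user picks a target app.
        start_response("200 OK", [("Content-Type", "text/html")])
        return [b"<form><!-- choose target app here --></form>"]
    # Reuse the open connection to relay the request to the chosen backend.
    conn = SESSIONS[sid]
    conn.request("GET", environ.get("PATH_INFO", "/"))
    resp = conn.getresponse()
    start_response("%d %s" % (resp.status, resp.reason), resp.getheaders())
    return resp  # HTTPResponse is file-like, so the body can be streamed out
```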
| 0 | 488 | false | 1 | 1 |
python reverse proxy spawning via cgi
| 6,531,642 |
1 | 1 | 0 | 10 | 9 | 0 | 1.2 | 1 |
What is the recommended library for python to do everything that is Amazon EC2 related?
I came across boto and libcloud. Which one is easier to use? Does libcloud offer the same functionality as boto?
| 0 |
python,amazon-ec2
|
2011-06-28T14:04:00.000
| 0 | 6,507,708 |
The big advantage of libcloud is that it provides a unified interface to multiple providers, which is a big plus in my mind. You won't have to rewrite everything if you plan to migrate some instances to Rackspace later, or mix and match, etc. I haven't used it extensively but it looks fairly complete as far as EC2 goes. In boto's favor it has support for nearly all of Amazon's web services, so if you plan to be Amazon-centric and use other services you'll probably want to use boto.
That said, try both packages and see which you prefer.
| 0 | 933 | true | 0 | 1 |
Python & Amazon EC2 -- Recommended Library?
| 6,508,616 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 1 |
To use OAuth with python-twitter I need to register a web app, website, etc. The program I have is for personal use, though. Since basic auth is now defunct and OAuth is not an option due to the requirements, is there a workaround to log in to Twitter using a script?
I can't imagine that Twitter would alienate everyone who does not have a website/web app from logging in to Twitter from a script that is for personal use.
| 0 |
python,authentication,twitter,oauth
|
2011-06-28T18:53:00.000
| 0 | 6,511,633 |
If you want to access resources at Twitter, even if they are your own, and even if it is just for a "personal script", you have to use OAuth.
| 0 | 881 | false | 0 | 1 |
How to log in to twitter without Oauth for a script?
| 6,533,687 |
1 | 2 | 0 | 3 | 1 | 0 | 1.2 | 0 |
I've read through many of the related questions and am a bit unsure as to how to handle this situation.
The Basic Question: What is the best way to handle "foreign" (Hebrew, Greek, Aramaic?, etc.) characters in a website?
I get that I need to use UTF-8 encoding but the mechanics behind it are lost on me.
I am using tornado as my framework and am storing the data in redis.
My current implementation is to simply store the English keyboard equivalent in the data store and then render it on a page with the appropriate Hebrew/Greek font (e.g. Bwhebb.ttf). This has worked, for the most part, but I am bumping up against some characters which are being CGI-encoded, which, in turn, causes the font method to break.
| 0 |
python,redis,tornado,hebrew
|
2011-06-29T01:53:00.000
| 0 | 6,514,971 |
Read the articles given in the comments.
Short answer though, store unicode in Redis, and if you're using Python 2.x, use unicode strings (u"") throughout. You may have to convert to unicode (unicode()) after retrieval from Redis, depending on what it gives you.
| 0 | 920 | true | 0 | 1 |
Handling foreign characters in website running on python, tornado and redis
| 6,518,345 |