Column                               dtype          range
Available Count                      int64          1 .. 31
AnswerCount                          int64          1 .. 35
GUI and Desktop Applications         int64          0 .. 1
Users Score                          int64          -17 .. 588
Q_Score                              int64          0 .. 6.79k
Python Basics and Environment        int64          0 .. 1
Score                                float64        -1 .. 1.2
Networking and APIs                  int64          0 .. 1
Question                             stringlengths  15 .. 7.24k
Database and SQL                     int64          0 .. 1
Tags                                 stringlengths  6 .. 76
CreationDate                         stringlengths  23 .. 23
System Administration and DevOps     int64          0 .. 1
Q_Id                                 int64          469 .. 38.2M
Answer                               stringlengths  15 .. 7k
Data Science and Machine Learning    int64          0 .. 1
ViewCount                            int64          13 .. 1.88M
is_accepted                          bool           2 classes
Web Development                      int64          0 .. 1
Other                                int64          1 .. 1
Title                                stringlengths  15 .. 142
A_Id                                 int64          518 .. 72.2M
1
1
0
0
0
0
0
0
How would I go about setting up one github user and ssh key and then replicating that to several other laptops so they can all use the same account? It would be optimal if I could copy a configuration file so I wouldn't have to apply it one laptop at a time - I could apply it through server administration. This isn't a typical github setup so don't worry about this being the correct way to set it up.
0
python,git,github
2012-02-21T16:29:00.000
0
9,381,247
Setup the project the way github describes. Create your ssh keys. Tar or Zip everything up. Distribute and Untar/Unzip. Done.
0
189
false
0
1
Multiple laptops with same github account and SSH key
9,381,435
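The tar/untar steps from the answer can be sketched in shell (paths are the usual ssh defaults; adjust for your setup):

```shell
# On the first laptop: generate the key pair, then bundle ~/.ssh for distribution.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
tar czf ssh-setup.tar.gz -C ~ .ssh

# On every other laptop: unpack, then restore the permissions ssh insists on.
tar xzf ssh-setup.tar.gz -C ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
```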
1
1
0
2
1
0
1.2
0
Hi, is there a way to gracefully shut down the bottle server? In a way that it can run a few steps before it eventually stops. This is critical for some cleanup of threads, db state, etc., avoiding a corrupt state during the restart. I am using the mod_wsgi Apache module for running the bottle server.
0
python,mod-wsgi,bottle
2012-02-22T04:34:00.000
1
9,389,138
In mod_wsgi you can register atexit callbacks and they will be called on normal process shutdown. You don't have too long to do stuff though. If embedded mode, or daemon mode and shutdown caused by Apache restart, you have only 3 seconds as Apache will kill off processes forcibly after that. If daemon mode and trigger is due to touching WSGI script file or you explicitly sent daemon process a signal, you have 5 seconds, which is when mod_wsgi will decide it is taking too long and forcibly kill them. See the 'atexit' module in Python.
0
758
true
0
1
Graceful shutdown of bottle python server
9,389,919
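The atexit mechanism the answer describes can be sketched like this; the cleanup body is a placeholder for your real thread/db teardown:

```python
import atexit

def cleanup():
    # Placeholder for real work: join worker threads, flush buffers,
    # roll back half-finished db transactions, etc. Keep it quick --
    # under mod_wsgi you have only a few seconds before a forced kill.
    print("shutting down cleanly")

# Registered callbacks run (in reverse registration order) on normal
# interpreter shutdown, which is what mod_wsgi triggers on restart.
atexit.register(cleanup)
```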
1
2
0
0
0
0
0
0
I have Apache running on OSX Lion and MacPorts Python and some packages installed with MacPorts. There are some Python cgi scripts that I'd like to run. It looks like Apache uses the Python that is installed with Lion. How can I configure Apache so that the cgi scripts are run with the MacPorts Python and sites-packages (PYTHONPATH I guess)?
0
python,macos,apache
2012-02-23T05:03:00.000
1
9,407,472
Edit the shebang line in the CGI scripts to point to the other executable.
0
366
false
0
1
OSX and setting PATH for Apache
9,407,550
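One way to confirm which interpreter Apache actually invokes is a tiny CGI probe; the MacPorts path in the shebang below is the usual default and may differ on your install:

```python
#!/opt/local/bin/python
# Minimal CGI script: report which interpreter ran it and what it can import.
import sys

print("Content-Type: text/plain")
print()
print("interpreter:", sys.executable)
print("search path:", sys.path)
```

If the reported interpreter is still the system Python, the shebang (or Apache's CGI handler configuration) is what needs changing.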
3
5
0
1
14
1
0.039979
0
Is it possible to deploy python applications such that you don't release the source code and you don't have to be sure the customer has python installed? I'm thinking maybe there is some installation process that can run a python app from just the .pyc files and a shared library containing the interpreter or something like that? Basically I'm keen to get the development benefits of a language like Python - high productivity etc. - but can't quite see how you could deploy it professionally to a customer where you don't know how their machine is set up and you definitely can't deliver the source. How do professional software houses developing in python do it (or maybe the answer is that they don't)?
0
python,deployment
2012-02-23T21:11:00.000
0
9,421,373
Build a web application in python. Then the world can use it via a browser with zero install.
0
9,784
false
0
1
deploying python applications
9,421,431
3
5
0
19
14
1
1.2
0
Is it possible to deploy python applications such that you don't release the source code and you don't have to be sure the customer has python installed? I'm thinking maybe there is some installation process that can run a python app from just the .pyc files and a shared library containing the interpreter or something like that? Basically I'm keen to get the development benefits of a language like Python - high productivity etc. - but can't quite see how you could deploy it professionally to a customer where you don't know how their machine is set up and you definitely can't deliver the source. How do professional software houses developing in python do it (or maybe the answer is that they don't)?
0
python,deployment
2012-02-23T21:11:00.000
0
9,421,373
You protect your source code legally, not technologically. Distributing py files really isn't a big deal. The only technological solution here is not to ship your program (which is really becoming more popular these days, as software is provided over the internet rather than fully installed locally more often.) If you don't want the user to have to have Python installed but want to run Python programs, you'll have to bundle Python. Your resistance to doing so seems quite odd to me. Java programs have to either bundle or anticipate the JVM's presence. C programs have to either bundle or anticipate libc's presence (usually the latter), etc. There's nothing hacky about using what you need. Professional Python desktop software bundles Python, either through something like py2exe/cx_Freeze/some in-house thing that does the same thing or through embedding Python (in which case Python comes along as a library rather than an executable). The former approach is usually a lot more powerful and robust.
0
9,784
true
0
1
deploying python applications
9,421,511
3
5
0
8
14
1
1
0
Is it possible to deploy python applications such that you don't release the source code and you don't have to be sure the customer has python installed? I'm thinking maybe there is some installation process that can run a python app from just the .pyc files and a shared library containing the interpreter or something like that? Basically I'm keen to get the development benefits of a language like Python - high productivity etc. - but can't quite see how you could deploy it professionally to a customer where you don't know how their machine is set up and you definitely can't deliver the source. How do professional software houses developing in python do it (or maybe the answer is that they don't)?
0
python,deployment
2012-02-23T21:11:00.000
0
9,421,373
Yes, it is possible to make installation packages. Look for py2exe, cx_freeze and others. No, it is not possible to keep the source code completely safe. There are always ways to decompile. Original source code can trivially be obtained from .pyc files if someone wants to do it. Code obfuscation would make it more difficult to do something with the code.
0
9,784
false
0
1
deploying python applications
9,421,442
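The point that .pyc files offer little protection is easy to demonstrate with the standard library's dis module, which prints human-readable bytecode for any function:

```python
import dis

def secret_formula(x):
    return x * 41 + 1

# Bytecode is easy to inspect (and decompilers reconstruct source from it),
# so shipping only .pyc files is obfuscation, not protection.
dis.dis(secret_formula)
```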
1
6
0
5
98
1
0.16514
0
What is the base language Python is written in?
0
python
2012-02-26T09:23:00.000
0
9,451,929
You get a good idea if you compile Python from source. Usually it's gcc that compiles the *.c files.
0
106,795
false
0
1
Base language of Python
17,551,576
1
2
0
4
2
0
0.379949
0
I am working on building an inverted index using Python. I am having some doubts regarding the performance it can provide me. Would Python be almost as fast at indexing as Java or C? Also, I would like to know if any modules/implementations exist for the same (and what are they? Some links, please?) and how well they perform compared to something developed in Java/C. I read about a guy who optimized his Python to be twice as fast as C by using it with Psyco. I know for a fact that this is misleading, since gcc 3.x compilers are super fast. Basically, my point is that I know Python won't be faster than C. But is it somewhat comparable? And can someone shed some light on its performance compared with Java? I have no clue about that. (In terms of an inverted index implementation, if possible, because it would essentially require disk writes and reads.) I am not asking this here without googling first. I didn't get a definite answer, hence the question. Any help is much appreciated!
0
python,information-retrieval,inverted-index
2012-02-26T11:19:00.000
1
9,452,631
Worry about optimization after the fact. Write the code, profile it, stress test it, identify the slow parts and offload them to Cython or C, or rewrite the code to make it more efficient. It might also be faster if you run it on PyPy, as that has a JIT compiler, which can help with long-running processes and loops. Remember: premature optimization is the root of all evil (after threads, of course).
0
2,059
false
0
1
Inverted Index System using Python
9,452,656
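The profile-first workflow the answer recommends, sketched on a toy inverted index (the corpus here is made up):

```python
import cProfile
import io
import pstats
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for word in text.split():
            index[word].add(doc_id)
    return index

docs = ["the quick brown fox", "the lazy dog jumps"] * 5000
profiler = cProfile.Profile()
profiler.enable()
index = build_index(docs)
profiler.disable()

# Find the actual hot spots before rewriting anything in Cython or C.
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)
```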
1
1
0
0
0
1
1.2
0
Which would it make more sense to code an IRC bot in: Python 2 or 3? With 3 I heard you have to do extra stuff because it's unicode(?).
0
python,irc
2012-02-26T16:48:00.000
0
9,454,974
It shouldn't matter. Python 3 is more Unicode compatible, but that's only a good thing. The most obvious and visible thing changed in Python 3 is print. In Python 3.0 it is a function and requires parentheses.
0
355
true
0
1
Python 2 vs Python 3 for an IRC bot?
9,454,999
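A minimal illustration of the two differences mentioned, print-as-a-function and explicit bytes/str handling, using a made-up IRC protocol line:

```python
# In Python 3, data off an IRC socket arrives as bytes and must be decoded
# explicitly -- this is the "extra stuff" for Unicode, and usually a good thing.
raw_line = b":server PRIVMSG #chan :hello\r\n"
text = raw_line.decode("utf-8")

# print is a function in Python 3, so it requires parentheses.
print(text.strip())
```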
1
1
0
0
0
0
0
0
I've tried to create a twill test that changes the proxy server settings between 2 different tests. I need to trigger this change at runtime without relaunching the test script. I've tried to use the "http_proxy" environment variable by setting os.environ["HTTP_PROXY"], but it only changes the proxy setting for the first test, and does not work for the second and third tests. Could you please suggest a way to change twill's proxy settings at runtime?
0
python,mechanize,twill
2012-02-26T19:35:00.000
0
9,456,442
Set the proxy environment variable before you run the twill script. sh/ksh/bash: export HTTP_PROXY=blah:8080. csh: setenv HTTP_PROXY blah:8080. It's worth noting that this should work by setting os.environ['http_proxy'], but it might not if you set it after you import twill. Twill may be checking this once on startup. The only 100% safe way I would imagine is exporting the variable so that all further child processes will get it in their environment.
0
478
false
0
1
twill - changing the proxy server setting in runtime
9,465,091
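The "set it before the import" caveat can be seen with the standard library, which also picks proxy settings up from the environment (twill presumably does the same via mechanize; the proxy address below is hypothetical):

```python
import os

# Export the proxy before the library that reads it is imported/initialised;
# libraries that snapshot the environment at import time never see later changes.
os.environ["HTTP_PROXY"] = "http://proxyhost:8080"  # hypothetical proxy

import urllib.request
print(urllib.request.getproxies().get("http"))
```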
1
2
0
2
13
1
0.197375
0
I'm a big fan of discovering sentences that can be rapped very quickly. For example, "gotta read a little bit of Wikipedia" or "don't wanna wind up in the gutter with a bottle of malt." (George Watsky) I wanted to write a program in Python that would enable me to find words (or combinations of words) that can be articulated such that it sounds very fast when spoken. I initially thought that words that had a high syllable to letter ratio would be the best, but upon writing a Python program to do find those words, I retrieved only very simple words that didn't really sound fast (e.g. "iowa"). So I'm at a loss at what actually makes words sound fast. Is it the morpheme to letter ratio? Is it the number of alternating vowel-consonant pairs? How would you guys go about devising a python program to resolve this problem?
0
python,algorithm,word,nlp,linguistics
2012-02-27T03:38:00.000
0
9,459,745
I would say it's a good idea to start by taking the examples you gave (or other ones you like) and running some sort of analysis for all your ideas on them: e.g. phoneme to letter ratio, etc.; whatever sounds reasonable and that you can calculate. The more samples the better. Hopefully this will give you a good idea of what properties the lines and words you already enjoy share, which should lead you in the right direction. Otherwise, my layman's guess is that short vowels (obviously) and hard consonants like 't', some 'p's, hard 'g's, etc., will be best - they make the lines sound staccato and rapid-fire. (I wanted to leave this as a comment because it's not really an answer, but it's too long :)
0
1,243
false
0
1
Find words and combinations of words that can be spoken the quickest
9,466,414
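One of the measurable properties the question floats, vowel-consonant alternation per letter, can be sketched like this; treating alternation density as a proxy for "sounds fast" is an assumption to test against examples you like, not established linguistics:

```python
VOWELS = set("aeiou")

def alternations(word):
    # Count vowel<->consonant switches across the letters.
    kinds = [ch in VOWELS for ch in word.lower() if ch.isalpha()]
    return sum(a != b for a, b in zip(kinds, kinds[1:]))

def speed_score(phrase):
    # Switches per letter: a crude stand-in for articulatory speed.
    letters = sum(ch.isalpha() for ch in phrase)
    return alternations(phrase.replace(" ", "")) / max(letters - 1, 1)

print(speed_score("gotta read a little bit of wikipedia"))
print(speed_score("iowa"))
```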
1
3
0
0
0
0
0
1
Is there any easy way to initiate ssh connection with Python 3 without using popen? I would like to achieve password and password less authentication.
0
ssh,python-3.x,connection
2012-02-27T13:28:00.000
0
9,465,807
No. Paramiko does not work with Python 3.x yet
0
1,977
false
0
1
How to initiate ssh connection with Python 3
17,140,320
1
2
0
0
2
1
0
0
I have a python application in production (on CentOS 6.2 / Python 2.6.6) that takes up to 800M VIRT / 15M RES / 2M SHR. The same app run on Fedora 16 / Python 2.7.2 "only" takes up to 56M VIRT / 15M RES / 2M SHR. Is it an issue? What's the explanation for this difference? I'm wondering if it could go wrong at any time with such an amount of virtual memory.
0
python,memory
2012-02-28T09:35:00.000
1
9,479,492
What does the application do? What libraries does it use? What else is different between those machines? It's hard to give a general answer. The VIRT value indicates how much memory the process has requested from the operating system in one way or another. But Linux is lazy in this respect: that memory won't actually be allocated to the process until the process tries to do something with it. The RES value indicates how much memory is actually resident in RAM and currently in use by the process. This excludes pages that haven't yet been touched by the process or that have been swapped out to disk. Since the RES values are small and identical for both of those processes, there's probably nothing to worry about.
0
1,159
false
0
1
Python VIRT Memory Usage
9,951,397
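On Linux both numbers can be read for your own process from /proc, which makes it easy to log them from inside the application (a sketch; /proc is Linux-specific):

```python
def memory_usage():
    """Return VmSize (top's VIRT) and VmRSS (top's RES) for this process."""
    fields = {}
    with open("/proc/self/status") as status:
        for line in status:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":", 1)
                fields[key] = value.strip()
    return fields

print(memory_usage())
```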
2
5
0
1
8
1
0.039979
0
We can write a piece of python code and put it in an already compiled ".pyc" file and use it. I am wondering whether there is any kind of gain in terms of performance, or if it is just a kind of modular way of grouping the code. Thanks a lot
0
python,pyc
2012-02-28T16:36:00.000
0
9,485,905
I'm not sure about .pyc files (very minor gain is at least not creating .pyc files again), but there's a '-O' flag for the Python interpreter which produces optimised bytecode (.pyo files).
0
9,152
false
0
1
is there any kind of performance gain while using .pyc files in python?
9,485,942
2
5
0
1
8
1
0.039979
0
We can write a piece of python code and put it in an already compiled ".pyc" file and use it. I am wondering whether there is any kind of gain in terms of performance, or if it is just a kind of modular way of grouping the code. Thanks a lot
0
python,pyc
2012-02-28T16:36:00.000
0
9,485,905
Yes, but only because the first time you execute a .py file, it is compiled to a .pyc file, so basically you save that compilation time on later runs. Afterwards, the .pyc file should always be used.
0
9,152
false
0
1
is there any kind of performance gain while using .pyc files in python?
9,485,947
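The compile-once behaviour can be made explicit with the standard library's py_compile, which is all that "using .pyc files" amounts to; the bytecode is the same as what the interpreter caches on first import:

```python
import pathlib
import py_compile
import tempfile

# Write a throwaway module and compile it to bytecode ahead of time.
src = pathlib.Path(tempfile.mkdtemp()) / "hello.py"
src.write_text("print('hello')\n")
pyc_path = py_compile.compile(str(src), cfile=str(src.with_suffix(".pyc")))

# Only the parse/compile step is saved; the code itself runs no faster.
print(pyc_path)
```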
1
2
0
2
1
1
1.2
0
This question might seem vague, sorry. Does anybody have experience writing RegEx with Objective-C and Python? I am wondering about the performance of one vs the other? Which is faster in terms of 1. runtime speed, and 2. memory consumption? I have a Mac OS application that is always running in the background, and I'd like my app to index some text files that are being saved, and then save the result... I could write a regex method in my app in Obj-C, or I could potentially write a separate app using Perl or Python (just a beginner in Python). (Thanks, I got some good info from some of you already. Boo to those who downvoted; I am here to learn, and I might have some stupid questions time to time - part of the deal.)
0
python,objective-c,regex
2012-02-28T17:30:00.000
0
9,486,827
If you’re looking for raw speed, neither of those two would be a very good choice. For execution speed, you’d choose Perl. For how quickly you could code it up, either Python or Perl alike would easily beat the time to write it in Objective C, just as both would easily beat a Java solution. High-level languages that take less time to code up are always a win if all you’re measuring is time-to-solution compared with solutions that take many more lines of code. As far as actual run-time performance goes, Perl’s regexes are written in very tightly coded C, and are known to be the fastest and most flexible regexes available. The regex optimizer does a lot of very clever things to the compiled regex program, such as applying an Aho–Corasick start-point optimization for finding the start of an alternation trie, running in O(1) time. Nobody else does that. Heck, I don’t think anybody else but Perl even bothers to optimize alternations into tries, which is the thing that takes you from O(n) to O(1), because the compiler spent more time doing something smart so that the interpreter runs much faster. Perl regexes also offer substantial improvements in debugging and profiling. They’re also more flexible than Python’s, but the debugging alone is enough to tip the balance. The only exception on performance matters is with certain pathological patterns that degenerate when run under any recursive backtracker, whether Perl’s, Java’s, or Python’s. Those can be addressed by using the highly recommended RE2 library, written by Russ Cox, as a replacement plugin. I know it’s available as a transparent replacement regex engine for Perl, and I’m pretty sure I remember seeing that it was also available for Python, too. On the other hand, if you really want to use Python but just want a more expressive and robust regex library, particularly one that is well-behaved on Unicode, then you want to use Matthew Barnett’s regex module, available for both Python2 and Python3. Besides conforming to tr18’s level-1 compliance requirements (that’s the standards doc on Unicode regexes), it also has all kinds of other clever features, some of which are completely sui generis. If you’re a regex connoisseur, it’s very much worth checking out.
0
601
true
0
1
RegEx performance in Objective-C vs Python
9,487,143
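The "pathological patterns" caveat is easy to reproduce with any recursive backtracker, including Python's re; nested quantifiers like this are the classic case an RE2-style engine avoids:

```python
import re

# (a+)+ against a near-match forces exponential backtracking in a
# recursive engine; RE2-style engines run the same match in linear time.
pattern = re.compile(r"(a+)+$")

assert pattern.match("aaaa") is not None      # plain match: instant
assert pattern.match("aaaab") is None         # small failure: still fine
# pattern.match("a" * 30 + "b")  # uncomment to watch it crawl for a long time
```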
1
2
0
3
1
0
1.2
0
I want to send lots of numbers via zeromq but converting them to str is inefficient. What is the best way to send numbers via zmq?
0
python,zeromq,pyzmq
2012-02-28T20:52:00.000
0
9,489,560
You state that converting numbers to str is inefficient. And yet, unless you have a truly exotic network, that is exactly what must occur no matter what solution is chosen, because all networks in wide use today are byte-based. Of course, some ways of converting numbers to byte-strings are faster than others. Performing the conversion in C code will likely be faster than in Python code, but consider also whether it is acceptable to exclude "long" (bignum) integers. If excluding them is not acceptable, the str function may be as good as it gets. The struct and cpickle modules may perform better than str if excluding long integers is acceptable.
0
2,967
true
0
1
send a number by zeromq pyzmq
9,491,177
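The struct approach mentioned in the answer, sketched for a batch of 64-bit integers (a zmq socket.send call would take the payload bytes as-is):

```python
import struct

numbers = [1, 42, 7_000_000_000, -3]

# Network byte order, four signed 64-bit ints: a fixed 32 bytes on the wire,
# versus a variable-length decimal string per number.
payload = struct.pack("!4q", *numbers)
assert len(payload) == 32

# Receiving side: socket.recv() would hand back the same bytes.
decoded = list(struct.unpack("!4q", payload))
print(decoded)
```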
1
1
0
3
0
1
0.53705
0
I would like to contribute to an open source Python project hosted on github. But the code base comes as a module that needs to be installed using pip or something like it. Which means if I do "git clone" and "setup.py install", the code will be placed after installation into another (non-repo) folder. The question is which folder I should then edit/commit code in, and what's the standard solution for such a multi-folder issue.
0
python,open-source,github
2012-02-29T12:44:00.000
0
9,499,396
You'd normally do setup.py develop or pip install -e . So you don't want the installer to copy it anywhere else. Using this mode, a special link file is created in your site-packages directory. This link points back to the current folder or 'root package'. Any changes you make to the software here will be reflected immediately without having to do an install again.
0
119
false
0
1
Contributing to Python: edit git/cloned code or installed code?
9,499,693
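The editable-install workflow from the answer looks like this (the repository URL is a placeholder):

```shell
git clone https://github.com/someorg/someproject.git   # hypothetical repo
cd someproject
pip install -e .          # or the older form: python setup.py develop
# A link file in site-packages now points back at this checkout, so edits
# here take effect immediately -- edit and commit in the clone itself.
```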
1
2
0
1
2
0
0.099668
1
I want to display all the Internet History Information of a system using Python. The index.dat file holds all the history information of user, but it's encoded. How can I decode it? [I have heard about WinInet Method INTERNET_CACHE_ENTRY_INFO. It provides information about websites visited, hit counts, etc.] Are there any libraries available in Python for achieving this? If not, are there any alternatives available?
0
python,internet-explorer,browser-cache,browser-history
2012-02-29T21:27:00.000
0
9,506,894
If you wanted to do this for Firefox history, it's an SQLite database in the file places.sqlite in the user's Firefox profile. It can be opened with python's sqlite3 library. Now if you only care about Internet Explorer (as implied by your mention of index.dat), well, I don't know about that.
0
5,552
false
0
1
How do I Retrieve and Display the Internet History Information in Python?
9,508,666
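For the Firefox side, the table of interest inside places.sqlite is moz_places; a sketch with the standard library (column names reflect Firefox's schema and may change between versions):

```python
import sqlite3

def firefox_history(db_path, limit=10):
    """Read recent history rows from a copy of places.sqlite.

    Copy the file first: Firefox keeps the live database locked.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT url, title, visit_count FROM moz_places "
            "ORDER BY last_visit_date DESC LIMIT ?", (limit,)).fetchall()
    finally:
        conn.close()

# Profile folder name varies per machine, e.g.:
# firefox_history("/home/user/.mozilla/firefox/<profile>/places.sqlite")
```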
2
3
0
2
1
1
0.132549
0
What's the best way to call a C++ function from a shared object in python? Can I solve this problem without an additional python extension?
0
c++,python,shared-objects
2012-03-01T12:41:00.000
0
9,516,431
If you have the C++ source code, I would say boost python is the best way because it's very easy to get this up and running and it's flexible. If you don't have C++ source then checkout ctypes.
0
459
false
0
1
The best way to call C++ function from shared object in python
9,516,707
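A minimal ctypes sketch, using libm as a stand-in for your own shared object; note that ctypes binds plain C symbols, so C++ functions must be exported with extern "C" to avoid name mangling:

```python
import ctypes
import ctypes.util

# Load a shared object and declare one function's signature so ctypes
# converts Python floats to/from C doubles correctly.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))
```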
2
3
0
1
1
1
0.066568
0
What's the best way to call a C++ function from a shared object in python? Can I solve this problem without an additional python extension?
0
c++,python,shared-objects
2012-03-01T12:41:00.000
0
9,516,431
Another way is to use swig (http://www.swig.org/) to generate a python module wrapping the C++ code.
0
459
false
0
1
The best way to call C++ function from shared object in python
9,516,904
1
6
0
1
24
0
0.033321
0
I am working with an ARM Cortex M3 on which I need to port Python (without operating system). What would be my best approach? I just need the core Python and basic I/O.
0
python,embedded
2012-03-01T15:52:00.000
1
9,519,346
FYI, I just ported CPython 2.7.x to a non-POSIX OS. That was easy. You need to write pyconfig.h the right way and remove most of the unused modules. Disable unused features. Then fix compile and link errors. Then it just works, after fixing some simple problems at runtime. If you are missing some POSIX header, write one yourself. Implement all the POSIX functions that are needed, such as file I/O. It took 2-3 weeks in my case, although I heavily customized the Python core. Unfortunately I cannot open-source it :(. After that, I think Python can be ported easily to any platform that has enough RAM.
0
7,622
false
0
1
Porting Python to an embedded system
24,161,759
1
3
0
1
18
0
0.066568
0
I've tried "nosetests p1.py > text.txt" and it is not working. What is the proper way to pipe this console output?
0
python,nosetests
2012-03-01T16:11:00.000
0
9,519,717
Use the -s parameter, which stops nose from capturing stdout. Note also that nose writes its test report to stderr, not stdout, so a plain > redirect misses it.
0
5,685
false
0
1
how do i redirect the output of nosetests to a textfile?
9,519,888
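A likely reason the original redirect captured nothing: test runners in the unittest family write their report to stderr, so both streams have to be redirected:

```shell
# Capture the test report (stderr) together with stdout:
nosetests p1.py > text.txt 2>&1

# Or keep the report on screen while stopping nose from swallowing print() output:
nosetests -s p1.py
```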
1
2
0
0
3
0
0
1
I am working on my senior project at university and I have a question. My advisor and other workers don't know much more on the matter so I thought I would toss it out to SO and see if you could help. We want to make a website that will be hosted on a server that we are configuring. That website will have buttons on it, and when visitors of that website click a certain button we want to register an event on the server. We plan on doing this with PHP. Once that event is registered (this is where we get lost), we want to communicate with a serial device on a remote computer. We are confident we can set up the PHP event/listener for the button press, but once we have that registered, how do we signal to the remote computer(connected via T1 line/routers) to communicate with the serial device? What is this sequence of events referred to as? The hardest thing for us (when researching it) is that we are not certain what to search for! We have a feeling that a python script could be running on the server, get signals from the PHP listener, and then communicate with the remote PC. The remote PC could also be running a python script that then will communicate with our serial device. Again, most of this makes sense, but we are not clear on how we communicate between Python and PHP on the web server (or if this is possible). If any one could give me some advice on what to search for, or similar projects I would really appreciate it. Thanks,
0
php,python,web
2012-03-01T20:02:00.000
0
9,523,147
You can set up a web server also on the remote computer, perhaps using the same software as on the public server, so you do not need to learn another technology. The public server can make HTTP requests and the remote server responds by communicating with the serial device.
0
232
false
1
1
Website to computer communications
9,523,459
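A sketch of the remote machine's half using only the standard library; the serial write is stubbed out (the real version would use something like pyserial, and the device path shown is hypothetical):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class SerialBridge(BaseHTTPRequestHandler):
    """Each GET from the public web server triggers the serial device."""

    def do_GET(self):
        # Real version would be roughly:
        #   serial.Serial("/dev/ttyS0", 9600).write(b"button pressed")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"signalled serial device\n")

def run(port=8000):
    HTTPServer(("", port), SerialBridge).serve_forever()
```

The PHP side then only needs an ordinary HTTP request (file_get_contents or curl) to the remote host's port when the button event fires.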
1
1
0
0
0
0
0
1
I've been reading about beautifulSoup, http headers, authentication, cookies and something about mechanize. I'm trying to scrape my favorite art websites with python. Like deviant art which I found a scraper for. Right now I'm trying to login but the basic authentication code examples I try don't work. So question, How do I find out what type of authentication a site uses so that I know I'm trying to login the correct way? Including things like valid user-agents when they try to block bots. Bear with my ignorance as I'm new to HTTP, python, and scraping.
0
python,http,authentication,screen-scraping,web-scraping
2012-03-02T05:23:00.000
0
9,528,395
It's very unlikely that any of the sites you are interested in use basic auth. You will need a library like mechanize that manages cookies and you will need to submit the login information to the site's login page.
0
829
false
1
1
how to find the authentication used on a website
9,542,705
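The shape of a form-based login, sketched with the standard library instead of mechanize; the URL, form field names, and user-agent string are all hypothetical, so inspect the site's actual login form to find the real ones:

```python
import http.cookiejar
import urllib.parse
import urllib.request

# A cookie jar is what keeps you "logged in" across subsequent requests.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Some sites reject the default Python user agent outright.
opener.addheaders = [("User-Agent", "Mozilla/5.0")]

form = urllib.parse.urlencode({"username": "me", "password": "secret"}).encode()
# response = opener.open("https://example.com/login", form)  # network call
```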
1
2
0
0
2
1
0
0
I'm trying to work out which language to work with in VS2010: C# or Python. I understand that there are better IDEs for Python out there, but I like the VS IDE environment. If IronPython can do everything C# and VB can do in VS2010, I'll be happy. But can it?
0
python,visual-studio-2010
2012-03-02T13:38:00.000
0
9,534,242
If Iron python can do everything C# and VB can do in VS2010 i'll be happy. But can it? No, C# and VB, and to a lesser extent F#, are the primary languages for Visual Studio. Microsoft support for IronPython has been dropped and it seems to be stagnating. If you are just writing console based code then you may be OK with IronPython, but if you are doing any GUI work then I would not recommend it. Even then, the fact that the language no longer has MS backing leaves me with a bad feeling. I would not invest time and effort into writing IronPython code because I suspect that it will become a dead end.
0
1,931
false
0
1
Python in VS2010
9,534,308
1
1
0
1
1
1
0.197375
0
I have a dll built in c++, under VS2010, and I am calling it from a python project. I had an error, inside the dll, and I would have liked to be able to debug using VS tools, step into the solution until I reach the task that "read an invalid memory location". The debug / stepping into functions didn't step into the function code inside the dll. I tried to attach the debugger (and run the python code from command line/ stop at a raw_input that gave me the pid, then attach the debugger). Same thing happened. I hit the breakpoints inside the python code, but none inside the dll. I eventually found my error, after much banging my head against my monitor, using old-style trace inside the dll. But there has to be a way to be able to debug an existing/ open project inside VS... I am going to run into this again, so I hope to learn something now, and avoid damage to my monitor in the future. :) Note: the c++ dll and the pdb file are located both in the same directory as the python file, they are of course automatically built into the Debug folder, and they are also in a folder located into the system path. Any possible DEBUG symbols are enabled. I am using python 2.7.
0
c++,python,visual-studio-2010,debugging,dll
2012-03-02T20:54:00.000
0
9,540,220
You need to have the .pdb file in your bin directory if you want to be able to step into and debug a dll, otherwise you will not have access to any of the debugging symbols. This .pdb allows visual studio to read the .dll file and step into its method calls.
0
1,816
false
0
1
debugging a c++ dll in VS2010, from python
9,540,284
2
5
0
-1
4
1
-0.039979
0
I'm trying to send a python script I wrote on my Mac to my friends. Problem is, I don't want to send them the code that they can edit. How can I have my script change from an editable text file, to a program that you click to run?
0
python,macos,text,compiler-construction
2012-03-03T02:18:00.000
1
9,542,814
You could try py2exe (http://www.py2exe.org/): since it bundles your code into an exe file, they should have a hell of a time trying to decompile it.
0
4,391
false
0
1
"Compiling" python script
9,542,902
2
5
0
0
4
1
0
0
I'm trying to send a python script I wrote on my Mac to my friends. Problem is, I don't want to send them the code that they can edit. How can I have my script change from an editable text file, to a program that you click to run?
0
python,macos,text,compiler-construction
2012-03-03T02:18:00.000
1
9,542,814
If your friends are on Windows you could use py2exe, but if they're on Mac I'm not sure there's an equivalent. Either way, compiling like that breaks cross-platform compatibility, which is the whole point of an interpreted language, really... Python just isn't really set up to hide code like that; it's against its philosophy as far as I can tell.
0
4,391
false
0
1
"Compiling" python script
9,543,052
1
1
0
0
0
0
0
0
I am trying to think through a script that I need to create. I am most likely going to be using php unless there would be a better language to do this with e.g. python or ror. I only know a little bit of php so this will definitely be a learning experience for me and starting fresh with a different language wouldn't be a problem if it would help in the long run. What I am wanting to do is create a website where people can sign up for WordPress hosting. Right now I have the site set up with WHMCS. If I just leave it how it is I will have manually go in and install WordPress every time a customer signs up. I would like an automated solution that creates a database and installs WordPress as soon as the customer signs up. With WHMCS I can run a script as soon as a customer signs up and so far I understand how to create a database, download WordPress, and install WordPress. The only thing is I can't figure out how to make it work with more than one customer because with each customer there will be a new database. What I need the script to do is when customer A signs up, the script will create a database name "customer_A" (that name is just an example) and when, lets say my second customer signs up, the script will create a database named "customer_B". Is there a possible solution to this? Thanks for the help
1
php,python,wordpress
2012-03-03T03:30:00.000
0
9,543,171
I did this yesterday. My process was to add a row to a master accounts table, get the auto-increment id, and use that along with the company name to create the db name. So in my case the dbs are Root_1companyname1, Root_2companyname2, ... The Root_ prefix is optional, of course. Ask if you have any questions.
0
90
false
0
1
Automate database creation with incremental name?
9,543,194
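The id-plus-name scheme from the answer can be sketched like this (sqlite stands in for MySQL here, and the helper name is made up):

```python
import re
import sqlite3

def provision_db_name(conn, company):
    """Insert the account row and derive a unique database name from its id."""
    cur = conn.execute("INSERT INTO accounts (company) VALUES (?)", (company,))
    slug = re.sub(r"[^a-z0-9]", "", company.lower())  # sanitise for a db name
    return "Root_%d%s" % (cur.lastrowid, slug)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, company TEXT)")
print(provision_db_name(conn, "Company A"))   # Root_1companya
print(provision_db_name(conn, "Company B"))   # Root_2companyb
```

The auto-increment id guarantees uniqueness even when two customers pick the same company name.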
2
3
0
1
1
0
0.066568
0
I have my python script set to run from cron on Ubuntu Server. However, it might take longer to finish than the interval before the next cron event tries to start it. I would like to detect such a case from the script itself and, if an instance is already running, gracefully terminate the new one from the python script.
0
python,ubuntu-10.04
2012-03-04T14:39:00.000
1
9,555,742
Save your pid to a file; if the file already exists, check that the process that left its PID is still alive. (This is safer than trying to ensure you always remove the file: You can't). The full process goes like this: Check if the checkpoint file exists. If it does not, write your PID into the file and go ahead with the computation. If the file exists: Read the PID and check if the process is, in fact, still alive. The best way to do that is with "kill -0" (from python: os.kill), which doesn't bother the running process but fails if it does not exist. If the process is still running, exit. Otherwise, write your PID to the file etc. There's a small chance of a race condition, but if your process is getting restarted at infrequent intervals, that should be entirely harmless: Your process could always quit in favor of a running process that exits a second later, so what does it matter if the running process manages to quit first?
0
3,425
false
0
1
How to determine if my python script is running?
9,555,991
2
3
0
0
1
0
0
0
I have my Python script set to run from cron on Ubuntu Server. However, it might still be running when the next cron event tries to start it. I would like to detect that case from the script itself and, if an instance is already running, gracefully terminate the new one.
0
python,ubuntu-10.04
2012-03-04T14:39:00.000
1
9,555,742
There are two obvious solutions: Some kind of lock file, which it checks. If the lock file exists, then don't start, otherwise create it. (Or more aptly, in true Python 'ask for forgiveness, not permission' style, try to make it and catch the error if it exists, avoiding a race condition.) You need to be careful to ensure this gets cleaned up when the script ends, however, even on errors, otherwise it could block future runs. Traditionally this is a .pid file which contains the process id of the running process. Use ps to check for the running process. With this solution it is harder to avoid the race condition, however.
0
3,425
false
0
1
How to determine if my python script is running?
9,555,781
1
1
0
0
0
1
0
0
I am trying to figure out the most secure and flexible solution for storing, in a config file, credentials for database connections and other private info. This is inside a Python module for logging users' activity history in the system into different handlers (mongodb, mysqldb, files, etc.). This logging module is attached to a handler, and it is there that I need to load the config file for each handler, i.e. database, user, pass, table, etc. After some research on the web and Stack Overflow, I mainly saw comparisons of the security risks of JSON vs cPickle concerning the eval method and type restrictions, rather than the config-file storage issue. I was wondering if storing credentials in JSON is a good idea, due to the security risks involved in having a .json config file on the server (from which the logging handler will read the data). I know that this .json file could be retrieved by an HTTP request. If the parameters are stored in a Python object inside a .py file, I guess there is more security because any request for this file will be interpreted first by the server, but I am losing the flexibility of modularization and easy modification of this data. What would you suggest for this kind of security issue when storing this kind of config file on the server, accessed by some Python class? Thanks in advance, Luchux.
0
python,json,security,configuration-files,pickle
2012-03-05T13:37:00.000
0
9,567,579
I'd think about encrypting the credentials file. The process that uses it will need a key/password to decrypt it, and you can store that somewhere else-- or even enter it interactively on server start-up. That way you don't have a single point of failure (though of course a determined intruder can eventually put the pieces together). (Naturally you should also try to secure the server so that your credentials can't just be fetched by http request)
0
427
false
0
1
Security issues storing config file in json/CPickle
9,568,058
1
3
0
1
23
0
0.066568
1
I'm using boto to spawn a new EC2 instance based on an AMI. The ami.run method has a number of parameters, but none for "name" - maybe it's called something different?
0
python,amazon-ec2,amazon-web-services,boto
2012-03-05T22:39:00.000
0
9,575,148
In EC2 there's no API to change the actual name of the machine. You basically have two options. You can pass the desired name of the computer in the user-data and, when the server starts, run a script that will change the name of the computer. Or you can use an EC2 tag to name the server: ec2-create-tags <instance-id> --tag:Name=<computer name>. The downside to this solution is that the server won't actually update to this name; the tag is strictly for you, or for when you're querying the list of servers in AWS. Generally speaking, if you're at the point where you want your server to configure itself when starting up, I've found that renaming your computer in EC2 just causes more trouble than it's worth. I suggest not using names if you don't have to. Using tags or ELB instances is the better way to go.
0
9,225
false
0
1
With boto, how can I name a newly spawned EC2 instance?
9,575,281
2
3
0
2
1
0
1.2
0
I'm using the Pyramid web framework. I was confused by the relationship between cookies and sessions. After looking it up on Wikipedia, I learned that a session is an abstract concept and a cookie is just one kind of approach (on the client side). So, my question is: what's the most common implementation (on both the client and server)? Can somebody give some example code (maybe just a description)? (I would rather not use the session support provided by Pyramid, in order to learn.)
0
python,session,cookies,pyramid
2012-03-06T00:35:00.000
0
9,576,263
In general, the cookie stored with the client is just a long, hard-to-guess hash code string that can be used as a key into a database. On the server side, you have a table mapping those session hashes to primary keys (a session hash should never be a primary key) and expiration timestamps. So when you get a request, first thing you do is look for the cookie. If there isn't one, create a session entry (cookie + expiration timestamp) in the database table. If there is one, look it up and make sure it hasn't expired; if it has, make a new one. In either case, if you made a new cookie, you might want to pass that fact down to later code so it knows if it needs to ask for a login or something. If you didn't need to make a new cookie, reset the expiration timestamp so you don't expire the session too soon. While handling the view code and generating a response, you can use that session primary key to index into other tables that have data associated with the session. Finally, in the response sent back to the client, set the cookie to the session key hash. If someone has cookies disabled, then their session cookie will always be new, and any session-based features won't work.
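A rough sketch of the scheme described above, with a dict standing in for the server-side session table (all names and the TTL value are illustrative):

```python
import os
import hashlib
import time

SESSION_TTL = 3600  # seconds until an idle session expires; illustrative value

# In a real app this would be a database table keyed by the session hash;
# a dict stands in for it here.
_sessions = {}  # token -> {"expires": timestamp, "data": {...}}

def new_session():
    # A long, hard-to-guess token: random bytes hashed to a hex string.
    token = hashlib.sha256(os.urandom(32)).hexdigest()
    _sessions[token] = {"expires": time.time() + SESSION_TTL, "data": {}}
    return token

def get_session(token):
    """Look up the cookie value; return the session data, or None if missing/expired."""
    entry = _sessions.get(token)
    if entry is None or entry["expires"] < time.time():
        _sessions.pop(token, None)
        return None
    entry["expires"] = time.time() + SESSION_TTL  # reset expiration on each request
    return entry["data"]
```

The request handler would read the cookie, call `get_session`, and call `new_session` (setting a fresh cookie in the response) when it gets None back.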
0
365
true
1
1
Is cookie a common and secure implementation of session?
9,576,386
2
3
0
1
1
0
0.066568
0
I'm using the Pyramid web framework. I was confused by the relationship between cookies and sessions. After looking it up on Wikipedia, I learned that a session is an abstract concept and a cookie is just one kind of approach (on the client side). So, my question is: what's the most common implementation (on both the client and server)? Can somebody give some example code (maybe just a description)? (I would rather not use the session support provided by Pyramid, in order to learn.)
0
python,session,cookies,pyramid
2012-03-06T00:35:00.000
0
9,576,263
A session is (usually) a cookie that has a unique value. This value maps to a value in a database (or held in memory) that then tells you which session to load. PHP has an alternate method where it appends a unique value to the end of every URL (if you've ever seen PHPSESSID in a URL, you now know why), but that has security implications (in theory). Of course, since cookies are sent back and forth with every request, unless you're talking over HTTPS you are broadcasting, to anyone on the same wireless network, the only way to reliably tell that the client you are talking to now is the same one that logged in ten seconds ago. See programs like Firesheep for reasons why switching to HTTPS is a good idea. Finally, if you do want to build your own, I was given some advice on the matter by a university professor: give out a new token on every page load and invalidate all of a user's tokens if an invalid token is used. This just means that if an attacker does get a token and uses it while it is still valid, then when the victim clicks a link both parties get logged out.
0
365
false
1
1
Is cookie a common and secure implementation of session?
9,576,390
1
1
1
0
4
0
0
0
I have some C++ code that delivers events to Python objects. Observers are kept as weak_ptrs, so they don't have to deregister. This works in C++, but bridging weak pointers and Python weak references is troublesome (I also want Python event handlers not being kept alive by subscriptions, same as in C++ code). In order to have a live observer, something needs to have a shared pointer to it while the object is alive, so it boils down to having an observer in Python land control the lifetime of a C++ observer object. The approaches I've come up with so far involve a fair amount of boilerplate and intermediate objects (e.g. creating another PyTypeObject for a type that keeps a C++ observer and a weak reference to the Python observer and setting it as a member of Python observer, so it dies with it). The question is, is there any obvious way to do it?
0
c++,python,weak-references
2012-03-06T02:59:00.000
0
9,577,314
I would write a python wrapper over the C++ module and dispatch to python observers in the python wrapper. Would that be enough? When you mention that something needs to have a shared pointer, would it be enough if that shared pointer is on the stack until given observer returns?
0
107
false
0
1
Track the lifetime of a CPython object from C extension
9,737,868
1
2
0
0
0
0
0
0
I hope you guys can spare a moment with some ideas on how to develop my idea. I have an Asterisk-based telephone switch . When an incoming call is arriving, I can make sure the server runs an external script of any language. Here comes my development work. I would like to notify a group of listening clients about the call, and probably open a browser page on their computer. What kind of approach would you take for this sort of server-based push notification? (with no iPhone involved) I am open to any language. Thanks
0
python,notifications,push-notification,server-push
2012-03-07T02:39:00.000
0
9,595,076
Maybe have a look at www.pubnub.com. It's commercial, but it lets you send 5 million messages a month for free. Essentially it lets you create a named channel and have any number of clients connect to it and send messages back and forth. Using one of these services would of course require you to write a client to distribute to your users (in your language of choice) and ties you in somewhat (this shouldn't really be a problem, as you could swap in some other solution later if they go under or whatever). The upside is very good cross-platform support and a very clean API, with the infrastructure taken care of for you (for example, clients can still connect to the channel even if your Asterisk box is down). (And no, I don't work for PubNub! It just seems like a no-brainer to use it with the 5 million free messages deal!)
0
335
false
1
1
Cross Platform Event Notification
9,595,261
1
1
0
0
0
0
1.2
0
I am using smtplib sendmail and \n (line feed) is being added where there was just \r (carriage return). This corrupts the file for use with the UNIX tnef utility. How can I keep the line feed from being added? Thanks
0
python,email,carriage-return,smtplib
2012-03-07T22:37:00.000
0
9,610,221
Email servers are free to change line endings IIRC. They could be various platforms. If you are transmitting an attachment, use a suitable encoding such as Base64.
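A minimal sketch of the Base64 approach using the stdlib email package (the addresses, filename, and helper name are illustrative); smtplib's sendmail can then transmit msg.as_string() without line-ending rewrites touching the payload:

```python
# Build a message whose attachment is base64-encoded, so servers that
# normalize bare CR/LF line endings cannot corrupt the binary payload.
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email import encoders

def build_message(sender, recipient, subject, payload_bytes, filename):
    msg = MIMEMultipart()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject

    part = MIMEBase("application", "octet-stream")
    part.set_payload(payload_bytes)
    encoders.encode_base64(part)  # base64 text survives line-ending rewrites
    part.add_header("Content-Disposition", "attachment", filename=filename)
    msg.attach(part)
    return msg
```

The receiving side (e.g. the UNIX tnef utility) then sees the exact original bytes after decoding.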
0
433
true
0
1
Python's smtplib sendmail is inserting line feed
9,610,703
1
3
0
0
0
0
0
0
I have data across several computers stored in folders. Many of the folders contain 40-100 GB of files ranging in size from 500 KB to 125 MB. There are some 4 TB of files which I need to archive, and I need to build a unified metadata system depending on metadata stored on each computer. All systems run Linux, and we want to use Python. What is the best way to copy the files and archive them? We already have programs to analyze files and fill the metadata tables, and they are all running in Python. What we need to figure out is a way to successfully copy files without data loss and ensure that the files have been copied successfully. We have considered using rsync and unison via subprocess.Popen, but these are essentially sync utilities. This is essentially a copy-once operation, but it must copy properly. Once the files are copied, the users will move to the new storage system. My worries are: 1) when the files are copied there should not be any corruption; 2) the file copying must be efficient, though there are no hard speed requirements. The LAN is 10/100 with Gigabit ports. Are there any scripts that could be incorporated, or any suggestions? All computers will have ssh-keygen enabled so we can do passwordless connections. The directory structures will be maintained on the new server, very similar to those of the old computers.
0
python,file,rsync,unison
2012-03-08T13:49:00.000
1
9,618,641
I think rsync is the solution. If you are concerned about data integrity, look at the explanation of the "--checksum" parameter in the man page. Other arguments that might come in handy are "--delete" and "--archive". Make sure the exit code of the command is checked properly.
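A small sketch of driving rsync from Python with the flags mentioned above and a proper exit-code check (the function names are illustrative):

```python
import subprocess

def rsync_cmd(src, dest, checksum=True, delete=False):
    """Assemble an rsync invocation: --archive preserves permissions/times,
    --checksum verifies file content rather than trusting size+mtime."""
    cmd = ["rsync", "--archive"]
    if checksum:
        cmd.append("--checksum")
    if delete:
        cmd.append("--delete")  # remove extraneous files on the destination
    cmd += [src, dest]
    return cmd

def copy_tree(src, dest):
    # rsync returns 0 on success; anything else means the transfer failed.
    result = subprocess.call(rsync_cmd(src, dest))
    if result != 0:
        raise RuntimeError("rsync failed with exit code %d" % result)
```

With passwordless SSH set up, `src` or `dest` can be a `host:/path` spec and the same code covers remote copies.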
0
836
false
0
1
What is the best utility/library/strategy with Python to copy files across multiple computers?
9,619,361
1
2
0
5
5
1
1.2
0
I have an embedded device with Python installed on in. Is it possible to explicitly access registers in pure Python, or should I write C extensions for my Python code?
0
python,c,embedded
2012-03-08T14:28:00.000
0
9,619,207
It seems that you can't access the low level registers. I recommend just writing a short C extension code to allow Python to access the registers you need.
0
548
true
0
1
Accessing low-level registers of an embedded device using Python
9,619,591
1
2
0
0
2
1
0
0
I am new to this whole python deal, and admit that I am half lost - don't know whether I am coming or going. So, here's the question and I hope someone can assist me. I am running a RedHat system and by default, it has python 2.4 installed. I have a python script that gives me an error when attempting to import json. I have checked my phpinfo and it shows that I have json version 1.2.1 (or something or other) - so why isn't Python recognizing that this json does exist? Is there a file that I need to edit to manually enter or edit where python looks for the json at, and if so, where? I even tried installing simplejson and also python 3 - nothing has worked so far, and I have run out of hair to pull out. Any help would be greatly appreciated - thanks in advance.
0
python,linux,json
2012-03-08T20:38:00.000
0
9,624,584
Python needs the Python module for json, which is not the same as the PHP module for json. There are several to pick from; on Python 2.4 you can use python-cjson or simplejson (the stdlib json module only arrived in Python 2.6). Make sure that such a module is installed. You can ask rpm which packages are installed like this: rpm -qa | grep json
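A common pattern for this situation: since json entered the stdlib in Python 2.6, code targeting 2.4 falls back to simplejson, which exposes the same API:

```python
# json entered the standard library in Python 2.6; on 2.4/2.5 the same API
# is provided by the third-party simplejson package, so fall back to it.
try:
    import json
except ImportError:
    import simplejson as json

data = json.loads('{"answer": 42, "ok": true}')
```

Everything after the import works identically whichever module was loaded.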
0
7,168
false
0
1
Python - import json returning module not found
9,624,622
2
3
0
0
1
0
0
0
In my game, there are ActionFactory (makes AbstractActions), AbstractAction (actions that could exist), and PotentialAction (actions that a being could do, which are associated with a specific being) classes. I need a name for a class that represents an actual, chosen action which was done by a specific being, has specific targets, and possibly arguments.
0
python,naming-conventions,variable-names
2012-03-09T02:52:00.000
0
9,628,303
Would CompletedAction, FinishedAction, ClosedAction, or PastAction be of any use?
0
84
false
0
1
What is a good name for chosen actions, possibly in the past?
9,628,338
2
3
0
0
1
0
1.2
0
In my game, there are ActionFactory (makes AbstractActions), AbstractAction (actions that could exist), and PotentialAction (actions that a being could do, which are associated with a specific being) classes. I need a name for a class that represents an actual, chosen action which was done by a specific being, has specific targets, and possibly arguments.
0
python,naming-conventions,variable-names
2012-03-09T02:52:00.000
0
9,628,303
I went with RealAction, mainly because of API consistency - all actual, specific real-world classes are prefixed with Real (and have an associated Potential class)
0
84
true
0
1
What is a good name for chosen actions, possibly in the past?
9,710,839
1
2
0
2
0
1
1.2
0
I have a custom codechecker in python, Also there is a bigger project running in PHP, which stores users code in MySQL database. I am new to python, so I'm not sure how I can pass the code from PHP to Python. Do I have to store the file to the filesystem to pass it to Python? (In that case too many files might be created, and their cleanup after execution has to be taken care)
0
php,python
2012-03-09T04:21:00.000
0
9,628,893
To expand on Brad's answer, there's several options, each with pros & cons... Pipes (ie: STDIN/STDOUT): proc_open() Shared memory: shmop_open() AF_UNIX family sockets: socket_bind() You'll probably want to use the first option but read up on the others before making a commitment.
0
109
true
0
1
Passing code to python from PHP
9,629,033
1
2
0
1
2
0
1.2
0
I am running my Test Harness which is written in Python. Before running a test through this test harness, I am exporting some environment variables through a shell script which calls the test harness after exporting the variables. When the harness comes in picture, it checks if the variables are in the environment and does operations depending on the values in the env variables. However after the test is executed, I think the environment variables values aren't getting cleared as the next time, it picks up those values even if those aren't set through the shell script. If they are set explicitly, the harness picks up the new values but if we clear it next time, it again picks up the values set in 1st run. I tried clearing the variables using "del os.environ['var']" command after every test execution but that didn't solve the issue. Does anybody know why are these values getting preserved? On the shell these variables are not set as seen in the 'env' unix command. It is just in the test harness that it shows the values. None of the env variables store their values in any text files.
0
python,environment-variables
2012-03-09T13:00:00.000
1
9,634,473
A subshell can change variables it inherited from the parent, but the changes made by the child don't affect the parent. When a new subshell is started, the variable exported from the parent is visible in it. del os.environ['var'] unsets the variable only in that child process; the value of the variable in the parent stays the same.
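A small demonstration of this isolation (the variable name is illustrative): the child inherits a copy of the environment, so deleting the variable there leaves the parent untouched:

```python
import os
import subprocess
import sys

# Set a variable in this (parent) process; the name is illustrative.
os.environ["DEMO_VAR"] = "parent-value"

# The child process inherits a *copy* of the environment; changes it makes,
# including deleting the variable, never propagate back to the parent.
child_code = "import os; del os.environ['DEMO_VAR']"
rc = subprocess.call([sys.executable, "-c", child_code])

print(os.environ["DEMO_VAR"])  # still "parent-value" in the parent
```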
0
2,224
true
0
1
Environment variables getting preserved in Python script even after exiting
9,634,524
2
2
0
0
0
0
1.2
0
I know Perl and Python are tested solutions for this kind of log parsing and data mining. Does anybody have experience parsing syslog with Java? I have to create a Java daemon anyway to load the parsed output into the DB, so I was thinking why not go all the way; Python might be useful since I will be running it in different environments.
0
java,python,parsing,logging
2012-03-09T22:42:00.000
0
9,641,974
I recently started writing Python scripts, and I recently wrote a Java GC log parser to print the timestamp when a GC happened, counts, etc., and I found Python really easy to write it in. What kind of fields are you interested in while parsing the syslogs? I think if you know what you are looking for in the logs (patterns etc.) then it becomes easy to write a script which does that for you. Ankit.
0
838
true
1
1
Log parser solutions python/perl vs Java
9,642,425
2
2
0
1
0
0
0.099668
0
I know Perl and Python are tested solutions for this kind of log parsing and data mining. Does anybody have experience parsing syslog with Java? I have to create a Java daemon anyway to load the parsed output into the DB, so I was thinking why not go all the way; Python might be useful since I will be running it in different environments.
0
java,python,parsing,logging
2012-03-09T22:42:00.000
0
9,641,974
I translated a Java GC log parser/analyzer from Perl to Java. In Java the code had more lines and was obviously more verbose, but the execution was at least 5 times faster.
0
838
false
1
1
Log parser solutions python/perl vs Java
11,693,757
1
2
0
0
0
0
0
1
I want to take results from a web page, sent from dom as json through ajax, then send this data to a python script, run it, then return the new results back as json. I was told a php script running gearman would be a good bet, but I'm still not sure how that would work.
0
php,javascript,python,gearman
2012-03-09T23:18:00.000
0
9,642,259
Put your Python script in your CGI directory and use the cgi and json modules in your script to read AJAX from post/get params. Of course you can do a system call from PHP to run a Python script, but I can't think of a good reason why you would.
0
703
false
1
1
How can I run a python script through a webserver and return results to javascript?
9,642,671
1
4
0
13
67
1
1
0
What makes parsing a text file in 'r' mode more convenient than parsing it in 'rb' mode? Especially when the text file in question may contain non-ASCII characters.
0
python,file-io,text-parsing
2012-03-10T05:13:00.000
0
9,644,110
The difference lies in how the end-of-line (EOL) is handled. Different operating systems use different characters to mark EOL - \n in Unix, \r in Mac versions prior to OS X, \r\n in Windows. When a file is opened in text mode, when the file is read, Python replaces the OS specific end-of-line character read from the file with just \n. And vice versa, i.e. when you try to write \n to a file opened in text mode, it is going to write the OS specific EOL character. You can find what your OS default EOL by checking os.linesep. When a file is opened in binary mode, no mapping takes place. What you read is what you get. Remember, text mode is the default mode. So if you are handling non-text files (images, video, etc.), make sure you open the file in binary mode, otherwise you’ll end up messing up the file by introducing (or removing) some bytes. Python also has a universal newline mode. When a file is opened in this mode, Python maps all of the characters \r, \n and \r\n to \n.
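A quick demonstration of the mapping (Python 3 shown): in text mode the CRLF endings are translated to \n on read, while in binary mode you get the bytes exactly as stored:

```python
# Write Windows-style CRLF line endings, then compare text vs binary reads.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"line1\r\nline2\r\n")

with open(path, "r") as f:   # text mode: \r\n is mapped to \n on read
    text = f.read()
with open(path, "rb") as f:  # binary mode: the bytes exactly as stored
    raw = f.read()
os.remove(path)

print(repr(text))  # 'line1\nline2\n'
print(repr(raw))   # b'line1\r\nline2\r\n'
```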
0
84,866
false
0
1
Difference between parsing a text file in r and rb mode
31,152,300
1
6
0
0
1
1
0
0
I want to use "Importação de petróleo" in my program. How can I do that because all encodings give me errors as cannot encode.
0
python,unicode,encoding
2012-03-10T06:05:00.000
0
9,644,338
From help(unicode): unicode(string[, encoding[, errors]]) -> object. Create a new Unicode object from the given encoded string. encoding defaults to the current default string encoding; errors can be 'strict', 'replace' or 'ignore' and defaults to 'strict'. Try using "utf8" as the encoding for unicode().
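In Python 3, where str is already Unicode, the same idea looks like this: UTF-8 (or Latin-1) can represent the accented characters, while ASCII cannot, which is the source of the original "cannot encode" error:

```python
s = "Importação de petróleo"

# Encode to bytes with encodings that cover the accented characters.
utf8_bytes = s.encode("utf-8")
latin1_bytes = s.encode("latin-1")  # one byte per character; covers Portuguese

# ASCII cannot represent 'ç', 'ã' or 'ó', hence the original error.
try:
    s.encode("ascii")
except UnicodeEncodeError as exc:
    print("ascii fails:", exc.reason)

# The round trip recovers the original string exactly.
print(utf8_bytes.decode("utf-8") == s)
```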
0
518
false
0
1
How to encode 'Importação de petróleo' string in python?
9,644,413
1
2
0
5
7
0
1.2
0
I have been trying to get the hang of TDD and unit testing (in python, using nose) and there are a few basic concepts which I'm stuck on. I've read up a lot on the subject but nothing seems to address my issues - probably because they're so basic they're assumed to be understood. The idea of TDD is that unit tests are written before the code they test. Unit tests should test small portions of code (e.g. functions) which, for the purposes of the test, are self-contained and isolated. However, this seems to me to be highly dependent on the implementation. During implementation, or during a later bugfix, it may become necessary to abstract some of the code into a new function. Should I then go through all my tests and mock out that function to keep them isolated? Surely in doing this there is a danger of introducing new bugs into the tests, and the tests will no longer test exactly the same situation? From my limited experience in writing unit tests, it appears that completely isolating a function sometimes results in a test that is longer and more complicated than the code it is testing. So if the test fails all it tells you is that there is either a bug in the code or in the test, but it's not obvious which. Not isolating it may mean a much shorter and easier to read test, but then it's not a unit test... Often, once isolated, unit tests seem to be merely repeating the function. E.g. if there is a simple function which adds two numbers, then the test would probably look something like assert add(a, b) == a + b. Since the implementation is simply return a + b, what's the point in the test? A far more useful test would be to see how the function works within the system, but this goes against unit testing because it is no longer isolated. My conclusion is that unit tests are good in some situations, but not everywhere, and that system tests are generally more useful.
The approach that this implies is to write system tests first, then, if they fail, isolate portions of the system into unit tests to pinpoint the failure. The problem with this, obviously, is that it's not so easy to test corner cases. It also means that the development is not fully test driven, as unit tests are only written as needed. So my basic questions are: Should unit tests be used everywhere, however small and simple the function? How does one deal with changing implementations? I.e. should the implementation of the tests change continuously too, and doesn't this reduce their usefulness? What should be done when the test gets more complicated than the code it's testing? Is it always best to start with unit tests, or is it better to start with system tests, which at the start of development are much easier to write?
0
python,unit-testing,tdd
2012-03-11T10:04:00.000
0
9,654,020
Regarding your conclusion first: both unit tests and system tests (integration tests) have their use, and are in my opinion just as useful. During development I find it easier to start with unit tests, but for testing legacy code I find your approach where you start with the integration tests easier. I don't think there's a right or wrong way of doing this; the goal is to make a safety net that allows you to write solid and well tested code, not the method itself. I find it useful to think about each function as an API in this context. The unit test is testing the API, not the implementation. If the implementation changes, the test should remain the same; this is the safety net that allows you to refactor your code with confidence. Even if refactoring means taking part of the implementation out to a new function, I will say it's ok to keep the test as it is without stubbing or mocking the part that was refactored out. You will probably want a new set of tests for the new function, however. Unit tests are not a holy grail! Test code should be fairly simple in my opinion, and there should be little reason for the test code itself to fail. If the test becomes more complex than the function it tests, it probably means you need to refactor the code differently. An example from my own past: I had some code that took some input and produced some output stored as XML. Parsing the XML to verify that the output was correct caused a lot of complexity in my tests. However, realizing that the XML representation was not the point, I was able to refactor the code so that I could test the output without messing with the details of XML. Some functions are so trivial that a separate test for them adds no value. In your example you're not really testing your code, but that the '+' operator in your language works as expected. This should be tested by the language implementer, not you.
However that function won't need to get very much more complex before adding a test for it is worthwhile. In short, I think your observations are very relevant and point towards a pragmatic approach to testing. Following some rigorous definition too closely will often get in the way, even though the definitions themselves may be necessary for the purpose of having a way to communicate about the ideas they convey. As said, the goal is not the method, but the result; which for testing is to have confidence in your code.
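As a sketch of the add example from the question, written as a unittest test case (nose collects these too): rather than restating the implementation as assert add(a, b) == a + b, the test pins down the API's contract with concrete cases and a property:

```python
import unittest

def add(a, b):
    """The trivial implementation from the question."""
    return a + b

class TestAdd(unittest.TestCase):
    # Instead of mirroring the implementation, test concrete values and
    # a property the API promises.
    def test_concrete_values(self):
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(-1, 1), 0)

    def test_commutative(self):
        self.assertEqual(add(7, 11), add(11, 7))

# Run the suite programmatically; nose or `python -m unittest` would find it too.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd).run(result)
```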
0
1,322
true
0
1
How to approach unittesting and TDD (using python + nose)
9,654,684
1
2
0
1
1
0
0.099668
0
I'm using Pydev with Eclipse. Is it possible to execute a line of python code or a text selection with my IDE? Thanks!
0
python,eclipse,pydev
2012-03-12T09:30:00.000
0
9,664,539
CTRL+ALT+ENTER will execute the selected lines.
0
3,017
false
0
1
Execute Python line of code in Eclipse
22,150,831
1
1
0
1
2
0
1.2
1
I've just created a web chat server with Tornado over Python. The communication mechanism is to use long-polling and I/O events. I want to benchmark this web chat server at large scale, meaning I want to test how many chatters this (Tornado based) chat server can withstand. Because I'm using cookies to identify sessions, presently I can only test with a maximum of 5 (IE, Firefox, Chrome, Safari, Opera) sessions per computer (the cookie path is of no use because everything goes through the same web page), but in my office we only have a limited number of computers. I want to test this Tornado app at the extreme; hopefully it can withstand a few thousand concurrent users as Tornado advertises, but I have no clue how to do this!
0
python,performance,chat,tornado,long-polling
2012-03-12T11:06:00.000
0
9,665,913
I would run the server in a mode where you let the client tell the server which client it is, i.e. change the code so it can be run this way as required. This is less secure, but makes testing easier. In production, don't use this option. This will give you a realistic test from a small number of client machines.
0
510
true
0
1
How to benchmark web chat performance?
9,668,433
1
4
0
2
2
0
0.099668
0
pymat doesn't seem to work with current versions of MATLAB, so I was wondering if there is another equivalent out there (I haven't been able to find one). The gist of what would be desirable is running an m-file from Python (2.6). (Alternatives such as scipy don't fit, since I don't think they can run everything in the m-file.) Thanks in advance!
0
python,matlab
2012-03-12T21:56:00.000
0
9,675,386
You can always start MATLAB as a separate subprocess and collect results via stdout or files (see the subprocess package).
1
5,746
false
0
1
Running m-files from Python
9,675,452
1
1
0
1
0
0
1.2
0
I'm using Django to power a site where I pull in tweets from twitter timelines for use (for about 50 different people). I want to keep a large dictionary of all the tweets in a cache so I don't have to poll twitter every page-refresh. Right now I have it so when it retrieves tweets (30) from twitter, it saves it in the default cache with the key being that user's ID. However, I want it to save these in the long-term so the list of tweets for a user grows over time. My question is, if I save them using the file-system cache instead, will the files themselves (pickled dictionaries) get deleted after the timeout value, or will it just re-read them into the cache from the file? That way, I could still add to the file over time. Thanks!
0
python,django
2012-03-14T18:22:00.000
0
9,707,816
The filesystem cache in Django works like any of the other caches, when the timeout value expires, the cache is "invalidated". In the case of files, that means it will be deleted/overwritten. If you want long-term storage, you need to use a a long-term storage solution (Django's cache framework is specifically not a long-term storage solution). Just save the tweets to your DB or manually to a file. You can still implement caching in addition to this, but you need to handle the long-term storage end.
0
970
true
1
1
Do files with filesystem caching in Django delete after timeout?
9,707,962
2
3
0
6
7
1
1.2
0
I am reading through code for optimization routines (Nelder Mead, SQP...). Languages are C++, Python. I observe that often conversion from double to float is performed, or methods are duplicated with double resp. float arguments. Why is it profitable in optimization routines code, and is it significant? In my own code in C++, should I be careful for types double and float and why? Kind regards.
0
c++,python,algorithm,optimization,scipy
2012-03-14T20:10:00.000
0
9,709,513
Often the choice between double and float is made more on space demands than speed. Modern processors can operate on doubles quite fast. Floats may be faster than doubles when using SIMD instructions (such as SSE), which can operate on multiple values at a time. Also, if the operations outpace the memory pipeline, the smaller memory requirements of float will speed things up overall.
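The space argument is easy to make concrete. A sketch using Python's `struct` and `array` modules (chosen here only as an illustration; the same 4-byte vs 8-byte layout applies in C++):

```python
import array
import struct

# In the standard layout, a single-precision float is 4 bytes, a double is 8.
single = struct.pack("<f", 3.14)
double = struct.pack("<d", 3.14)

# For bulk data the difference doubles the memory footprint:
floats = array.array("f", range(1000))   # ~4 KB of payload
doubles = array.array("d", range(1000))  # ~8 KB of payload
```

For a million optimization variables, that is roughly 4 MB versus 8 MB, which is exactly the kind of working-set difference that decides whether the data stays in cache.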
0
958
true
0
1
Double or float - optimization routines
9,709,955
2
3
0
2
7
1
0.132549
0
I am reading through code for optimization routines (Nelder Mead, SQP...). Languages are C++, Python. I observe that often conversion from double to float is performed, or methods are duplicated with double resp. float arguments. Why is it profitable in optimization routines code, and is it significant? In my own code in C++, should I be careful for types double and float and why? Kind regards.
0
c++,python,algorithm,optimization,scipy
2012-03-14T20:10:00.000
0
9,709,513
Other times that I've come across the need to consider the choice between double and float types in terms of optimisation include: Networking: sending double-precision data across a socket connection will obviously require more time than sending half that amount of data. Mobile and embedded processors may only be able to handle high-speed single-precision calculations efficiently on a coprocessor. As mentioned in another answer, modern desktop processors can handle double-precision processing quite fast. However, you have to ask yourself whether double precision is really required. I work with audio, and the only time I can think of where I would need to process double-precision data is when using high-order filters, where numerical errors can accumulate. Most of the time this can be avoided by paying more careful attention to the algorithm design. There are, of course, other scientific or engineering applications where double-precision data is required in order to correctly represent a huge dynamic range. Even so, the question of how much effort to spend on considering the data type really depends on your target platform. If the platform can crunch through doubles with negligible overhead and you have memory to spare, then there is no need to concern yourself. Profile small sections of test code to find out.
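The accumulating-error point can be seen directly by round-tripping a value through single precision. A small sketch (Python's `struct` is used only as a convenient way to force 32-bit storage; the effect is the same for a C++ `float`):

```python
import struct

value = 0.1  # not exactly representable; a double keeps ~16 significant digits

# Pack into 4 bytes (single precision) and unpack again.
as_single = struct.unpack("<f", struct.pack("<f", value))[0]

# The value has lost precision relative to the double-precision original.
error = abs(as_single - value)
```

A single such error is tiny, but a high-order filter feeds its output back into its input, so these representation errors compound on every sample.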
0
958
false
0
1
Double or float - optimization routines
9,710,279
1
2
0
2
4
0
1.2
1
I have a problem I've been dealing with lately. My application asks its users to upload videos, to be shared with a private community. They are teaching videos, which are not always optimized for web quality to start with. The problem is, many of the videos are huge, way over the 50 megs I've seen in another question. In one case, a video was over a gig, and the only solution I had was to take the client's video from box.net, upload it to the video server via FTP, then associate it with the client's account by updating the database manually. Obviously, we don't want to deal with videos this way, we need it to all be handled automatically. I've considered using either the box.net or dropbox API to facilitate large uploads, but would rather not go that way if I don't have to. We're using PHP for the main logic of the site, though I'm comfortable with many other languages, especially Python, but including Java, C++, or Perl. If I have to dedicate a whole server or server instance to handling the uploads, I will. I'd rather do the client-side using native browser JavaScript, instead of Flash or other proprietary tech. What is the final answer to uploading huge files though the web, by handling the server response in PHP or any other language?
0
java,php,javascript,c++,python
2012-03-15T01:44:00.000
0
9,712,898
It is possible to raise the limits in Apache and PHP to handle files of this size. The basic HTTP upload mechanism does not offer progress information, however, so I would usually consider this acceptable only for LAN-type connections. The normal alternative is to locate a Flash or JavaScript uploader widget. These have the bonus that they can display progress information and will integrate well with a PHP-based website.
0
200
true
0
1
Uploading huge files with PHP or any other language?
9,712,935
1
2
0
1
0
0
0.099668
0
netbean IDE support when downloading PHP bundle version. I also found a download of netbean for python. But How can I let one netbean IDE support both PHP and python?
0
php,python,netbeans,ide
2012-03-15T12:43:00.000
0
9,719,937
First download the NetBeans bundle with PHP support, then install Python support from the Plugin Manager.
0
400
false
0
1
How to make netbean IDE support both python and php
9,719,988
1
1
0
1
1
0
1.2
0
I'm writing a unittesting framework for servers that uses popen to basically execute "python myserver.py" with shell=False, run some tests, and then proceed to take the server down by killpg. This myserver.py can and will use multiprocessing to spawn subprocesses of its own. The problem is, from my tests, it seems that the pgrp pid of the server processes shares the same group pid as the actual main thread running the unittests, therefore doing an os.killpg on the group pid will not only take down the server but also the process calling the popen (not what I want to do). Why does it do this? And how can I make them be on separate group pids that I can kill independently?
0
python,unix
2012-03-15T15:21:00.000
1
9,722,778
You're asking about something pretty messy here. I suspect that none of this is what you want to do at all, and that you really want to accomplish this some simpler way. However, presuming you really want to mess with process groups... Generally, a new process group is created only by the setpgrp(2) system call. Otherwise, processes created by fork(2) are always members of the current process group. That said, upon creating a new process group, the processes in that group aren't even controlled by any tty and doing what you appear to want to do properly requires understanding the whole process group model. A good reference for how all this works is Stevens, "Advanced Programming in the Unix Environment", which goes into it in gory detail. If you really want to go down this route, you're going to have to implement popen or the equivalent yourself with all the appropriate system calls made.
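That said, if all that is really wanted is a child whose process group can be killed without taking down the test runner, `subprocess` already exposes the relevant system call: `start_new_session=True` makes the child call `setsid()` after the fork, putting it in its own session and process group. A POSIX-only sketch (the sleeping one-liner stands in for `python myserver.py`):

```python
import os
import signal
import subprocess
import sys

# Launch the "server" in its own session (and thus its own process group),
# so os.killpg on the child's group cannot touch the test runner.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    start_new_session=True,  # child calls setsid() after fork
)

parent_pgid = os.getpgrp()
child_pgid = os.getpgid(child.pid)

# Tear down only the child's group; the parent's group is untouched.
os.killpg(child_pgid, signal.SIGTERM)
child.wait()
```

Any further subprocesses the server spawns via `multiprocessing` inherit the new group, so one `killpg` takes down the whole server tree and nothing else.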
0
241
true
0
1
Popen-ing a python call that invokes a script using multiprocessing (pgrp issue)?
10,034,142
1
2
0
0
4
1
0
0
I have a completely non-interactive python program that takes some command-line options and input files and produces output files. It can be fairly easily tested by choosing simple cases and writing the input and expected output files by hand, then running the program on the input files and comparing output files to the expected ones. 1) What's the name for this type of testing? 2) Is there a python package to do this type of testing? It's not difficult to set up by hand in the most basic form, and I did that already. But then I ran into cases like output files containing the date and other information that can legitimately change between the runs - I considered writing something that would let me specify which sections of the reference files should be allowed to be different and still have the test pass, and realized I might be getting into "reinventing the wheel" territory. (I rewrote a good part of unittest functionality before I caught myself last time this happened...)
0
python,testing
2012-03-15T18:50:00.000
0
9,726,214
Functional testing. Or regression testing, if that is its purpose. Or code coverage, if you structure your data to cover all code paths.
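For the "sections that may legitimately differ" problem, one lightweight alternative to reinventing a framework is to normalize both files before comparing: mask out volatile patterns (dates, timestamps, run IDs) with a placeholder and assert equality on the rest. A hedged sketch, with the date pattern as an illustrative assumption:

```python
import re

def normalize(text, patterns=(r"\d{4}-\d{2}-\d{2}",)):
    """Mask sections (e.g. dates) allowed to differ between runs."""
    for pattern in patterns:
        text = re.sub(pattern, "<MASKED>", text)
    return text

# Reference output written by hand vs. output from the run under test:
expected = "report generated 2012-03-15\ntotal: 42\n"
actual = "report generated 2024-01-01\ntotal: 42\n"

match = normalize(expected) == normalize(actual)
```

Each test case then only needs to supply its own list of volatile patterns, which keeps the comparison logic in one place.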
0
1,131
false
0
1
Testing full program by comparing output file to reference file: what's it called, and is there a package for it?
9,726,303
2
2
0
1
3
0
0.099668
0
I have a program that regularly appends small pieces (say 8 bytes) of sensitive data to a number of logfiles. I would like this data to be encrypted. I want the program to start automatically at boot time, so I don't want to type a password at program start. I also don't want it to store a password somewhere, since that would almost defeat the purpose of encryption. For these reasons, it seems to me that public key encryption would be a good choice. The program knows my public key, but my private key is password protected somewhere else. So far, so good. But when I try to use PyCrypto to RSA (or ElGamal)-encrypt a small 5-byte string, the output explodes to 128 bytes. My logfiles are large enough as it is... On the other hand, when I try a symmetric crypto, like Blowfish, the output string is just as large as the input string. So, my question is: Is there a reasonably secure public key encryption algorithm where I can encrypt data 8 bytes at a time and don't have it blow up? (I guess a factor of 2 would be OK). I think what I want is a public key stream cipher. If there is not such a thing, I think I will just give up and use a symmetric crypto and give the password manually on startup.
0
python,encryption,public-key-encryption
2012-03-16T13:02:00.000
0
9,737,757
What you need is to do something like SSL does: exchange a key using public key encryption, then use symmetric encryption. Asymmetric encryption is very inefficient in terms of performance and should not be used for bulk data.
0
2,836
false
0
1
Is there a public key stream cipher encryption?
9,738,049
2
2
0
5
3
0
1.2
0
I have a program that regularly appends small pieces (say 8 bytes) of sensitive data to a number of logfiles. I would like this data to be encrypted. I want the program to start automatically at boot time, so I don't want to type a password at program start. I also don't want it to store a password somewhere, since that would almost defeat the purpose of encryption. For these reasons, it seems to me that public key encryption would be a good choice. The program knows my public key, but my private key is password protected somewhere else. So far, so good. But when I try to use PyCrypto to RSA (or ElGamal)-encrypt a small 5-byte string, the output explodes to 128 bytes. My logfiles are large enough as it is... On the other hand, when I try a symmetric crypto, like Blowfish, the output string is just as large as the input string. So, my question is: Is there a reasonably secure public key encryption algorithm where I can encrypt data 8 bytes at a time and don't have it blow up? (I guess a factor of 2 would be OK). I think what I want is a public key stream cipher. If there is not such a thing, I think I will just give up and use a symmetric crypto and give the password manually on startup.
0
python,encryption,public-key-encryption
2012-03-16T13:02:00.000
0
9,737,757
Typically this is solved by having the program create some real random bytes, which are used as a secret key for a symmetric encryption algorithm. In your program you have to do something like: (1) generate some real random data (maybe use /dev/random) as a secret key; (2) encrypt the secret key with the public key algorithm; (3) use the secret key with some other symmetric algorithm. To decrypt: use the private key to decrypt the secret key, then use the secret key and the symmetric algorithm to decrypt the data. You might want to use enough random data (e.g. >= 256 bits) for a 'good' key.
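The steps above can be sketched in pure stdlib Python. Since the standard library ships no real cipher, the symmetric part below is a toy XOR keystream built from an HMAC-SHA256 counter; it only illustrates the structure (random session key, size-preserving stream encryption) and must not be used for real security; in practice the session key would be wrapped with the RSA public key (e.g. via PyCrypto) and the data encrypted with a vetted cipher such as AES.

```python
import hashlib
import hmac
import os

def keystream_xor(key, data):
    """Toy stream cipher: XOR data with an HMAC-SHA256 counter keystream.
    NOT secure -- structure demo only; use AES or similar in practice."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        stream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# 1. Generate a random session key (this is what the RSA public key would wrap).
session_key = os.urandom(32)

# 2. Encrypt a small log record symmetrically -- same size in, same size out,
#    so an 8-byte record stays 8 bytes instead of blowing up to 128.
record = b"8 bytes!"
ciphertext = keystream_xor(session_key, record)

# 3. Decryption is the same XOR with the same keystream.
plaintext = keystream_xor(session_key, ciphertext)
```

Only the (small, fixed-size) wrapped session key pays the asymmetric size overhead once per file, rather than once per 8-byte record.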
0
2,836
true
0
1
Is there a public key stream cipher encryption?
9,738,026
1
2
1
2
1
0
0.197375
0
I have some big mysql databases with data for calculations and some parts where I need to get data from external websites. I used python to do the whole thing until now, but what shall I say: its not a speedster. Now I'm thinking about mixing Python with C++ using Boost::Python and Python C API. The question I've got now is: what is the better way to get some speed. Shall I extend python with some c++ code or shall I embedd python code into a c++ programm? I will get fore sure some speed increment using c++ code for the calculating parts and I think that calling the Python interpreter inside of an C-application will not be better, because the python interpreter will run the whole time. And I must wrap things python-libraries like mysqldb or urllib3 to have a nice way to work inside c++. So what whould you suggest is the better way to go: extending or embedding? ( I love the python language, but I'm also familiar with c++ and respect it for speed ) Update: So I switched some parts from python to c++ and used multi threading (real one) in my c modules and my programm now needs instead of 7 hours 30 minutes :))))
0
c++,python,embedding,extending
2012-03-17T01:56:00.000
0
9,746,586
In my opinion, in your case it makes no sense to embed Python in C++, while the reverse could be beneficial. In most programs, the performance problems are very localized, which means that you should rewrite the problematic code in C++ only where it makes sense, leaving Python for the rest. This gives you the best of both worlds: the speed of C++ where you need it, and the ease of use and flexibility of Python everywhere else. What is also great is that you can do this process step by step, replacing the slow code paths one by one, always leaving the whole application in a usable (and testable!) state. The reverse wouldn't make sense: you'd have to rewrite almost all the code, sacrificing the flexibility of the Python structure. Still, as always when talking about performance, before acting, measure: if your bottleneck is not CPU/memory bound, switching to C++ isn't likely to produce much advantage.
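Measuring first is cheap with the stdlib profiler. A hedged sketch of locating the hotspot before deciding what to port to C++ (the `hot_loop` function is just a stand-in for your calculation code):

```python
import cProfile
import io
import pstats

def hot_loop(n):
    """Stand-in for the CPU-bound calculation you might port to C++."""
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = hot_loop(100_000)
profiler.disable()

# Summarize the top entries by cumulative time; the function that dominates
# here is the one worth rewriting as a C++ extension.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

If the report shows the time going to database waits or network calls instead, a C++ rewrite of the calculation would buy nothing.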
0
2,903
false
0
1
Speed - embedding python in c++ or extending python with c++
9,746,618
1
1
0
1
0
1
1.2
0
I'm using OS X with Sublime text build 2181, and I am having trouble using the Yaml module in a Sublime Text plugin. I have installed PyYaml by doing python setup.py install. When I go to the python console, and try import yaml I have no problems. But when I try to save my Sublime Text plugin with the import yaml statement, I keep getting ImportError: No module name yaml I'm using the pre-installed version of Python, version 2.7. Last line of the install output: Writing /Users/me/Developer/Cellar/python/2.7.2/lib/python2.7/site-packages/PyYAML-3.10-py2.7.egg-info Any help would be greatly appreciated.
0
python,yaml,sublimetext
2012-03-17T18:54:00.000
0
9,752,808
/Users/me/Developer/Cellar/python/2.7.2/lib/python2.7 doesn't seem like a pre-installed version of Python on a Mac. Can you try to identify the system-wide Python installation and use the explicit path to the python executable to execute setup.py install? Then try the Sublime Text plug-in. The default Mac OS X Python should be located at /Library/Frameworks/Python.framework/Versions/...
0
5,021
true
0
1
Python SublimeText plugin - No module named Yaml
9,752,868
2
2
0
0
1
0
0
1
I have used twitter search API to collect lots of tweets given a search keyword. Now that I have this collection of tweets, I'd like to find out which tweet has been retweeted most. Since search API does not have retweet_count, I have to find some other way to check how many times each tweet has been retweeted. The only clue I have is that I have ID number for each tweet. Is there any way I could use these ID numbers to figure out how many times each tweet has been retweeted?? I am using twitter module for python.
0
python,api,twitter
2012-03-18T13:16:00.000
0
9,758,636
I am currently studying the Twitter structure and have found out that there is a field called tweet_count associated with each tweet, giving the number of times that particular original tweet has been retweeted.
0
1,571
false
0
1
Getting Retweet Count of a Given Tweet ID Number
29,076,949
2
2
0
0
1
0
0
1
I have used twitter search API to collect lots of tweets given a search keyword. Now that I have this collection of tweets, I'd like to find out which tweet has been retweeted most. Since search API does not have retweet_count, I have to find some other way to check how many times each tweet has been retweeted. The only clue I have is that I have ID number for each tweet. Is there any way I could use these ID numbers to figure out how many times each tweet has been retweeted?? I am using twitter module for python.
0
python,api,twitter
2012-03-18T13:16:00.000
0
9,758,636
I don't think so, since one can retweet either using the retweet command or using a commented retweet. At least the second alternative generates a new tweet ID.
0
1,571
false
0
1
Getting Retweet Count of a Given Tweet ID Number
9,758,703
1
2
0
5
2
1
1.2
0
I want to basically copy whats from the clipboard and paste it in a file in utf-8 encoding, but what ever I try, the file has the '?' symbols in it and is Anscii encoding... But what I found out is, if there is a file that's already in utf-8 encoding, then whatever I paste in it manually (deleting whats there already), wont have the '?' in it. So if there is a way to clear content in a utf-8 file, then copy whats from the clipboard and write it to that file then that would be great. If I create the file, it's always ends up being Ancii... Now I already know how to copy from clip board and write it to a file, its just how to clear a file which is confusing...
0
python
2012-03-19T00:41:00.000
0
9,763,675
Opening the file in write/read mode (w+) will truncate the file without rewriting it if it already exists.
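A short sketch of that behavior: opening an existing file in `"w"` (or `"w+"`) truncates it to zero bytes without deleting or recreating it, so the replacement text can simply be written in its place with an explicit UTF-8 encoding.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "clip.txt")

# Write some initial UTF-8 content.
with open(path, "w", encoding="utf-8") as f:
    f.write("old clipboard text \u00e9\u00e8")

# Reopening in "w" truncates the existing file, then the new
# clipboard contents are written into the now-empty file.
with open(path, "w", encoding="utf-8") as f:
    f.write("new text")

with open(path, encoding="utf-8") as f:
    contents = f.read()
```

Passing `encoding="utf-8"` on every open is what keeps the file UTF-8 regardless of the platform default, which also addresses the '?' symbols from the original question.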
0
12,883
true
0
1
How to erase all text from a file using python, but not delete/recreate the file?
9,763,705
3
3
0
0
4
0
0
0
In OpenERP 6.0.1, I've created a server action to send a confirmation email after an invoice is confirmed, and linked it to appropriately to the invoice workflow. now normally when an invoice is confirmed, an email is automatically sent. is there a way to set a date for when the email should be sent instead of being sent immediately? like "send email after one week of confirmation" ?
0
python,openerp
2012-03-17T17:03:00.000
0
9,771,171
I don't know for sure, but I think you can also use the scheduled actions in Administration -> Scheduler -> Scheduled Actions; otherwise ir.cron is the best option for scheduling outgoing emails.
0
1,470
false
1
1
openerp schedule server action
10,222,065
3
3
0
9
4
0
1.2
0
In OpenERP 6.0.1, I've created a server action to send a confirmation email after an invoice is confirmed, and linked it to appropriately to the invoice workflow. now normally when an invoice is confirmed, an email is automatically sent. is there a way to set a date for when the email should be sent instead of being sent immediately? like "send email after one week of confirmation" ?
0
python,openerp
2012-03-17T17:03:00.000
0
9,771,171
There is an object, ir.cron, which runs at a specified time interval. There you can specify the time when you want to send the mail. This object will call the function you give in its Method attribute. In this function you have to search for those invoices which are in the created state, then check the date when each was created, and if it is >= 7 days ago, send the mail. Alternatively, you can create an ir.cron record from a specific workflow action of the invoice, with a Next Execution Date set 7 or 8 days later.
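The "confirmed at least 7 days ago" filter inside such a cron method can be sketched in plain Python; the invoice dicts below are hypothetical stand-ins for whatever records the ORM search returns.

```python
from datetime import datetime, timedelta

def due_for_email(invoices, now, days=7):
    """Return invoices whose confirmation date is at least `days` ago."""
    cutoff = now - timedelta(days=days)
    return [inv for inv in invoices if inv["confirmed"] <= cutoff]

now = datetime(2012, 3, 25)
invoices = [
    {"id": 1, "confirmed": datetime(2012, 3, 17)},  # 8 days old -> send
    {"id": 2, "confirmed": datetime(2012, 3, 22)},  # 3 days old -> wait
]
to_send = due_for_email(invoices, now)
```

A real implementation would also record that the mail was sent (a flag on the invoice), so the next cron run does not email the same invoice twice.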
0
1,470
true
1
1
openerp schedule server action
9,784,730
3
3
0
0
4
0
0
0
In OpenERP 6.0.1, I've created a server action to send a confirmation email after an invoice is confirmed, and linked it to appropriately to the invoice workflow. now normally when an invoice is confirmed, an email is automatically sent. is there a way to set a date for when the email should be sent instead of being sent immediately? like "send email after one week of confirmation" ?
0
python,openerp
2012-03-17T17:03:00.000
0
9,771,171
With OpenERP 6.1, the new email engine has an email queue, so you just need to queue your email on that queue. There is already one scheduled action that processes this email queue at a defined interval, and you can change the trigger time of that action. See the email engine API for how to queue your emails in the email queue. Regards
0
1,470
false
1
1
openerp schedule server action
10,615,931
1
1
0
0
10
0
0
0
I'm working on a large python project using vim with tagexplorer, pythoncomplete, and ctags. Tag-based code-browsing and code-completion features don't work the way they should unfortunately because ctags doesn't tie instances to types. Hypothetical scenarios: Auto Complete: vim won't auto-complete method on() in myCar.ignition().on() because ctags doesn't know that ignition() returns TypeIgnition. Code Browsing: vim won't browse into TypeCar when I click on myCar but instead presents me with multiple definition matches, incorrect matches, or no matches because ctags doesn't backtrack and tie instances to types. The problem seems to stem from python being a dynamically typed language. Neither scenario would present a challenge otherwise. Is there an effective alternative to tags-based code-browsing and code-completion and an IDE or vim plugin that implements it well? Note: Please vote "re-open". Solutions to this problem are valuable to the community. The question was originally formulated very vaguely, that's no longer the case.
0
python,ruby,vim,code-completion
2012-03-19T17:31:00.000
0
9,774,966
Commercial IDEs for Python like Wing (www.wingware.com) and PyCharm (www.jetbrains.com/pycharm) are better at solving the majority of code-completion issues. Of course, they are not free. I myself was not able to get satisfactory results when using Eclipse with the PyDev plugin.
0
273
false
0
1
How to address python code-browsing and code-completion issues in vim?
9,775,180
1
1
0
0
1
0
0
0
I am trying to connect an android device to specific AP without keycodes. I am looking for adb shell commands or monkeyrunner script that can perform the same. Hope you guys can help me with this. PS. After researching for days only way I found is using wpa_cli in adb shell. But couldnt exactly connect because I was not able to find the exact codes.
0
android,python,android-intent,android-emulator,monkeyrunner
2012-03-19T19:24:00.000
1
9,776,529
wpa_cli should work. Open wpa_cli and run: add_network; set_network ssid "APSSID"; set_network key_mgmt NONE (if the AP is configured as open/none); save_config; enable_network. This set of commands should work if Wi-Fi is ON in the UI. With monkeyrunner, navigating using keycodes is the only option, or else you need to make an APK for your specific operations.
0
799
false
0
1
How to connect android device to specific AP with adb shell or monkeyrunner
10,211,905
2
3
0
1
18
1
0.066568
0
I have to write a daemon program that constantly runs in the background and performs some simple tasks. The logic is not complicated at all, however it has to run for extended periods of time and be stable. I think C++ would be a good choice for writing this kind of application, however I'm also considering Python since it's easier to write and test something quickly in it. The problem that I have with Python is that I'm not sure how its runtime environment is going to behave over extended periods of time. Can it eat up more and more memory because of some GC quirks? Can it crash unexpectedly? I've never written daemons in Python before, so if anyone here did, please share your experience. Thanks!
0
python,daemon
2012-03-19T23:00:00.000
1
9,779,200
I've written many things in C/C++ and Perl that are initiated when a Linux box's OS boots, launching them using rc.d. I've also written a couple of Java and Python scripts that are started the same way, but I needed a little shell script (.sh file) to launch them, and I used rc.5. Let me tell you that your concerns about their runtime environments are completely valid; you will have to be careful about which runlevel you use (only from rc.2 to rc.5, because rc.1 and rc.6 are for the system). If the runlevel is too low, the Python runtime might not be up at the time you are launching your program, and it could fail. E.g.: in a LAMP server, MySQL and Apache are started in rc.3, where the network is already available. I think your best shot is to write your script in Python and launch it using a .sh file from rc.5. Good luck!
0
2,480
false
0
1
Is writing a daemon in Python a good idea?
9,779,553
2
3
0
14
18
1
1.2
0
I have to write a daemon program that constantly runs in the background and performs some simple tasks. The logic is not complicated at all, however it has to run for extended periods of time and be stable. I think C++ would be a good choice for writing this kind of application, however I'm also considering Python since it's easier to write and test something quickly in it. The problem that I have with Python is that I'm not sure how its runtime environment is going to behave over extended periods of time. Can it eat up more and more memory because of some GC quirks? Can it crash unexpectedly? I've never written daemons in Python before, so if anyone here did, please share your experience. Thanks!
0
python,daemon
2012-03-19T23:00:00.000
1
9,779,200
I've written a number of daemons in Python for my last company. The short answer is, it works just fine. As long as the code itself doesn't have some huge memory bomb, I've never seen any gradual degradation or memory hogging. Be mindful of anything in the global or class scopes, because they'll live on, so use del more liberally than you might normally. Otherwise, like I said, no issues I can personally report. And in case you're wondering, they ran for months and months (let's say 6 months usually) between routine reboots with zero problems.
0
2,480
true
0
1
Is writing a daemon in Python a good idea?
9,779,293
1
3
0
0
8
1
0
0
Should Python library modules start with #!/usr/bin/env python? Looking at first lines of *.py in /usr/share/pyshared (where Python libs are stored in Debian) reveals that there are both files that start with the hashbang line and those that do not. Is there a reason to include or omit this line?
0
python,coding-style,shebang
2012-03-20T08:27:00.000
1
9,783,482
If you want your script to be directly executable, you have to include this line.
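A POSIX-only sketch of why: a script whose first line names an interpreter can be executed directly once it is marked executable. Here `sys.executable` stands in for `/usr/bin/env python` so the demo works regardless of where Python is installed.

```python
import os
import stat
import subprocess
import sys
import tempfile

# Write a script whose shebang line points at an interpreter.
script = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(script, "w") as f:
    f.write(f"#!{sys.executable}\nprint('hello from a shebang script')\n")

# Mark it executable, then run it WITHOUT naming the interpreter:
# the kernel reads the shebang line and launches Python for us.
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)
output = subprocess.run([script], capture_output=True, text=True).stdout.strip()
```

Library modules that are only ever imported never go through this path, which is why the line is harmless but optional for them.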
0
1,029
false
0
1
Should Python library modules start with #!/usr/bin/env python?
9,783,492
1
1
0
0
3
0
0
0
I'm connecting a several identical USB-MIDI devices and talking to them using Python and pyportmidi. I have noticed that when I run my code on Linux, occasionally the MIDI ports of the devices are enumerated in a different order, so I send messages to the wrong devices. As the devices do not have unique identifiers, I am told that I should identify them by which USB port they are connected to. Is there any way to retrieve this information? My app will run on Linux, but Mac OS support is useful for development. It's annoying because they usually enumerate in a sensible order - the first device in the hub is the first device in portmidi, but sometimes they don't - usually the first 2 devices are switched. I have to physically move the devices without unplugging to fix them.
0
python,usb,midi,pyportmidi
2012-03-20T16:19:00.000
1
9,790,715
lsusb should do the trick. All devices and their respective hubs are listed there.
0
455
false
0
1
Is it possible to find out which USB port a MIDI device is connected to in portmidi / pyportmidi
9,790,821
1
1
0
1
0
0
1.2
0
I've been able to deploy a test application by using pyramid with pserve and running pceleryd (I just send an email without blocking while it is sent). But there's one point that I don't understand: I want to run my application with mod_wsgi, and I don't understand if I can can do it without having to run pceleryd from a shell, but if I can do something in the virtualhost configuration. Is it possible? How?
0
python,celery,pyramid,celeryd
2012-03-21T16:17:00.000
0
9,808,628
There are technically ways you could use Apache/mod_wsgi to manage a process distinct from that handling web requests, but the pain point is that Celery will want to fork off further worker processes. Forking further processes from a process managed by Apache can cause problems at times and so is not recommended. You are thus better of starting up Celery process separately. One option is to use supervisord to start it up and manage it.
0
530
true
1
1
using celery with pyramid and mod_wsgi
9,813,506
3
3
0
0
1
0
0
0
First I want to clearify that I mean by reverse engineering something like "decompiling" and getting back the original source code or something similiar. Yesterday I read a question about someone who wanted to protect his python code from "getting stolen" in other words: he didn't like that someone can read his python code. The interesting thing I read was that someone said that the only reliable way to "protect" his code from getting reverse engineered is by using a Webservice. So I could actually only write some GUIs in Python, PHP, whatever and do the "very secret code" I want to protect via a Webservice. (Basically sending variables to the host and getting results back). Is it really impossible to reverse engineer a Webservice (via code and without hacking into the Server)? Will this be the future of modern commercial applications? The cloud-hype is already here. So I wouldn't wonder. I'm very sorry if this topic was already discussed, but I couldn't find any resources about this. EDIT: The whole idea reminds me of AJAX. The code is executed on the server and the content is sent to the client and "prettified". The client himself doesnt see what php-code or other technology is behind.
0
python,web-services,open-source,reverse
2012-03-21T19:38:00.000
0
9,811,655
Yes. All they could do is treat your web service as a black box: query the WSDL for all the parameters it accepts and the data that it returns. They could then submit different variables and see what the different results are. The "code" could not be seen or stolen (with proper security), but the inputs and outputs could be duplicated. If you want to secure your "very secret code", a web service is a great way to protect the actual code. -sb
0
276
false
1
1
Reverse Engineer a program working as a webservice, the future?
9,811,793
3
3
0
1
1
0
1.2
0
First I want to clearify that I mean by reverse engineering something like "decompiling" and getting back the original source code or something similiar. Yesterday I read a question about someone who wanted to protect his python code from "getting stolen" in other words: he didn't like that someone can read his python code. The interesting thing I read was that someone said that the only reliable way to "protect" his code from getting reverse engineered is by using a Webservice. So I could actually only write some GUIs in Python, PHP, whatever and do the "very secret code" I want to protect via a Webservice. (Basically sending variables to the host and getting results back). Is it really impossible to reverse engineer a Webservice (via code and without hacking into the Server)? Will this be the future of modern commercial applications? The cloud-hype is already here. So I wouldn't wonder. I'm very sorry if this topic was already discussed, but I couldn't find any resources about this. EDIT: The whole idea reminds me of AJAX. The code is executed on the server and the content is sent to the client and "prettified". The client himself doesnt see what php-code or other technology is behind.
0
python,web-services,open-source,reverse
2012-03-21T19:38:00.000
0
9,811,655
Wow, this is awesome! I've never thought of it this way, but you could create a program that crawls an API and returns as output a Django/Tastypie application that mimics everything the API does. By calling the service and reading what it says, you can parse it and begin to see the relationships between objects inside the API. Having this, you can create the models, and Tastypie takes it from that point. The awesome thing about this is that normal people (or at least people who aren't backend developers) could create an API just by describing what they want as output. I've seen many Android/iPhone developers create a bunch of static XML or JSON so they can call their service and start the frontend development. Well, what if that was enough? Take some XML/JSON files as input, get a backend as output.
0
276
true
1
1
Reverse Engineer a program working as a webservice, the future?
9,812,028
3
3
0
0
1
0
0
0
First I want to clarify that by reverse engineering I mean something like "decompiling" and getting back the original source code, or something similar. Yesterday I read a question from someone who wanted to protect his Python code from "getting stolen"; in other words, he didn't like that someone can read his Python code. The interesting thing I read was that someone said that the only reliable way to "protect" his code from being reverse engineered is by using a web service. So I could actually write only the GUIs in Python, PHP, whatever, and do the "very secret code" I want to protect via a web service (basically sending variables to the host and getting results back). Is it really impossible to reverse engineer a web service (via code and without hacking into the server)? Will this be the future of modern commercial applications? The cloud hype is already here, so I wouldn't wonder. I'm very sorry if this topic was already discussed, but I couldn't find any resources about this. EDIT: The whole idea reminds me of AJAX. The code is executed on the server and the content is sent to the client and "prettified". The client himself doesn't see what PHP code or other technology is behind it.
0
python,web-services,open-source,reverse
2012-03-21T19:38:00.000
0
9,811,655
It depends on what you mean by reverse engineering: by repeatedly sending input and analyzing the output the behaviour of your code can still be seen. I wouldn't have your code but I can still see what the system does. This means I could build a similar system that does the same thing, given the same input. It would be hard to catch exceptional cases (such as output that is different on one day of the year only) but the common behaviour can certainly be copied. It is similar to analyzing the protocol of an instant messaging client: you may not have the original code but you can still build a copy.
0
276
false
1
1
Reverse Engineer a program working as a webservice, the future?
9,812,274
1
2
0
0
2
0
0
0
I'm running Python scripts as a CGI under Apache 2.2. These scripts rely on environment variables set in my .bashrc to run properly. The .bashrc is never loaded, and my scripts fail. I don't want to duplicate my .bashrc by using a bunch of SetEnv directives; the configuration files would easily get out of sync and cause hard-to-find bugs. I'm running Apache as my user, not as root. I'm starting/stopping it manually, so the /etc/init.d script shouldn't matter at all (I think). Given these constraints, what can I do to have my .bashrc loaded when my CGI is called? Edit: I use /usr/sbin/apache2ctl to do the restarting.
0
python,bash,cgi,apache
2012-03-22T02:33:00.000
1
9,815,655
What? Surely you don't mean that your scripts rely on configurations in some account's personal home directory. Apache config files can export environment variables to CGI scripts, etc. Maybe your program is too dependent on too many environment variables. How about supporting a configuration file: /etc/mypythonprogram.rc. There can be a single environment variable telling the program to use an alternative config file, for flexibility.
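The config-file approach suggested above might look like this in Python. The path /etc/myprogram.rc and the MYPROGRAM_CONFIG override variable are hypothetical names for illustration, not anything Apache or the asker's setup defines:

```python
import configparser
import os
import tempfile

def load_settings(default_path="/etc/myprogram.rc"):
    """Read settings from a config file; the (hypothetical) MYPROGRAM_CONFIG
    environment variable lets a caller point at an alternative file."""
    path = os.environ.get("MYPROGRAM_CONFIG", default_path)
    parser = configparser.ConfigParser()
    parser.read(path)  # silently yields an empty config if the file is missing
    return {section: dict(parser[section]) for section in parser.sections()}

# Demo: write a throwaway config and point the override variable at it.
with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as f:
    f.write("[smtp]\nhost = mail.example.com\n")
    demo_path = f.name
os.environ["MYPROGRAM_CONFIG"] = demo_path
settings = load_settings()
```

Apache would then only need to export that single override variable (e.g. via a SetEnv line) instead of duplicating everything from .bashrc.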
0
2,452
false
0
1
Apache httpd doesn't load .bashrc
9,815,735
2
4
0
0
6
0
0
0
One difference is that "./script.py" only works if script.py is executable (as in file permissions), but "python script.py" works regardless. However, I strongly suspect there are more differences, and I want to know what they are. I have a Django website, and "python manage.py syncdb" works just fine, but "./manage.py syncdb" creates a broken database for some reason that remains a mystery to me. Maybe it has to do with the fact that syncdb prompts for a superuser name and password from the command line, and maybe using "./manage.py syncdb" changes the way it interacts with the command line, thus mangling the password. Maybe? I am just baffled by this bug. "python manage.py syncdb" totally fixes it, so this is just curiosity. Thanks. Edit: Right, right, I forgot about the necessity of the shebang line #!/usr/bin/python. But I just checked, "python manage.py syncdb" and "./manage.py syncdb" are using the same Python interpreter (2.7.2, the only one installed, on Linux Mint 12). Yet the former works and the latter does not. Could the environment variables seen by the Python code be different? My code does require $LD_LOADER_PATH and $PYTHON_PATH to be set special for each shell.
0
python,django,posix,sh
2012-03-22T16:16:00.000
1
9,826,313
./script.py runs the interpreter defined in the #! at the beginning of the file. For example, the first line might be #! /usr/bin/env python or #! /usr/bin/python or something else like that. If you look at what interpreter is invoked, you might be able to fix that problem.
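A quick way to check this from inside the script itself: sys.executable reports the interpreter actually running the file, so printing it under both invocations would reveal any mismatch. The (2, 7, 2) in the comment is just the asker's version, used for illustration:

```python
import os
import sys

# Which interpreter is executing this file, and with what search paths?
interpreter = sys.executable          # absolute path of the running Python
version = sys.version_info[:3]        # e.g. (2, 7, 2) on the asker's system
pythonpath = os.environ.get("PYTHONPATH", "")  # empty string if unset

print(interpreter, version, pythonpath)
```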
0
2,513
false
0
1
When invoking a Python script, what is the difference between "./script.py" and "python script.py"
9,826,394
2
4
0
1
6
0
0.049958
0
One difference is that "./script.py" only works if script.py is executable (as in file permissions), but "python script.py" works regardless. However, I strongly suspect there are more differences, and I want to know what they are. I have a Django website, and "python manage.py syncdb" works just fine, but "./manage.py syncdb" creates a broken database for some reason that remains a mystery to me. Maybe it has to do with the fact that syncdb prompts for a superuser name and password from the command line, and maybe using "./manage.py syncdb" changes the way it interacts with the command line, thus mangling the password. Maybe? I am just baffled by this bug. "python manage.py syncdb" totally fixes it, so this is just curiosity. Thanks. Edit: Right, right, I forgot about the necessity of the shebang line #!/usr/bin/python. But I just checked, "python manage.py syncdb" and "./manage.py syncdb" are using the same Python interpreter (2.7.2, the only one installed, on Linux Mint 12). Yet the former works and the latter does not. Could the environment variables seen by the Python code be different? My code does require $LD_LOADER_PATH and $PYTHON_PATH to be set special for each shell.
0
python,django,posix,sh
2012-03-22T16:16:00.000
1
9,826,313
In Linux, using the terminal, you can execute any file (if the user has execute permission) by typing ./fileName. When the OS sees a valid header like #! /usr/bin/python (or, for Perl, #! /usr/bin/perl), it will call the appropriate interpreter to execute the program. You can use the command python script.py directly because python is an executable program located at /usr/bin (or somewhere else) that is listed in the $PATH environment variable, which names the directories searched for executables.
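The $PATH lookup described above can be reproduced with shutil.which, which performs the same directory-by-directory search a shell does. This sketch searches the directory holding the current interpreter, so the lookup is guaranteed to succeed:

```python
import os
import shutil
import sys

# shutil.which performs the same search the shell does along a PATH;
# here we search the directory containing the running interpreter,
# which by definition contains it.
name = os.path.basename(sys.executable)
found = shutil.which(name, path=os.path.dirname(sys.executable))
print("resolved:", found)
```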
0
2,513
false
0
1
When invoking a Python script, what is the difference between "./script.py" and "python script.py"
9,826,923
1
1
0
1
0
0
1.2
0
We have multiple Python projects that have dependencies on each other. Hierarchically, these are organized like this: P1 P2 ... Pn Each of these is an PyDev project within Eclipse and they co-exist just fine within that environment. We are in the process of structuring out build process to enable us to deploy these and distribute these in a more systematic fashion. Currently, we just zip up these projects and copy them over for deployment. I need some advice on how to go about this task using distutils. Our objective is to have a script to build a zip file (or tar file) using distutils that contains all the necessary code and necessary data/properties from the projects P1 through Pn. We should then be able to deploy this with setup.py and having our DJango-based web layer access it. My first attempt is to create a project whose sole purpose is to build the deployment artifacts. This will sit parallel to the projects P1 through Pn, called PBuild. Does this seem reasonable? I'm having some issues with this approach. Does anybody have any other ideas of how to do this?
0
python,django,deployment,distutils,project-organization
2012-03-22T16:16:00.000
0
9,826,322
There's different philosophies on how apps should be packaged, but most Python developers adhere to a very minimalistic approach. In other words, you package up the smallest units of logic you can. So, your goal here shouldn't be to cram everything together, but to package each discrete application separately. By application, here, I don't mean necessarily each Django app, although breaking out some of the apps into their own packages may be worthwhile as well. This is really all about reusability. Any piece that could serve a purpose in some other scenario should get its own package. Then, you can set them up to have dependencies on whatever other packages they require.
0
180
true
1
1
Multiple Python projects organization for deployment and/or distribution
9,826,658
1
1
0
2
1
1
1.2
0
Are there any best practices for the use of higher-level Python constructs such as threading.Condition, and collections.deque from modules written in C? In particular: Avoiding dict lookup costs, for methods and members Accessing parts of these constructs that are in C directly where possible When to reimplement desired functionality locally and not import from elsewhere in the standard library
0
python,c,cpython,python-c-api,python-c-extension
2012-03-23T05:58:00.000
0
9,834,761
String lookups on a dict are very cheap in Python, but if desired you can cache them in a struct. There usually is no provision for doing so, since these libraries are meant to be accessed via Python and not C. It is still possible to generate your own headers that match the definitions in the C modules, but they would need to be maintained per Python version. There's no good answer for this one. It comes down to "fast" vs. "fast enough".
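The "cache them in a struct" advice has a pure-Python analogue worth noting: resolve the attribute once and keep the bound method, instead of looking it up on every loop iteration. A minimal sketch with collections.deque:

```python
from collections import deque

def fill_slow(n):
    d = deque()
    for i in range(n):
        d.append(i)          # attribute lookup repeated on every iteration
    return d

def fill_fast(n):
    d = deque()
    append = d.append        # resolve the bound method once ("cache it")
    for i in range(n):
        append(i)
    return d

assert list(fill_slow(5)) == list(fill_fast(5)) == [0, 1, 2, 3, 4]
```

The two functions are behaviorally identical; the second simply avoids repeating the lookup, which is the same trade-off a C extension makes when it caches a pointer in a struct.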
0
120
true
0
1
Using higher-level Python constructs from C
9,834,834
1
13
0
1
36
0
0.015383
0
Background I would like my Python script to pause before exiting using something similar to: raw_input("Press enter to close.") but only if it is NOT run via command line. Command line programs shouldn't behave this way. Question Is there a way to determine if my Python script was invoked from the command line: $ python myscript.py verses double-clicking myscript.py to open it with the default interpreter in the OS?
0
python,command-line
2012-03-23T12:32:00.000
1
9,839,240
This is typically done manually; I don't think there is an automatic way to do it that works in every case. You should add a --pause argument to your script that prompts for a key at the end. When the script is invoked from a command line by hand, the user can add --pause if desired, but by default there won't be any wait. When the script is launched from an icon, the arguments in the icon should include --pause, so that there is a wait. Unfortunately you will need to either document the use of this option so that the user knows it needs to be added when creating an icon, or else provide an icon-creation function in your script that works for your target OS.
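A minimal sketch of the --pause flag using argparse. The question's raw_input is Python 2; input() is the Python 3 equivalent. The argument list passed to parse_args here simulates what a launcher icon would supply:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="demo of an opt-in pause")
    parser.add_argument("--pause", action="store_true",
                        help="wait for Enter before exiting (for icon launches)")
    return parser

# Simulate being launched from an icon whose command line includes --pause.
args = build_parser().parse_args(["--pause"])
# In a real script:
#     if args.pause:
#         input("Press enter to close.")   # raw_input() on Python 2
print(args.pause)
```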
0
35,970
false
0
1
How to determine if Python script was run via command line?
9,839,781
3
3
0
0
2
0
0
0
Suppose the following imagined scenario: I have a site that is used for military recruits. Military recruits and only military recruits may sign up on this site. - The easiest way to authenticate would be to get a list of pre-authorized email addresses. However, the military obviously will not release their email address list. How would I authenticate these individuals to sign up? My initial thought would be that I could get a sha3 hash of the email addresses. Then, when people register, I would check the sha3 of the email address they entered against the database. Basically, this would be a way to get a boolean back of whether the email is in the system without knowing the email address. Does this sound like a realistic approach, that would ensure the anonymity of the email address? Any better ideas to accomplish this?
0
python,security,encryption
2012-03-23T18:33:00.000
0
9,844,679
This is a workable zero-knowledge proof. However, if you have the co-operation of the military, one would think you could do better... You could get a (otherwise non-sensical) serial number and initial password from the military, and then have the recruits sign in using that. This will solve the problem of people not having a 1:1 relationship to an e-mail address, and be much harder to guess, too, since you need to guess the password as well as the serial number.
0
134
false
0
1
Authenticating email address without being able to view email address
9,852,214
3
3
0
0
2
0
0
0
Suppose the following imagined scenario: I have a site that is used for military recruits. Military recruits and only military recruits may sign up on this site. - The easiest way to authenticate would be to get a list of pre-authorized email addresses. However, the military obviously will not release their email address list. How would I authenticate these individuals to sign up? My initial thought would be that I could get a sha3 hash of the email addresses. Then, when people register, I would check the sha3 of the email address they entered against the database. Basically, this would be a way to get a boolean back of whether the email is in the system without knowing the email address. Does this sound like a realistic approach, that would ensure the anonymity of the email address? Any better ideas to accomplish this?
0
python,security,encryption
2012-03-23T18:33:00.000
0
9,844,679
I think it is insecure to have just a hash of the email address: hashing is not reversible, but it is theoretically possible for an incorrect email to have the same hash as the correct one. This is true for the MD5 hash algorithm and theoretically true for any other. I suggest using some kind of salt (an additional hash-function payload) or personal registration keys, and of course never use MD5. With a salt that both you and the "military" know, you can receive just hashes from the "military" and be reasonably sure the scheme is fair enough to identify recruits by email. But this technique is still vulnerable to random hash collisions. Probably the best way to be sure that a recruit's email is truly valid is to give recruits unique registration codes on the "military" side; then they give you pairs of the registration code and the hash of the code combined with the corresponding email. Every recruit then provides that registration code, and you can recalculate the recruit's personal hash from his email and registration code. The benefit of this second way is that even you will not be able to easily brute-force the hashes back into emails if the "military" gives you not just the code/hash pairs but a list of codes for every hash, only one of which is correct. Update: the maximum-paranoia way. You receive just a set of hashes from the "military", twice as many as there are recruits. Every recruit receives a unique registration code. You calculate the registration-code hash first and check whether you have it; then you combine this hash with the email and check whether you have the second hash. This way you will never be able to reverse the emails yourself.
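A sketch of the registration-code scheme described above. The code format and the address are made up for illustration, and SHA-256 stands in for the weak MD5 the answer warns against:

```python
import hashlib

def recruit_hash(registration_code, email):
    """Hash of code + email, as the scheme proposes (SHA-256, never MD5)."""
    payload = (registration_code + ":" + email.strip().lower()).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# The "military" side would hand over (code, hash) pairs like this one:
issued = {"RC-0001": recruit_hash("RC-0001", "recruit@example.mil")}

def verify(registration_code, email):
    """Recalculate the recruit's personal hash and compare."""
    expected = issued.get(registration_code)
    return expected is not None and expected == recruit_hash(registration_code, email)
```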
0
134
false
0
1
Authenticating email address without being able to view email address
9,852,047
3
3
0
1
2
0
1.2
0
Suppose the following imagined scenario: I have a site that is used for military recruits. Military recruits and only military recruits may sign up on this site. - The easiest way to authenticate would be to get a list of pre-authorized email addresses. However, the military obviously will not release their email address list. How would I authenticate these individuals to sign up? My initial thought would be that I could get a sha3 hash of the email addresses. Then, when people register, I would check the sha3 of the email address they entered against the database. Basically, this would be a way to get a boolean back of whether the email is in the system without knowing the email address. Does this sound like a realistic approach, that would ensure the anonymity of the email address? Any better ideas to accomplish this?
0
python,security,encryption
2012-03-23T18:33:00.000
0
9,844,679
Sounds good, as long as the "military" is convinced, correctly or not, that the hashing is truly irreversible, and is willing to trust you with the hashed list of addresses. (What is sha5, by the way? Afaik sha3 is the latest generation). If they will not entrust you with even the cryptographically hashed list, the alternative would be to delegate the authentication: You forward the email address to the military through a secure connection, and they tell you whether it's ok or not. It would be slower but you only need to do it once, at sign-up time.
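For reference, the hash-membership check from the question is only a few lines with hashlib's SHA-3 support (available since Python 3.6); the address used here is made up:

```python
import hashlib

def email_fingerprint(email):
    # Normalise before hashing so "  Recruit@Example.MIL " and
    # "recruit@example.mil" produce the same fingerprint.
    return hashlib.sha3_256(email.strip().lower().encode("utf-8")).hexdigest()

# The pre-authorised list as received: hashes only, no plaintext addresses.
authorised = {email_fingerprint("recruit@example.mil")}

def may_register(email):
    return email_fingerprint(email) in authorised
```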
0
134
true
0
1
Authenticating email address without being able to view email address
9,851,795
3
11
0
0
17
1
0
0
I downloaded the colorama module for python and I double clicked the setup.py. The screen flashed, but when I try to import the module, it always says 'No Module named colorama' I copied and pasted the folder under 'C:\Python26\Lib\site-packages' and tried to run the setup from there. Same deal. Am I doing something wrong? Thanks, Mike
0
python,colorama
2012-03-23T21:27:00.000
0
9,846,683
Re-installing colorama might not work right away. If there is a colorama .egg in site-packages, you need to remove that file first and then pip install colorama.
0
125,711
false
0
1
How to install Colorama in Python?
51,990,615
3
11
0
3
17
1
0.054491
0
I downloaded the colorama module for python and I double clicked the setup.py. The screen flashed, but when I try to import the module, it always says 'No Module named colorama' I copied and pasted the folder under 'C:\Python26\Lib\site-packages' and tried to run the setup from there. Same deal. Am I doing something wrong? Thanks, Mike
0
python,colorama
2012-03-23T21:27:00.000
0
9,846,683
Run the following command in Google Cloud Shell: sudo pip3 install colorama
0
125,711
false
0
1
How to install Colorama in Python?
68,844,752
3
11
0
0
17
1
0
0
I downloaded the colorama module for python and I double clicked the setup.py. The screen flashed, but when I try to import the module, it always says 'No Module named colorama' I copied and pasted the folder under 'C:\Python26\Lib\site-packages' and tried to run the setup from there. Same deal. Am I doing something wrong? Thanks, Mike
0
python,colorama
2012-03-23T21:27:00.000
0
9,846,683
I have also experienced this problem. Following the instructions to install with sudo pip install colorama, I receive the message: Requirement already satisfied: colorama in /usr/lib/python2.7/dist-packages. The problem for me is that I am using Python 3 in my shebang line: #!/usr/bin/env python3. Changing this to #!/usr/bin/env python works. Sorry, I don't know how to get it to work with Python 3!
0
125,711
false
0
1
How to install Colorama in Python?
54,669,213
2
3
0
5
2
0
1.2
0
I have an application that emails individuals on different occurrences. The entire application is on a single server. I am currently sending emails through SendGrid. At what volume of emails would it make sense to use a system like RabbitMQ to send out emails? Maximum rate = 1 email per minute? 1 email per second? 10 emails per second? How would I evaluate when the switch makes sense?
0
python,email,smtp,rabbitmq,amqp
2012-03-23T22:47:00.000
0
9,847,451
Why are you considering RabbitMQ? It is better to use an MTA/mail relay like Postfix, where you submit your emails and it handles them in a queue for you. You can configure it to dispatch the queue across different mail relays, set the email throughput, how many retries are made on failed sends, and so on.
0
1,146
true
0
1
At what email volume to use AMQP
9,847,881
2
3
0
1
2
0
0.066568
0
I have an application that emails individuals on different occurrences. The entire application is on a single server. I am currently sending emails through SendGrid. At what volume of emails would it make sense to use a system like RabbitMQ to send out emails? Maximum rate = 1 email per minute? 1 email per second? 10 emails per second? How would I evaluate when the switch makes sense?
0
python,email,smtp,rabbitmq,amqp
2012-03-23T22:47:00.000
0
9,847,451
Having RabbitMQ is a good option when you are considering scaling in the future, in terms of new SMTP send-mail workers or a new email server. As of now, if you have a single server and are not going to add more, RabbitMQ will only load that server further and be one more thing to maintain. But if you are going to send more than about 100 mails per second, it makes sense to have RabbitMQ: its goal is to free your calling function as soon as possible by offloading all the work to a RabbitMQ queue, where messages are kept until a worker or consumer picks them up. This helps in failure cases too: your mails are saved in RabbitMQ, so if the consumer (the SMTP send worker) fails, you still have them, and when it restarts RabbitMQ will deliver the remaining mails to it. I hope this makes sense; feel free to ask anything else about it. I used RabbitMQ for sending mail, but in my case we had one server running only RabbitMQ, so there it made sense.
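The offloading idea can be illustrated with the standard library alone; here queue.Queue stands in for the RabbitMQ queue (unlike a real broker, an in-process queue loses messages if the process dies, so this only demonstrates the decoupling):

```python
import queue
import threading

outbox = queue.Queue()   # stand-in for the RabbitMQ queue
sent = []

def smtp_worker():
    """Consumer: drains the queue so producers never block on SMTP."""
    while True:
        message = outbox.get()
        if message is None:          # shutdown sentinel
            break
        sent.append(message)         # a real worker would hand this to SMTP
        outbox.task_done()

worker = threading.Thread(target=smtp_worker)
worker.start()

# Producer side: enqueueing returns immediately, whatever SMTP is doing.
for i in range(3):
    outbox.put({"to": "user%d@example.com" % i, "body": "hello"})

outbox.put(None)   # tell the worker to stop
worker.join()
```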
0
1,146
false
0
1
At what email volume to use AMQP
27,737,869
1
3
0
2
0
0
0.132549
0
I am confused about using php or python for implementing server program. I seems that there are not only syntax differences. For example, the php program is short-lived (only exist when request comes and die when response is generated) and it can only store things in DB rather than memory. But for python (using TwistedWeb), the python program is long-lived. It can hold things in memory, doing something else when there are no request. Am I wrong? I am confused and please help me to clarify it.
0
php,python
2012-03-25T00:58:00.000
0
9,857,067
Both PHP and Python can be used for long-running programs. PHP was designed as an embeddable programming language for web servers. In the environment it is commonly set up in, the web server (generally Apache with mod_php) has PHP running in it, and the entire PHP environment is set up and torn down for each request. Python was designed as a general-purpose language. When used for developing web applications it is generally run separately from the web server process and has requests routed to it by the web server (Apache, nginx, etc.). There is no reason that this has to be this way: you can set up a Python program to run over CGI (where it will be restarted for each request) and you can set up PHP to run as a FastCGI program, where it runs separately from the web server and stays up between requests. However, if you want to persist important information (for example statistics about the total number of requests received or the total amount of work performed) between requests, you will be far better off persisting it to the filesystem (via files, a database, etc.) or to another in-memory process like Redis or Memcached. Even when the application process runs separately from the web server, it often spawns several child processes, and these processes are stopped and restarted after serving a certain number of requests (or after a certain amount of uptime) in order to release system resources. Important information needs to be persisted elsewhere (and backed up regularly).
0
763
false
0
1
php vs python for server program
9,857,192
2
2
0
0
0
0
0
0
I'm thinking about trying to convert a Scons (Python) script to another build system but was wondering if there was a Python-analysis library available in order to 'interrogate' the Scons/Python script? What I'm [possibly] after is something along the lines of Java's reflection mechanism, in fact, if this is possible via say Jython/Java, coding in Java, that would be best for me as a Java dev (I have no real background in Python). What I need to be able to do is extract the variable assigment values etc. for certain named class types and methods within the script, so that I can transfer them to my new output format. Any ideas? Thanks Rich
0
java,python,reflection,jython,scons
2012-03-25T12:14:00.000
0
9,860,029
If your current scons files are very regular and consistent it may be easier to do something "dumb" with standard text-editing tools. If you want to get smarter, you should notice that scons is itself a Python program, and it loads your build files which are also Python. So you could make your own "special" version of scons which implements the functions your build scripts use (to add programs, libraries, whatever). Then you could run your build scripts in your "fake" scons program and have your functions dump their arguments in a format suitable for your new build system. In other words, don't think of the problem in terms of analyzing the Python grammar completely--realize that you can actually run your build scripts as Python code and hijack their behavior. Easier said than done, I'm sure.
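A minimal sketch of that hijacking idea: define stub builder functions that only record their arguments, then exec the build script under them. The SConstruct text and the builder signatures here are simplified illustrations, not the real SCons APIs:

```python
# Record every call the build script makes instead of building anything.
captured = []

def Program(target, source, **kwargs):
    captured.append(("program", target, source, kwargs))

def Library(target, source, **kwargs):
    captured.append(("library", target, source, kwargs))

# A stand-in build script; in practice this would be read from disk.
sconstruct = """
Program('app', ['main.c', 'util.c'], LIBS=['m'])
Library('util', ['util.c'])
"""

# Run the script with only our stubs in scope; nothing is compiled,
# we just collect the targets/sources to dump for the new build system.
exec(compile(sconstruct, "<SConstruct>", "exec"),
     {"Program": Program, "Library": Library})
```

Since the build script is ordinary Python, loops and variables inside it are evaluated for free; only the builder names your scripts actually use need stubbing.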
0
95
false
0
1
"Analyse" the Python language to dissect a Scons build script?
9,865,039
2
2
0
0
0
0
0
0
I'm thinking about trying to convert a Scons (Python) script to another build system but was wondering if there was a Python-analysis library available in order to 'interrogate' the Scons/Python script? What I'm [possibly] after is something along the lines of Java's reflection mechanism, in fact, if this is possible via say Jython/Java, coding in Java, that would be best for me as a Java dev (I have no real background in Python). What I need to be able to do is extract the variable assigment values etc. for certain named class types and methods within the script, so that I can transfer them to my new output format. Any ideas? Thanks Rich
0
java,python,reflection,jython,scons
2012-03-25T12:14:00.000
0
9,860,029
I doubt it's the best tool for migrating scons, but python's inspect module offers some reflection facilities. For the rest, you can simply poke inside live classes and objects: Python has some data hiding but does not enforce access restrictions.
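For example, inspect can enumerate the functions a module defines along with their signatures; json is used here purely as a convenient target module:

```python
import inspect
import json  # any module will do as a demonstration target

# Map every function json exposes to its signature where available.
functions = {}
for name, obj in inspect.getmembers(json, inspect.isfunction):
    try:
        functions[name] = str(inspect.signature(obj))
    except (TypeError, ValueError):
        functions[name] = "<signature unavailable>"

print(sorted(functions))
```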
0
95
false
0
1
"Analyse" the Python language to dissect a Scons build script?
9,874,850
1
3
0
0
1
0
0
0
I am planning to build, for lack of a better term, a multi-user Customer Relationship Manager (CRM), and I want to create a unique identifier that is easy to transmit in email, via text, and verbally to other team members. For example: I upload my list of 100 customers, and John Smith and his phone number are included in that list. Upon upload, I want to generate a hidden fingerprint / unique identifier for John Smith in the database, and then propagate a 12-digit number that can be shared publicly. In my mind it works like this - john smith + ph: 5557898095 = fingerprint: 7e013d7962800374e6e67dd502f2d7c0 displays to end user id number: 103457843983 My question is - what method or process should I use to take the name and phone number, generate a hidden key, and then translate it to a displayable key that is linked to the hidden one? I hope this is clear. I mainly want to use the right logical process.
0
php,python,mysql
2012-03-25T12:56:00.000
0
9,860,276
Assuming your real ID is the auto_incremented field in your customer table, then just have a second table that maps your public ID to the real ID. Assuming you're using some sort of hashing algorithm to generate your public ID, it'd be a simple process to do a lookup on that table when you create a new user to detect a clash with an existing user, then regenerate a new ID until there's no clash (e.g. include system time as part of your hash input, then just keep regenerating until you find a unique ID)
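A sketch of that generate-and-retry loop. The hash choice (SHA-256 rather than the MD5-style 32-hex digest in the question's example), the separator, and the in-memory existing set are all placeholders for a real mapping table:

```python
import hashlib

def fingerprint(name, phone):
    """Hidden internal key for a contact."""
    return hashlib.sha256((name.lower() + "|" + phone).encode("utf-8")).hexdigest()

def public_id(fp, attempt=0):
    """12-digit shareable ID derived from the fingerprint; bump `attempt`
    and retry when the mapping table reports a clash."""
    digest = hashlib.sha256((fp + ":" + str(attempt)).encode("utf-8")).hexdigest()
    return str(int(digest, 16) % 10**12).zfill(12)

existing = set()   # stands in for the public-ID -> real-ID mapping table

def assign_id(name, phone):
    fp = fingerprint(name, phone)
    attempt = 0
    pid = public_id(fp, attempt)
    while pid in existing:           # regenerate until there's no clash
        attempt += 1
        pid = public_id(fp, attempt)
    existing.add(pid)
    return fp, pid
```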
0
739
false
0
1
Unique Key Generation Logic
9,860,314