Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | DISCREPANCY (int64) | Tags (string) | ERRORS (int64) | A_Id (int64) | API_CHANGE (int64) | AnswerCount (int64) | REVIEW (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | DOCUMENTATION (int64) | Question (string) | Title (string) | CONCEPTUAL (int64) | Score (float64) | API_USAGE (int64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5,920,764 | 2011-05-07T11:33:00.000 | 8 | 0 | 0 | 1 | 0 | python,macos,wxpython,uninstallation | 0 | 5,922,093 | 0 | 1 | 0 | false | 0 | 1 | If you look in the .dmg for wxPython, there is an uninstall_wxPython.py uninstall script. Just drag it to your desktop and run python ~/Desktop/uninstall_wxPython.py in a terminal. | 1 | 5 | 0 | 0 | Some details of my machine and installed packages before proceeding further:
Mac OSX version: 10.6.6
Python version: Activestate Python 2.7.1
wxpython version: wxpython 2.8 (uses Carbon API hence limited to 32-bit mode arch only)
I installed wxPython2.8-osx-unicode-py2.7 from the wxpython website using their .dmg installer file. This package uses the Carbon API and hence is limited to 32-bit mode only. I have to run my applications using "arch -i386" in order to import wx, and due to this limitation I am unable to import certain other packages like "MySQLdb" which are not available in 32-bit mode. So, my best option is to uninstall wxpython 2.8 and install wxpython 2.9, because version 2.9 uses the Cocoa API which has both 32-bit and 64-bit support.
I don't know how to uninstall wxpython2.8 on my Mac OSX. Can anyone please help me? | How to uninstall wxpython 2.8 on Mac OSX 10.6 | 0 | 1 | 1 | 0 | 0 | 4,762 |
5,922,032 | 2011-05-07T15:29:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python | 0 | 5,922,070 | 0 | 1 | 0 | true | 1 | 0 | It's very unclear what you are looking for. If the question is about server push, keywords to start with are WebSockets, Socket.io, Ajax, Comet, and Bayeux. | 1 | 1 | 0 | 0 | The original framework of my app (using Python) is below:
The user keys in keywords in forms.
The app uses the form data as parameters to get some results from a third-party website.
The result from the third-party website is analyzed in my app.
Then the analysis result is shown to the user.
But there are some problems between my app and the third-party website, so I want to change the framework of my app:
The user keys in keywords in forms.
Store the keywords from the forms in a cookie.
Automatically download a JavaScript file from my app (website), and run it on the user's computer.
The JavaScript will get the result from the third-party website and then send it to my app.
My app will analyze the result (from the third-party website) and then show the analysis result to the user.
I want to know how I can do this, or what keywords I should search for in Google. | Load form data in JavaScript and run something client side | 0 | 1.2 | 1 | 0 | 0 | 248 |
5,930,982 | 2011-05-08T23:06:00.000 | 1 | 0 | 0 | 1 | 0 | python,windows,apache,python-3.x | 0 | 5,931,096 | 0 | 3 | 0 | false | 0 | 0 | Python 3.0 is only supported via CGI. Put your CGI script in cgi-bin\. If you're willing to look at newer versions, preliminary support is available in mod_wsgi (but you'll probably have to build it yourself). | 1 | 3 | 0 | 1 | I've searched for ages on how to use Python 3 under Apache. If there is a walkthrough anywhere, it's very well hidden. Thus, hopefully, one of you Python professionals could make a quick 1-2-3 on how it's done!
I'm on Windows 7 using the newest version of XAMPP. | How do I use Python 3.0 under Apache? | 0 | 0.066568 | 1 | 0 | 0 | 1,328 |
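The CGI route the answer mentions can be sketched as follows; the filename and cgi-bin location are assumptions, and Apache must have ExecCGI enabled for that directory:

```python
#!/usr/bin/env python3
# Hypothetical cgi-bin/hello.py -- Apache runs this once per request via CGI.

def make_response(body):
    # A CGI response is headers, a blank line, then the body.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(make_response("<html><body>Hello from Python 3</body></html>"), end="")
```

On Windows/XAMPP the shebang line may need to point at the actual interpreter path (for example `#!C:/Python30/python.exe` — a hypothetical path) for Apache to find Python.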
5,934,326 | 2011-05-09T08:20:00.000 | 2 | 0 | 0 | 0 | 0 | javascript,android,python,sl4a | 0 | 5,939,196 | 0 | 1 | 0 | true | 1 | 0 | You'll need to wait for a refresh event (this would be a custom event) in your JavaScript that is waiting for an event posted from your Python script. The only communication layer between JavaScript and Python is via events. | 1 | 1 | 0 | 0 | I am writing an Android app using Python and SL4A. The app uses a webview that I wish to refresh. I plan on doing this by utilising javascript location.replace() within a wrapping doRefresh() javascript function. The problem I have is that I do not know how to call the javascript function from within my main event loop within the Python code.
Is there a way to directly call the method?
or
Is there a way to indirectly call the method say via a button's onClick and a mimic screen tap?
Thanks. | How to call a javascript method from python on SL4A? | 0 | 1.2 | 1 | 0 | 0 | 618 |
5,953,657 | 2011-05-10T17:02:00.000 | 0 | 1 | 1 | 0 | 0 | python | 0 | 5,953,799 | 0 | 4 | 0 | false | 0 | 0 | You can tell your friend to make *.py files to be executed by the interpreter. Change it from Explorer:Tools:Folder Options:File Types. | 3 | 2 | 0 | 0 | How can I run my program using test files on my desktop without typing in the specific pathname? I just want to be able to type the file name and continue on with my program, since I want to be able to send it to a friend without needing him to change the path; it should just read the exact same file that he has on his desktop. | Python path help | 0 | 0 | 1 | 0 | 0 | 312 |
5,953,657 | 2011-05-10T17:02:00.000 | 1 | 1 | 1 | 0 | 0 | python | 0 | 5,953,805 | 0 | 4 | 0 | true | 0 | 0 | f = open(os.path.join(os.environ['USERPROFILE'], 'DESKTOP', my_filename)) | 3 | 2 | 0 | 0 | how can i run my program using test files on my desktop without typing in the specific pathname. I just want to be able to type the file name and continue on with my program. Since i want to be able to send it to a friend and not needing for him to change the path rather just read the exact same file that he has on his desktop. | Python path help | 0 | 1.2 | 1 | 0 | 0 | 312 |
5,953,657 | 2011-05-10T17:02:00.000 | 0 | 1 | 1 | 0 | 0 | python | 0 | 5,953,763 | 0 | 4 | 0 | false | 0 | 0 | If you place your Python script in the same directory as the files your script is going to open, then you don't need to specify any paths. Be sure to allow the Python installer to "Register Extensions", so Python is called when you double-click on a Python script. | 3 | 2 | 0 | 0 | How can I run my program using test files on my desktop without typing in the specific pathname? I just want to be able to type the file name and continue on with my program, since I want to be able to send it to a friend without needing him to change the path; it should just read the exact same file that he has on his desktop. | Python path help | 0 | 0 | 1 | 0 | 0 | 312 |
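The accepted answer's idea can be wrapped in a small helper; the HOME/`~` fallback is an addition so the same script also works on non-Windows machines, and `data.txt` is a hypothetical test file:

```python
import os

def desktop_path(filename):
    # Windows exposes the profile directory as %USERPROFILE%;
    # fall back to HOME / ~ so the same script works elsewhere.
    home = os.environ.get('USERPROFILE') or os.environ.get('HOME') or os.path.expanduser('~')
    return os.path.join(home, 'Desktop', filename)

# f = open(desktop_path('data.txt'))
```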
5,953,949 | 2011-05-10T17:27:00.000 | 2 | 0 | 1 | 0 | 0 | python | 0 | 5,954,080 | 0 | 6 | 0 | false | 0 | 0 | Why don't you just read the file char by char using file.read(1)?
Then, you could - in each iteration - check whether you arrived at the char 1. Then you have to make sure that storing the string is fast. | 2 | 4 | 0 | 0 | Hey there, I have a rather large file that I want to process using Python and I'm kind of stuck as to how to do it.
The format of my file is like this:
0 xxx xxxx xxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
1 xxx xxxx xxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
So I basically want to read in the chunk up from 0-1, do my processing on it, then move on to the chunk between 1 and 2.
So far I've tried using a regex to match the number and then keep iterating, but I'm sure there has to be a better way of going about this. Any suggestion/info would be greatly appreciated. | Python: Read large file in chunks | 0 | 0.066568 | 1 | 0 | 0 | 8,205 |
5,953,949 | 2011-05-10T17:27:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 5,954,012 | 0 | 6 | 0 | false | 0 | 0 | If the format is fixed, why not just read 3 lines at a time with readline() | 2 | 4 | 0 | 0 | Hey there, I have a rather large file that I want to process using Python and I'm kind of stuck as to how to do it.
The format of my file is like this:
0 xxx xxxx xxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
1 xxx xxxx xxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
So I basically want to read in the chunk up from 0-1, do my processing on it, then move on to the chunk between 1 and 2.
So far I've tried using a regex to match the number and then keep iterating, but I'm sure there has to be a better way of going about this. Any suggestion/info would be greatly appreciated. | Python: Read large file in chunks | 0 | 0 | 1 | 0 | 0 | 8,205 |
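One way to avoid regex scanning is a small generator that starts a new record whenever a line begins with a bare record number; this is a sketch assuming the number is always the first whitespace-separated token on the line:

```python
def read_records(lines):
    """Yield each record as a list of lines; a new record starts
    whenever a line begins with a bare integer token."""
    record = []
    for line in lines:
        first = line.split(None, 1)[0] if line.strip() else ''
        if first.isdigit() and record:
            yield record
            record = []
        record.append(line)
    if record:
        yield record

# Usage sketch (hypothetical filename):
# with open('big_file.txt') as f:
#     for record in read_records(f):
#         process(record)  # your per-chunk processing
```

Because it iterates over the file object lazily, only one record is held in memory at a time.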
5,965,655 | 2011-05-11T14:12:00.000 | 2 | 1 | 0 | 0 | 1 | php,python,ajax,ipc | 0 | 5,965,679 | 0 | 4 | 0 | false | 0 | 0 | I think you would have to use a meta refresh and maybe have the Python script write the status to a file and then have the PHP page read from it.
You could use AJAX as well to make it more dynamic.
Also, probably shouldn't use exec()...that opens up a world of vulnerabilities. | 1 | 10 | 0 | 0 | I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl_fork will not work.
LATER EDIT:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket-based communication, but I don't see how this would be useful in my case (some hints, maybe?)
Thank you | Communication between PHP and Python | 1 | 0.099668 | 1 | 0 | 0 | 16,013 |
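The status-file approach suggested in the answer could look like this on the Python side; the JSON format and file name are assumptions, and the write goes through a temp file so PHP never reads a half-written status (`os.replace` needs Python 3.3+):

```python
import json
import os
import tempfile

def write_progress(path, percent, message):
    # Write to a temp file in the same directory, then atomically
    # swap it in so a concurrent PHP read never sees partial JSON.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'w') as f:
        json.dump({'percent': percent, 'message': message}, f)
    os.replace(tmp, path)
```

The PHP page (or the AJAX endpoint it polls) then only has to read the file and json_decode() its contents.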
5,983,032 | 2011-05-12T19:00:00.000 | 2 | 0 | 0 | 1 | 0 | python,file-upload,tornado,ajax-upload | 0 | 5,989,216 | 0 | 1 | 0 | true | 1 | 0 | I got the answer.
I need to use self.request.body to get the raw post data.
I also need to pass in the correct _xsrf token, otherwise tornado will fire a 403 exception.
So that's about it. | 1 | 3 | 0 | 0 | I'm using this javascript library (http://valums.com/ajax-upload/) to upload file to a tornado web server, but I don't know how to get the file content. The javascript library is uploading using XHR, so I assume I have to read the raw post data to get the file content. But I don't know how to do it with Tornado. Their documentation doesn't help with this, as usual :(
In php they have something like this:
$input = fopen("php://input", "r");
so what's the equivalent in tornado? | asynchronous file upload with ajaxupload to a tornado web server | 0 | 1.2 | 1 | 1 | 0 | 1,836 |
5,986,472 | 2011-05-13T01:51:00.000 | 2 | 0 | 0 | 1 | 0 | python,django,linux,nginx,uwsgi | 0 | 5,986,912 | 0 | 1 | 0 | true | 1 | 0 | I like having regular users on a system:
multiple admins show up in sudo logs -- there's nothing quite like asking a specific person why they made a specific change.
not all tasks require admin privileges, but admin-level mistakes can be more costly to repair
it is easier to manage the ~/.ssh/authorized_keys if each file contains only keys from a specific user -- if you get four or five different users in the file, it's harder to manage. Small point :) but it is so easy to write cat ~/.ssh/id_rsa.pub | ssh user@remotehost "cat - > ~/.ssh/authorized_keys" -- if one must use >> instead, it's precarious. :)
But you're right, you can do all your work as root and not bother with regular user accounts. | 1 | 6 | 0 | 0 | I'm currently trying to set up nginx + uWSGI server for my Django homepage. Some tutorials advice me to create specific UNIX users for certain daemons. Like nginx user for nginx daemon and so on. As I'm new to Linux administration, I thought just to create second user for running all the processes (nginx, uWSGI etc.), but it turned out that I need some --system users for that.
Main question is what users would you set up for nginx + uWSGI server and how to work with them? Say, I have server with freshly installed Debian Squeeze.
Should I install all the packages, virtual environment and set up all the directories as root user and then create system ones to run the scripts? | Linux user scheme for a Django production server | 0 | 1.2 | 1 | 0 | 0 | 925 |
6,000,205 | 2011-05-14T06:32:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,django-views | 0 | 68,934,242 | 0 | 9 | 0 | false | 1 | 0 | I believe the updated solution is view.__module__. This returns your app_name both from Django and Django Rest Framework.
My scenario involved getting the module or app_name dynamically from the view call, so that I could run an access permission check for that particular module. | 1 | 26 | 0 | 0 | If you are in the view and want to retrieve the app name using Python (the app name will be used for further logic), how would you do it? | How to get an app name using python in django | 0 | 0 | 1 | 0 | 0 | 31,008 |
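A sketch of the view.__module__ idea: a view defined in myapp/views.py has __module__ == 'myapp.views', so the first dotted component is the app name. The view path is hypothetical, and this assumes apps are top-level packages:

```python
def app_name_from_view(view_func):
    # 'myapp.views' -> 'myapp'; breaks for nested apps like
    # 'project.apps.myapp.views', so treat this as a simplification.
    return view_func.__module__.split('.')[0]
```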
6,006,666 | 2011-05-15T05:26:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,apache2,mod-wsgi,hotdeploy | 0 | 6,007,285 | 0 | 2 | 0 | false | 1 | 0 | Just touching the wsgi file always worked for me. | 1 | 4 | 0 | 0 | I have set up an Apache server with mod_wsgi, python_sql, mysql and django.
Everything works fine, except that if I make some code changes, they are not reflected immediately, though I think that everything is compiled on the fly when it comes to python/mod_wsgi.
I have to shut down the server and come back again to see the changes.
Can someone point me to how hot-deployment can be achieved with the above setup?
Thanks,
Neeraj | Hot deployment using mod_wsgi,python and django on Apache | 0 | 0.291313 | 1 | 1 | 0 | 768 |
6,013,930 | 2011-05-16T06:37:00.000 | 1 | 0 | 1 | 0 | 0 | python,pdf,pypdf | 0 | 6,013,951 | 0 | 1 | 0 | true | 0 | 0 | Short answer: not possible with pypdf and not with any PDF tool for Python I know. | 1 | 0 | 0 | 0 | how to use python pypdf to read pdf and get highlighted words? I highlighted the unknown words in a pdf and I want to extract them for referencing later. | python pypdf to read pdf and get highlighted words | 0 | 1.2 | 1 | 0 | 0 | 864 |
6,020,070 | 2011-05-16T16:01:00.000 | 1 | 0 | 0 | 0 | 0 | wxpython,wxwidgets | 0 | 6,641,179 | 0 | 1 | 0 | false | 0 | 1 | Maybe you could use the editor to set the numeric value of the cell, then on the renderer draw a vertical bar by dividing the value by the maximum possible value and multiplying it by the width of the cell (in px, to get the proportion of the cell that the slider needs to appear at) and drawing a narrow vertical rectangle at this point that would act as the indicator, something like:
+---------------+
| |-| |
+---------------+
Note that this is untested, but I plan to do something similar on my current project. Just out of curiosity, how did you get the slider to appear in the editor? | 1 | 1 | 0 | 0 | I need to show a slider in one column of a grid. I was able to create a custom CellEditor which displays the slider when I double click into a cell of the appropriate column so it enters the edit mode. But I don't know how to create a custom CellRenderer that displays the slider in all cell that are not in edit mode.
Unfortunately the wx.RendererNative does not offer a method like DrawSlider() :-(
I appreciate any suggestion.
Below you can see an example of what is working so far. You can see the one cell with the slider. | wx/wxPython: How to add a slider to a grid cell? | 0 | 0.197375 | 1 | 0 | 0 | 337 |
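The untested idea in the answer boils down to mapping the cell value onto an x offset; a custom grid renderer's Draw() could then call dc.DrawRectangle() with the result. Here is just that geometry as a plain helper (the bar width and the 1 px vertical margin are arbitrary choices):

```python
def indicator_rect(cell_x, cell_y, cell_w, cell_h, value, max_value, bar_w=4):
    # Proportion of the cell covered by the value, clamped to [0, 1],
    # then turned into an x position that keeps the bar inside the cell.
    frac = 0.0 if max_value <= 0 else min(max(float(value) / max_value, 0.0), 1.0)
    x = cell_x + int(frac * (cell_w - bar_w))
    return (x, cell_y + 1, bar_w, cell_h - 2)
```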
6,025,305 | 2011-05-17T01:27:00.000 | 0 | 0 | 1 | 1 | 0 | emacs,buffer,python-mode | 0 | 6,401,182 | 0 | 2 | 0 | false | 0 | 0 | I use python-mode 5.2.0.
I went into python-mode.el and changed the switch-to-buffer-other-window to switch-to-buffer.
I evaluated it and now the interpreter opens up in the same window (regardless of the number of other windows I have).
Did you evaluate the function when you changed the above line?
Btw, opening the interpreter in another window is a feature, not a bug, IMHO. We want to be able to see the interpreter when we evaluate a region of code using C-c | or the buffer using C-c C-c. | 1 | 1 | 0 | 1 | Maybe I'm being irrational but I really really hate it when a command opens a new window in emacs. I'm using emacs on Ubuntu which came with python-mode and when I start an interpreter with C-c ! it pops up in a new window.
What I want is for emacs to switch to a new buffer in the same window. So far I've tried adding Jython(I set the interpreter to jython) to same-window-buffer-names and even going into python-mode.el and changing switch-to-buffer-other-window calls to switch-to-buffer(which I since changed back). So far I've gotten no change.
I have emacs 23.1.1 and python-mode 5.1.0
Edit: The actual name of the jython buffer is bracketed by asterisks and I don't know how to let stackoverflow know that they aren't styling information. | making python interpreter open in same window | 0 | 0 | 1 | 0 | 0 | 1,148 |
6,030,115 | 2011-05-17T11:29:00.000 | 0 | 1 | 0 | 0 | 0 | python,amazon-ec2 | 0 | 71,252,207 | 0 | 5 | 0 | false | 1 | 0 | Simply add your code to GitHub, clone it on the EC2 instance, and run that code. | 2 | 71 | 0 | 0 | I understand nearly nothing about the functioning of EC2. I created an Amazon Web Services (AWS) account. Then I launched an EC2 instance.
And now I would like to execute some Python code in this instance, and I don't know how to proceed. Is it necessary to load the code somewhere in the instance? Or into Amazon's S3 and link it to the instance?
Is there a guide that explains the possible uses of an instance? I feel like a man in front of a flying saucer's dashboard without a user's guide. | How to run code in an Amazon EC2 instance | 0 | 0 | 1 | 0 | 0 | 66,912 |
6,030,115 | 2011-05-17T11:29:00.000 | 4 | 1 | 0 | 0 | 0 | python,amazon-ec2 | 0 | 12,026,840 | 0 | 5 | 0 | false | 1 | 0 | Launch your instance through Amazon's Management Console -> Instance Actions -> Connect
(More details in the getting started guide)
Launch the Java based SSH CLient
Plugins-> SCFTP File Transfer
Upload your files
run your files in the background (with '&' at the end or use nohup)
Be sure to select an AMI with python included, you can check by typing 'python' in the shell.
If your app requires any unorthodox packages, you'll have to install them. | 1 | 71 | 0 | 0 | I understand nearly nothing about the functioning of EC2. I created an Amazon Web Services (AWS) account. Then I launched an EC2 instance.
And now I would like to execute some Python code in this instance, and I don't know how to proceed. Is it necessary to load the code somewhere in the instance? Or into Amazon's S3 and link it to the instance?
Is there a guide that explains the possible uses of an instance? I feel like a man in front of a flying saucer's dashboard without a user's guide. | How to run code in an Amazon EC2 instance | 0 | 0.158649 | 1 | 0 | 0 | 66,912 |
6,040,603 | 2011-05-18T06:14:00.000 | 2 | 0 | 1 | 0 | 0 | python,dot,pydot | 0 | 6,040,922 | 0 | 1 | 0 | true | 0 | 0 | dot.write_png('filename.png')? Or is there something I'm missing?
Also, the neato command-line program has a -n option for graph files that already have layout. The program description says it is for undirected graphs, but I tried it with a digraph and it produced the correct result. | 1 | 1 | 0 | 0 | I have taken an initial DOT file and modified the pos attributes of some nodes using pydot. Now I want to render an image file that shows the nodes in their new positions. The catch is, I don't want a layout program to mess with the positions! I just want to see the nodes exactly where the pos attribute indicates. I don't care about how the edges look.
I can produce a DOT file with my positions easily using pydot, but I can't figure out how to make an image file, either in pydot or on the command line with dot. Help would be really appreciated! Thanks! | How to render DOT file using only pos attributes | 0 | 1.2 | 1 | 0 | 0 | 672 |
6,041,395 | 2011-05-18T07:42:00.000 | 3 | 0 | 1 | 0 | 0 | python,class,serialization,memory-management,pickle | 0 | 6,046,352 | 0 | 2 | 0 | false | 0 | 0 | Do you construct your tree once and then use it without modifying it further? In that case you might want to consider using separate structures for the dynamic construction and the static usage.
Dicts and objects are very good for dynamic modification, but they are not very space efficient in a read-only scenario. I don't know exactly what you are using your suffix tree for, but you could let each node be represented by a 2-tuple of a sorted array.array('c') and an equally long tuple of subnodes (a tuple instead of a vector to avoid overallocation). You traverse the tree using the bisect-module for lookup in the array. The index of a character in the array will correspond to a subnode in the subnode-tuple. This way you avoid dicts, objects and vector.
You could do something similar during the construction process, perhaps using a subnode-vector instead of subnode-tuple. But this will of course make construction slower, since inserting new nodes in a sorted vector is O(N). | 1 | 10 | 1 | 0 | I was wondering whether someone might know the answer to the following.
I'm using Python to build a character-based suffix tree. There are over 11 million nodes in the tree which fits in to approximately 3GB of memory. This was down from 7GB by using the slot class method rather than the Dict method.
When I serialise the tree (using the highest protocol) the resulting file is more than a hundred times smaller.
When I load the pickled file back in, it again consumes 3GB of memory. Where does this extra overhead come from, is it something to do with Pythons handling of memory references to class instances?
Update
Thank you larsmans and Gurgeh for your very helpful explanations and advice. I'm using the tree as part of an information retrieval interface over a corpus of texts.
I originally stored the children (max of 30) as a Numpy array, then tried the hardware version (ctypes.py_object*30), the Python array (ArrayType), as well as the dictionary and Set types.
Lists seemed to do better (using guppy to profile the memory, and __slots__['variable',...]), but I'm still trying to squash it down a bit more if I can. The only problem I had with arrays is having to specify their size in advance, which causes a bit of redundancy in terms of nodes with only one child, and I have quite a lot of them. ;-)
After the tree is constructed I intend to convert it to a probabilistic tree with a second pass, but may be I can do this as the tree is constructed. As construction time is not too important in my case, the array.array() sounds like something that would be useful to try, thanks for the tip, really appreciated.
I'll let you know how it goes. | Python memory serialisation | 0 | 0.291313 | 1 | 0 | 0 | 415 |
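A sketch of the bisect-based child lookup suggested in the answer; during construction each node keeps two parallel sequences as plain lists, which could later be frozen to a string and a tuple for the read-only pass (the string/tuple freezing step is an assumption, not shown here):

```python
from bisect import bisect_left

def add_child(chars, children, ch, node):
    # Keep chars sorted and children aligned with it by index.
    i = bisect_left(chars, ch)
    chars.insert(i, ch)
    children.insert(i, node)

def find_child(chars, children, ch):
    # Binary search over the sorted edge labels.
    i = bisect_left(chars, ch)
    if i < len(chars) and chars[i] == ch:
        return children[i]
    return None
```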
6,058,019 | 2011-05-19T11:37:00.000 | 2 | 0 | 0 | 0 | 0 | python,security,passwords | 0 | 6,058,858 | 0 | 2 | 1 | false | 0 | 0 | If you use a different salt for each user, you must store it somewhere (ideally in a different place). If you use the same salt for every user, you can hardcode it in your app, but it can be considered less secure.
If you don't keep the salt, you will not be able to match a given password against the one in your database.
The aim of the salt is to make brute-force or dictionary attacks a lot harder. That is why it is more secure if stored separately, to avoid someone having both the hashed passwords and the corresponding salts. | 1 | 10 | 0 | 0 | I am creating software with a user + password. After authentication, the user can access some semi-public services, but also encrypt some files that only the user can access.
The user must be stored as is, without modification, if possible. After auth, the user and the password are kept in memory as long as the software is running (I don't know if that's okay either).
The question is how should I store this user + password combination in a potentially insecure database?
I don't really understand what I should expose.
Let's say I create an enhanced key like this:
salt = random 32 characters string (is it okay?)
key = hash(usr password + salt)
for 1 to 65000 do
key = hash(key + usr password + salt)
Should I store the [plaintext user], [the enhanced key] and [the salt] in the database?
Also, what should I use to encrypt (with AES or Blowfish) some files using a new password every time?
Should I generate a new salt and create a new enhanced key using (the password stored in memory of the program + the salt)?
And in this case, if I store the encrypted file in the database, I should probably only store the salt.
The database is the same as where I store the user + password combination.
The file can only be decrypted if someone can generate the key, but he doesn't know the password. Right?
I use Python with PyCrypto, but it's not really important, a general example is just fine.
I have read a few similar questions, but they are not very explicit.
Thank you very very much! | Storing user and password in a database | 1 | 0.197375 | 1 | 1 | 0 | 3,307 |
6,064,044 | 2011-05-19T19:46:00.000 | 0 | 0 | 0 | 0 | 1 | button,menu,bitmap,wxpython,toggle | 0 | 6,073,070 | 0 | 2 | 0 | false | 0 | 1 | Per Mark's suggestion, if you have wx 2.8.12 you can use a plate button to get the toggle/bitmap/menu functionality. Since it is not easy for me to update to the newer wx at this point, I'll use a bitmap button and fake the toggle. | 2 | 0 | 0 | 0 | I need a button that has a bitmap, toggles, and to which I can add a menu (I realize this is asking a lot). I can't figure out a way to do this in wx python. Here are the things I've tried and why they don't work:
plate buttons: don't toggle
genbitmaptogglebuttons: for some reason, these buttons kill my tooltips (I posted this problem earlier and never got an answer)
toolbar buttons: can't add a drop down menu to a button. I would make a separate button for the drop down menu, but the toolbar has to be oriented vertically, and I don't know how to get the drop down button to show up beside its corresponding button, rather than beneath it with a vertical toolbar orientation.
bitmap buttons: won't toggle
Am I missing something obvious? If not I'm just going to resort to faking a toggle by changing the border/background color, unless someone has a better suggestion.
Thanks. | wx python bitmap/toggle/menu button | 0 | 0 | 1 | 0 | 0 | 618 |
6,064,044 | 2011-05-19T19:46:00.000 | 0 | 0 | 0 | 0 | 1 | button,menu,bitmap,wxpython,toggle | 0 | 6,064,311 | 0 | 2 | 0 | false | 0 | 1 | I don't see a pre-built button with all those features. I would think that you can use the generic toggle button or maybe the ShapedButton for your bitmap toggle functionality and attach a right-click popup menu. I'm not really sure what you mean by a menu, so that may not work. If you're talking about a menu implementation similar to the one that the PlateButton has, then you'll probably have to roll your own button. The guys on the wxPython mailing list can tell you how to do that. | 2 | 0 | 0 | 0 | I need a button that has a bitmap, toggles, and to which I can add a menu (I realize this is asking a lot). I can't figure out a way to do this in wx python. Here are the things I've tried and why they don't work:
plate buttons: don't toggle
genbitmaptogglebuttons: for some reason, these buttons kill my tooltips (I posted this problem earlier and never got an answer)
toolbar buttons: can't add a drop down menu to a button. I would make a separate button for the drop down menu, but the toolbar has to be oriented vertically, and I don't know how to get the drop down button to show up beside its corresponding button, rather than beneath it with a vertical toolbar orientation.
bitmap buttons: won't toggle
Am I missing something obvious? If not I'm just going to resort to faking a toggle by changing the border/background color, unless someone has a better suggestion.
Thanks. | wx python bitmap/toggle/menu button | 0 | 0 | 1 | 0 | 0 | 618 |
6,076,014 | 2011-05-20T18:11:00.000 | 4 | 0 | 0 | 0 | 0 | python,drawing,wxpython | 0 | 6,076,453 | 0 | 2 | 0 | true | 0 | 1 | I would just call self.Refresh() or maybe RefreshRect() and pass the area that needs to be repainted. | 1 | 4 | 0 | 0 | I've got a Canvas which manipulates objects in the mouse event handler. After modifying the objects, I want to trigger the OnPaint() event for the same Canvas to show (rerender) the changes. What is the right way to do this? It doesn't let me call OnPaint() directly. Also, is triggering an event from another event "wrong" in some sense, or likely to lead to trouble? | Force repaint in wxPython Canvas | 0 | 1.2 | 1 | 0 | 0 | 3,642 |
6,085,280 | 2011-05-22T00:27:00.000 | 1 | 1 | 0 | 0 | 0 | php,python,file-upload | 0 | 6,085,309 | 0 | 2 | 1 | false | 0 | 0 | I think you're referring to an application made in PHP running on some website, in which case that's just normal HTTP stuff.
So just look at what name the file field has on the HTML form generated by that PHP script and then do a normal post (urllib2 or whatever you use). | 1 | 2 | 0 | 1 | I was wondering: is there any tutorial out there that can teach you how to push multiple files from the desktop to a PHP-based web server using a Python application?
Edited
I am going to be writing this, so I am wondering in general what would be the best method to push files from my desktop to the web server. From some responses I read about FTP, so I will look into that (no SFTP support, sadly), so just plain old FTP. My other option is to push the data and have PHP read the data that's being sent to it, pretty much like the ActionScript + Flash file uploader I made, which pushes the files to the server where they are then fetched by PHP, and it goes on from that point on. | how to upload files to PHP server with use of Python? | 1 | 0.099668 | 1 | 0 | 1 | 2,176 |
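The "normal post" the answer describes can be built by hand with the standard library. A sketch — the field name 'userfile' and the upload URL are assumptions and must match what the PHP form expects ($_FILES['userfile']):

```python
import uuid

def encode_multipart(field, filename, data):
    # Build a multipart/form-data body the way a browser would.
    boundary = uuid.uuid4().hex
    head = ('--{b}\r\n'
            'Content-Disposition: form-data; name="{f}"; filename="{n}"\r\n'
            'Content-Type: application/octet-stream\r\n\r\n'
            ).format(b=boundary, f=field, n=filename)
    tail = '\r\n--{b}--\r\n'.format(b=boundary)
    body = head.encode('ascii') + data + tail.encode('ascii')
    return body, 'multipart/form-data; boundary=' + boundary

# Usage sketch (hypothetical URL):
# import urllib.request
# body, ctype = encode_multipart('userfile', 'photo.jpg',
#                                open('photo.jpg', 'rb').read())
# req = urllib.request.Request('http://example.com/upload.php', data=body,
#                              headers={'Content-Type': ctype})
# urllib.request.urlopen(req)
```

For multiple files, either repeat the POST per file or extend the body with one part per file before the closing boundary.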
6,090,288 | 2011-05-22T19:43:00.000 | 1 | 0 | 0 | 0 | 0 | python,arrays,indexing,numpy,slice | 0 | 6,090,407 | 0 | 2 | 0 | false | 0 | 0 | I don't think there is a better solution, unless you have some extra information about what's in those arrays. If they're just random numbers, you have to do (n^2)/2 calculations, and your algorithm is reflecting that, running in O((n^2)/2). | 1 | 1 | 1 | 0 | I have a 2D array, A, that is 6x6. I would like to take the first 2 values (index 0,0 and 0,1), take the average of the two, and insert the average into a new array that is half the column size of A (6x3) at index 0,0. Then I would get the next two indexes of A, take the average, and put it into the new array at 0,1.
The only way I know how to do this is using a double for loop, but for performance purposes (I will be using arrays as big as 3000x3000) I know there is a better solution out there! Thanks! | python numpy array slicing | 0 | 0.099668 | 1 | 0 | 0 | 1,976 |
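The double loop can indeed be replaced with a reshape: group each row's columns into adjacent pairs and average over the last axis. This sketch assumes the column count is even:

```python
import numpy as np

def pair_column_means(A):
    # (rows, cols) -> (rows, cols // 2, 2), then average each pair,
    # so adjacent column pairs collapse into single averaged columns.
    rows, cols = A.shape
    return A.reshape(rows, cols // 2, 2).mean(axis=2)
```

This is a single vectorized pass, so it should scale to 3000x3000 arrays far better than a Python-level double loop.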
6,093,313 | 2011-05-23T05:51:00.000 | 0 | 0 | 0 | 0 | 1 | python | 0 | 6,093,396 | 0 | 1 | 0 | false | 0 | 1 | What editor do you use??
Without knowing that there's no way we can possibly help you.
FWIW in vim I use C-x [onp] | 1 | 0 | 0 | 0 | I'm learning wxpython right now and one thing that helps me ALOT is when I'm typing in the text editor I sometimes press the tab key to give me a hint on what I'm looking for...This is great when it works but I notice sometimes it doesn't work and I get lost looking for a syntax I can't remember...
Question is how can I get the suggestion box to pop back up again, Or what am I doing that causes it to stop coming up...
if it matters I backtracked to 2.7 to learn wx, Windows 7
Edit: More specifically... when I type: wx.(Here is normally when I would press tab) | Anyway to bring up suggestion box when typing python code | 0 | 0 | 1 | 0 | 0 | 320 |
6,107,978 | 2011-05-24T08:55:00.000 | 5 | 0 | 0 | 0 | 1 | python,django,django-south | 0 | 6,886,466 | 0 | 4 | 1 | false | 1 | 0 | Remove 'south' from INSTALLED_APPS, and drop the south_migrationhistory table from the DB.
Also, you'll need to delete the Migrations folders from your app folders. | 3 | 6 | 0 | 0 | I installed south and tried a few changes using it, which didn't exactly work out the way I wanted it to. Thankfully, my data is safe but locked into south. I want to remove south and use syncdb normally now, how do I do that without affecting my data? | How do I remove south from a django project | 0 | 0.244919 | 1 | 0 | 0 | 4,611 |
6,107,978 | 2011-05-24T08:55:00.000 | 3 | 0 | 0 | 0 | 1 | python,django,django-south | 0 | 6,108,006 | 0 | 4 | 1 | false | 1 | 0 | What does it mean for your data to be "locked into" South? The data lives in the database, and South simply creates the schema for you and migrates it when necessary. If you remove South, the data will stay exactly the same. | 3 | 6 | 0 | 0 | I installed south and tried a few changes using it, which didn't exactly work out the way I wanted it to. Thankfully, my data is safe but locked into south. I want to remove south and use syncdb normally now, how do I do that without affecting my data? | How do I remove south from a django project | 0 | 0.148885 | 1 | 0 | 0 | 4,611 |
6,107,978 | 2011-05-24T08:55:00.000 | 10 | 0 | 0 | 0 | 1 | python,django,django-south | 0 | 6,108,042 | 0 | 4 | 1 | true | 1 | 0 | Remove 'south' from INSTALLED_APPS, and drop the south_migrationhistory table from the DB. | 3 | 6 | 0 | 0 | I installed south and tried a few changes using it, which didn't exactly work out the way I wanted it to. Thankfully, my data is safe but locked into south. I want to remove south and use syncdb normally now, how do I do that without affecting my data? | How do I remove south from a django project | 0 | 1.2 | 1 | 0 | 0 | 4,611 |
6,109,602 | 2011-05-24T11:21:00.000 | 2 | 0 | 0 | 1 | 0 | python,google-app-engine | 0 | 6,112,067 | 0 | 2 | 0 | false | 1 | 0 | Instantiating an email object certainly does not count against your "recipients emailed" quota. Like other App Engine services, you consume quota when you trigger an RPC, i.e. call send().
If you intended to email 1500 recipients and App Engine says you emailed 45,000, your code has a bug. | 2 | 5 | 0 | 0 | just wondering if anyone of you has come across this. I'm playing around with the Python mail API on Google App Engine and I created an app that accepts a message body and address via POST, creates an entity in the datastore, then a cron job is run every minute, grabs 200 entities and sends out the emails, then deletes the entities.
I ran an experiment with 1500 emails, had 1500 entities created in the datastore and 1500 emails were sent out. I then look at my stats and see that approx. 45,000 recipients were used from the quota, how is that possible?
So my question is at which point does the "Recipients Emailed" quota actually count? At the point where I create a mail object or when I actually send() it? I was hoping for the second, but the quotas seem to show something different. I do pass the mail object around between crons and tasks, etc. Anybody has any info on this?
Thanks.
Update: Turns out I actually was sending out 45k emails with a queue of only 1500. It seems that one cron job runs until the previous one is finished and works out with the same entities. So the question changes to "how do I lock the entities and make sure nobody selects them before sending the emails"?
Thanks again! | Google App Engine Locking | 1 | 0.197375 | 1 | 0 | 0 | 467 |
6,109,602 | 2011-05-24T11:21:00.000 | 3 | 0 | 0 | 1 | 0 | python,google-app-engine | 0 | 6,141,535 | 0 | 2 | 0 | false | 1 | 0 | Use tasks to send the email.
Create a task that takes a key as an argument, retrieves the stored entity for that key, then sends the email.
When your handler receives the body and address, store that as you do now but then enqueue a task to do the send and pass the key of your datastore object to the task so it knows which object to send an email for.
You may find that the body and address are small enough that you can simply pass them as arguments to a task and have the task send the email without having to store anything directly in the datastore.
This also has the advantage that if you want to impose a limit on the number of emails sent within a given amount of time (quota) you can set up a task queue with that rate. | 2 | 5 | 0 | 0 | just wondering if anyone of you has come across this. I'm playing around with the Python mail API on Google App Engine and I created an app that accepts a message body and address via POST, creates an entity in the datastore, then a cron job is run every minute, grabs 200 entities and sends out the emails, then deletes the entities.
I ran an experiment with 1500 emails, had 1500 entities created in the datastore and 1500 emails were sent out. I then look at my stats and see that approx. 45,000 recipients were used from the quota, how is that possible?
So my question is at which point does the "Recipients Emailed" quota actually count? At the point where I create a mail object or when I actually send() it? I was hoping for the second, but the quotas seem to show something different. I do pass the mail object around between crons and tasks, etc. Anybody has any info on this?
Thanks.
Update: Turns out I actually was sending out 45k emails with a queue of only 1500. It seems that one cron job runs until the previous one is finished and works out with the same entities. So the question changes to "how do I lock the entities and make sure nobody selects them before sending the emails"?
Thanks again! | Google App Engine Locking | 1 | 0.291313 | 1 | 0 | 0 | 467 |
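The update asks how to lock the entities so overlapping runs don't pick the same ones. This is a plain-Python sketch of the claim-before-send idea (not the App Engine API; the field names are made up): each run first flips a `sending` flag on the batch it claims, so a second overlapping run that only selects unclaimed entities gets nothing.

```python
# In-memory stand-in for the datastore queue.
queue = [{"to": "a@example.com", "sending": False},
         {"to": "b@example.com", "sending": False}]

def claim_batch(entities, limit=200):
    """Mark up to `limit` unclaimed entities as being sent and return them."""
    batch = []
    for e in entities:
        if not e["sending"] and len(batch) < limit:
            e["sending"] = True   # on GAE this flip would need a transaction
            batch.append(e)
    return batch

first = claim_batch(queue)    # claims both entities
second = claim_batch(queue)   # an overlapping run now gets an empty batch
```

On App Engine the flag update has to be atomic (a transaction per entity, or leasing via the task queue as the second answer suggests), otherwise two runs can still race between the read and the write.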
6,110,384 | 2011-05-24T12:28:00.000 | 0 | 0 | 0 | 0 | 0 | python,postgresql,db2,datamart,greenplum | 0 | 7,550,497 | 0 | 4 | 0 | false | 0 | 0 | Many of Greenplum's utilities are written in python and the current DBMS distribution comes with python 2.6.2 installed, including the pygresql module which you can use to work inside the GPDB.
For data transfer into Greenplum, I've written python scripts that connect to the source (Oracle) DB using cx_Oracle and then dump that output either to flat files or named pipes. gpfdist can read from either sort of source and load the data into the system. | 2 | 0 | 0 | 0 | My company has decided to implement a datamart using [Greenplum] and I have the task of figuring out how to go on about it. A ballpark figure of the amount of data to be transferred from the existing [DB2] DB to the Greenplum DB is about 2 TB.
I would like to know :
1) Is the Greenplum DB the same as vanilla [PostgreSQL]? (I've worked on Postgres AS 8.3)
2) Are there any (free) tools available for this task (extract and import)
3) I have some knowledge of Python. Is it feasible, even easy to do this in a reasonable amount of time?
I have no idea how to do this. Any advice, tips and suggestions will be hugely welcome. | Transferring data from a DB2 DB to a greenplum DB | 0 | 0 | 1 | 1 | 0 | 2,294 |
6,110,384 | 2011-05-24T12:28:00.000 | 0 | 0 | 0 | 0 | 0 | python,postgresql,db2,datamart,greenplum | 0 | 23,668,974 | 0 | 4 | 0 | false | 0 | 0 | Generally, it is really slow if you use SQL insert or merge to import big bulk data.
The recommended way is to use the external tables you define to use file-based, web-based or gpfdist protocol hosted files.
And also greenplum has a utility named gpload, which can be used to define your transferring jobs, like source, output, mode(inert, update or merge). | 2 | 0 | 0 | 0 | My company has decided to implement a datamart using [Greenplum] and I have the task of figuring out how to go on about it. A ballpark figure of the amount of data to be transferred from the existing [DB2] DB to the Greenplum DB is about 2 TB.
I would like to know :
1) Is the Greenplum DB the same as vanilla [PostgreSQL]? (I've worked on Postgres AS 8.3)
2) Are there any (free) tools available for this task (extract and import)
3) I have some knowledge of Python. Is it feasible, even easy to do this in a reasonable amount of time?
I have no idea how to do this. Any advice, tips and suggestions will be hugely welcome. | Transferring data from a DB2 DB to a greenplum DB | 0 | 0 | 1 | 1 | 0 | 2,294 |
6,115,347 | 2011-05-24T18:50:00.000 | 3 | 0 | 0 | 1 | 1 | python,terminal | 0 | 6,115,447 | 0 | 2 | 0 | false | 0 | 0 | Make sure that your pasted text doesn't contain any embedded control characters (such as a newline), which could end the input. | 1 | 0 | 0 | 0 | I'm reading text in terminal with
description = raw_input()
It works if I write the text and press enter. The problem is when I paste the text from somewhere with Ctrl+Shift+V or with right click + paste. My program immediately ends, description contains only part of the text (I can see it in database).
Do you know how to do this so paste works? I'm using xfce4-terminal in Ubuntu.
thank you | Problem with reading pasted text in terminal | 0 | 0.291313 | 1 | 0 | 0 | 121 |
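As the answer notes, the paste contains an embedded newline that ends the `raw_input()` read. One workaround is to read everything up to EOF instead, so embedded newlines survive. A sketch (the helper name is made up; `StringIO` stands in for stdin in the demonstration):

```python
import sys
from io import StringIO

def read_block(stream=None):
    """Read everything up to EOF (Ctrl-D in a terminal), so a paste that
    contains embedded newlines is captured whole instead of stopping at
    the first newline the way raw_input() does."""
    stream = stream or sys.stdin
    return stream.read().rstrip("\n")

# description = read_block()   # interactive use: paste, then press Ctrl-D

# Demonstration with StringIO standing in for stdin:
pasted = read_block(StringIO("line one\nline two\n"))
```

The trade-off is that the user must explicitly end input with Ctrl-D rather than Enter.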
6,119,038 | 2011-05-25T02:55:00.000 | 2 | 0 | 0 | 1 | 0 | python,bash | 0 | 6,119,079 | 0 | 4 | 0 | false | 0 | 0 | Have you tried echo "Something for input" | python myPythonScript.py ? | 1 | 2 | 0 | 0 | I'm writing a bash script that fires up python and then enters some simple commands before exiting. I've got it firing up python ok, but how do I make the script simulate keyboard input in the python shell, as though a person were doing it? | How to write a bash script that enters text into programs | 0 | 0.099668 | 1 | 0 | 0 | 500 |
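The same piping idea can also be driven from Python itself with `subprocess`: start a second interpreter that reads its program from stdin (`python -`) and "type" the commands into it. A minimal sketch:

```python
import subprocess
import sys

# Equivalent of:  echo "print(2 + 2)" | python -
proc = subprocess.Popen([sys.executable, "-"],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
out, _ = proc.communicate(b"print(2 + 2)\n")   # feed the "keyboard input"
```

Note this feeds a whole script at once; truly interactive back-and-forth (read a prompt, then respond) needs a tool like `expect` or the `pexpect` library instead.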
6,127,314 | 2011-05-25T15:53:00.000 | 0 | 0 | 0 | 0 | 0 | python,opencv,iplimage | 0 | 6,127,643 | 0 | 3 | 0 | false | 0 | 0 | I do not know the OpenCV Python bindings, but in C or C++ you have to get the buffer pointer stored in the IplImage. This buffer is coded according to the image format (also stored in the IplImage). For RGB you have a byte for R, a byte for G, a byte for B, and so on.
Look at the API of the Python bindings; you will find how to access the buffer, and from there you can get to the pixel info.
my2c | 1 | 4 | 1 | 0 | I am doing some simple programs with opencv in python. I want to write a few algorithms myself, so need to get at the 'raw' image data inside an image. I can't just do image[i,j] for example, how can I get at the numbers?
Thanks | Opencv... getting at the data in an IPLImage or CvMat | 0 | 0 | 1 | 0 | 0 | 7,276 |
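With the newer cv2 bindings the answer's buffer-pointer dance goes away: a loaded image simply *is* a NumPy array, so `image[i, j]` works directly. A sketch with a plain NumPy array standing in for a loaded image (assuming the common 8-bit, 3-channel BGR layout that `cv2.imread` returns):

```python
import numpy as np

# Stand-in for cv2.imread("photo.png"): an 8-bit, 3-channel (BGR) buffer.
img = np.zeros((4, 4, 3), dtype=np.uint8)

img[1, 2] = (255, 0, 0)      # write one pixel's B, G, R bytes
b, g, r = img[1, 2]          # read the raw bytes back out
```

For the old `cv.LoadImage`-style IplImage objects, `numpy.asarray(cv.GetMat(ipl_img))` is one way to get at the same buffer as an array.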
6,127,524 | 2011-05-25T16:11:00.000 | 2 | 0 | 1 | 0 | 0 | python,random,indexing,choice | 0 | 6,127,555 | 0 | 7 | 0 | false | 0 | 0 | If the values are unique in the sequence, you can always say: list.index(value) | 1 | 28 | 0 | 0 | I know very well how to select a random item from a list with random.choice(seq) but how do I know the index of that element? | python: how to know the index when you randomly select an element from a sequence with random.choice(seq) | 0 | 0.057081 | 1 | 0 | 0 | 26,292 |
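`list.index(value)` breaks down when the sequence contains duplicates, so a more robust pattern is to pick the *index* first with `random.randrange` and look the element up from it:

```python
import random

seq = ["spam", "eggs", "ham", "toast"]
i = random.randrange(len(seq))   # pick a random index first...
value = seq[i]                   # ...then the element, so you have both
```

This is uniform over positions even with repeated values, and costs one lookup instead of a linear `index()` scan.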
6,130,438 | 2011-05-25T20:25:00.000 | 1 | 0 | 1 | 0 | 0 | python,string,list,memory,join | 0 | 6,130,864 | 0 | 2 | 0 | false | 0 | 0 | You should also consider what you are going to do with the resulting string. If you just want to write the contents back to a file, there is no need to join the parts first, you can use file.writelines(strings) directly. | 1 | 2 | 0 | 0 | In python I have read in a file into a list using file.readlines() , later on after some logic, I would like to put it back together in a string using fileString = ''.join(file), for some reason, even without a print function, it prints the fileString out to the console up to a certain point, then it just stops. It does not run the rest of the program which is not useful for me.
Why does join do this, how do I perhaps pre-allocate how much memory I would like my list/string to use so that it does not stop. Or some other solution too.
Thank you | python join "large" file | 0 | 0.099668 | 1 | 0 | 0 | 1,190 |
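To illustrate the `writelines` suggestion: it streams the list out piece by piece, so no single giant joined string is ever built. A sketch, with `StringIO` standing in for the output file:

```python
from io import StringIO

lines = ["first\n", "second\n", "third\n"]   # as returned by file.readlines()

out = StringIO()        # in real code: open("result.txt", "w")
out.writelines(lines)   # no ''.join(), no huge intermediate string
```

With a real file you would use `with open("result.txt", "w") as out: out.writelines(lines)` so the file is closed even on error.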
6,159,173 | 2011-05-28T01:47:00.000 | 1 | 1 | 0 | 0 | 0 | python,ping,geo,geoip | 0 | 6,159,184 | 0 | 2 | 0 | false | 0 | 0 | Call all the service API instances and use which ever responds quickest. | 1 | 2 | 0 | 0 | I've been thinking about how to implement mirror picking in Python. When I call on service API I get response with IP address. Now I want to take that address and check if it's close to me or not. If not, retry. I thought about pinging, as I have only ~1ms ping to the IP addresses hosted in same data center, but much higher across the world. I looked up some examples of how to implement pinging in Python, but it seems fairly complicated and feels a bit hackish (like checking if target IP is less than 10ms). There may be better ways to tackle this issue, that I may not be aware of.
What are your ideas? I can't download any test file each time to test speed. GeoIP or ping? Or something else? | How to choose closest/fastest mirror in Python? | 0 | 0.099668 | 1 | 0 | 1 | 881 |
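One cheap ping-like probe that needs no test-file download is timing a plain TCP connect to each mirror and keeping the fastest. A sketch (function names are made up; the `connector` parameter exists so the logic can be exercised without real network access):

```python
import socket
import time

def connect_time(host, port=80, timeout=2.0, connector=socket.create_connection):
    """Seconds to open a TCP connection to host:port, or None on failure."""
    start = time.time()
    try:
        connector((host, port), timeout).close()
    except (socket.error, OSError):
        return None
    return time.time() - start

def fastest(hosts, **kwargs):
    """Return the host with the smallest connect time, or None."""
    timed = [(connect_time(h, **kwargs), h) for h in hosts]
    timed = [(t, h) for t, h in timed if t is not None]
    return min(timed)[1] if timed else None
```

Connect time tracks latency, not bandwidth, so it is a rough proxy; probing hosts in parallel (threads) would hide the per-host timeout cost.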
6,163,087 | 2011-05-28T17:04:00.000 | 2 | 0 | 0 | 1 | 0 | python,shell,command-line | 0 | 6,163,126 | 0 | 6 | 0 | false | 0 | 0 | Add a shebang: as the top line of the file: #!/usr/bin/python or #!/usr/bin/python3 (you can use the python -B to prevent generation of .pyc files, which is why I don't use /usr/bin/env)
Make it executable: You will need to do chmod +x app.py
(optional) Add directory to path, so can call it anywhere: Add a directory with your executable to your $PATH environment variable. How you do so depends on your shell, but is either export PATH=$PATH:/home/you/some/path/to/myscripts (e.g. Linux distros which use bash) or setenv PATH $PATH:/home/you/some/path/to/myscripts (e.g. tcsh like in Mac OS X). You will want to put this, for example, in your .bashrc or whatever startup script you have, or else you will have to repeat this step every time you log in.
app.py will need to be in the myscripts (or whatever you name it) folder. You don't even need to call it app.py, but you can just rename it app.
If you wish to skip step #3, you can still do ./app to run it if you are in the same directory. | 2 | 6 | 0 | 0 | When I want to run my python applications from commandline (under ubuntu) I have to be in the directory where is the source code app.py and run the application with command
python app.py
How can I make it (how is it conventionally done) to run the application from arbitrary directory with the command: app ? Similarly as you type ls, mkdir and other commands?
thank you | Turn an application or script into a shell command | 0 | 0.066568 | 1 | 0 | 0 | 3,763 |
6,163,087 | 2011-05-28T17:04:00.000 | 0 | 0 | 0 | 1 | 0 | python,shell,command-line | 0 | 6,163,117 | 0 | 6 | 0 | false | 0 | 0 | I'm pretty sure you have to make the script executable via chmod +x and put it in a directory listed in your PATH variable. | 2 | 6 | 0 | 0 | When I want to run my python applications from commandline (under ubuntu) I have to be in the directory where is the source code app.py and run the application with command
python app.py
How can I make it (how is it conventionally done) to run the application from arbitrary directory with the command: app ? Similarly as you type ls, mkdir and other commands?
thank you | Turn an application or script into a shell command | 0 | 0 | 1 | 0 | 0 | 3,763 |
6,171,112 | 2011-05-29T23:29:00.000 | 1 | 0 | 0 | 0 | 1 | python,sockets | 0 | 6,171,227 | 0 | 2 | 0 | false | 0 | 0 | With TCP sockets it is more typical to leave the connections open, given the teardown & rebuild cost.
Eventually, when scaling, you will want to look into NewIO/RawIO.
If you do not, imagine the game client taking a step and not getting confirmation when sending it to the server & other players.
Lets say I wanted to make a multiplayer game server (which I don't at the moment but I'm working towards it). I would need to keep the server up to date (and other clients) on my characters whereabouts, correct?
How would I do this with sockets? Send or request information only when it is needed (e.g the character moves, or another players character moves and the server sends the information to other clients) or would I keep a constant socket open to send data real-time of EVERYBODY's movement regardless of if they have actually done something since the last piece of data was sent or not.
I won't struggle coding it, I just need help with the concept of how I would actually do it. | How would I keep a constant piece of data updated through a socket in Python? | 0 | 0.099668 | 1 | 0 | 1 | 83 |
6,176,445 | 2011-05-30T12:44:00.000 | 0 | 0 | 0 | 0 | 0 | python,selenium-rc,socketexception | 1 | 6,176,514 | 0 | 4 | 0 | false | 0 | 0 | There are several possibilities. If none of your tests can listen on some port (you don't say what port) then perhaps your Windows machine is running something on a port that you previously had open; this new service may have appeared during the reinstall. If, on the other hand, it's only a problem for some tests, or it's a little sporadic, then it may be either a programming issue (forgetting to close a socket in an early test which interferes with a later one) or a timing issue (the earlier test's socket isn't quite through closing before the new one tries to open up). Obviously there are different ways to address each of these problems, but I don't think we can help more than this without more details. | 2 | 2 | 0 | 0 | I have a troublesome problem socket.error error: [Errno 10048]: Address already in use. Only one usage of each socket address (protocol/IP address/port) is normally permitted during automated tests using Selenium with Python. The problem is so interesting that it runs on one machine (Linux) works correctly, but on another machine (WindowsXP) generates this error.
I would add that the problem arose after the reinstallation of the system and set up all over again - with the previous configuration everything worked properly.
Is there maybe something I forgot? Has anyone come up with such a problem before?
Does anyone have an idea of how to deal with this problem?
The current configuration / libraries:
python 2.7, numpy, selenium.py | problem: Socket error [Address already in use] in python/selenium | 0 | 0 | 1 | 0 | 1 | 15,849 |
6,176,445 | 2011-05-30T12:44:00.000 | 0 | 0 | 0 | 0 | 0 | python,selenium-rc,socketexception | 1 | 6,176,956 | 0 | 4 | 0 | false | 0 | 0 | Maybe there is software on your Windows machine that already uses port 4444; can you try setting Selenium to another port and trying again? | 2 | 2 | 0 | 0 | I have a troublesome problem socket.error error: [Errno 10048]: Address already in use. Only one usage of each socket address (protocol/IP address/port) is normally permitted during automated tests using Selenium with Python. The problem is so interesting that it runs on one machine (Linux) works correctly, but on another machine (WindowsXP) generates this error.
I would add that the problem arose after the reinstallation of the system and set up all over again - with the previous configuration everything worked properly.
Is there maybe something I forgot? Has anyone come up with such a problem before?
Does anyone have an idea of how to deal with this problem?
The current configuration / libraries:
python 2.7, numpy, selenium.py | problem: Socket error [Address already in use] in python/selenium | 0 | 0 | 1 | 0 | 1 | 15,849 |
6,188,464 | 2011-05-31T13:39:00.000 | 1 | 0 | 1 | 0 | 1 | python,pydev | 0 | 6,188,499 | 0 | 1 | 0 | true | 0 | 0 | The argument self is only necessary for methods defined inside a class. It doesn't make sense for normal functions.
That means you either omitted vital information in your question or there is a bug in PyDev.
In a module, I have defined a function - get_user_inputs(self). I used the argument self as PyDev would not let me define the function othewise (and apparently it is the right thing to do).
Now my question is how do I call this function and what argument should I pass?
function(self) does not work and self.function does not work as well.
This issue I am seeing only in PyDev. In jedit and notepad++ I am able to execute same code with no issues. | pydev - call a function with only self as argument | 0 | 1.2 | 1 | 0 | 0 | 214 |
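To illustrate the answer's point: `self` is only required when the function is defined inside a class, and it is filled in automatically when you call the method on an instance. The `InputDialog` class below is hypothetical, just to show the two cases side by side:

```python
def get_user_inputs():                  # plain function: no self needed
    return "from the function"

class InputDialog(object):
    def get_user_inputs(self):          # method: self is the instance,
        return "from the method"        # passed automatically by Python

a = get_user_inputs()                   # called directly, no argument
b = InputDialog().get_user_inputs()     # called on an instance, no explicit self
```

So if PyDev insisted on `self`, the function was most likely indented inside a class body without you noticing.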
6,189,398 | 2011-05-31T14:49:00.000 | 2 | 1 | 1 | 0 | 0 | python,multithreading,resources,simulation,multiprocess | 0 | 6,189,789 | 0 | 5 | 0 | false | 1 | 0 | I wrote an ant simulation (for finding a good TSP solution) and I wouldn't recommend a thread solution. I use a loop to calculate the next step for each ant, so my ants do not really behave concurrently (but they synchronize after each step).
I don't see any reason to model those ants with threads. It's no advantage in terms of run-time behavior, nor is it an advantage in terms of elegance (of the code)!
It might be, admittedly, slightly more realistic to use threads since real ants are concurrent, but for simulation purposes this is IMHO negligible. | 2 | 6 | 0 | 0 | The simple study is:
Ant life simulation
I'm creating an OO structure that has a Class for the Anthill, a Class for the Ant and a Class for the whole simulator.
Now I'm brainstorming on "how to" make Ants 'live'...
I know that there are projects like this just started but I'm brainstorming, I'm not looking for a just-ready-to-eat-dish.
Honestly, I have to run some tests to understand "what is better"; AFAIK, threads in Python use less memory than processes.
What "Ants" have to do when you start the simulation is just: moving around in a random direction; if they find food -> eat it / bring it to the anthill; if they find another ant from another anthill that is transporting food -> attack -> collect food -> do what they have to do.... and so on... That means that I have to "share" information across ants and across the whole environment.
so I rewrite:
It's better to create a Process/Thread for each Ant or something else?
EDIT:
Because of my question "what is better", I've upvoted all the smart answers that I received, and I also put a comment on them.
After my tests, I'll accept the best answer. | Ant simulation: it's better to create a Process/Thread for each Ant or something else? | 0 | 0.07983 | 1 | 0 | 0 | 1,030 |
6,189,398 | 2011-05-31T14:49:00.000 | 1 | 1 | 1 | 0 | 0 | python,multithreading,resources,simulation,multiprocess | 0 | 6,189,548 | 0 | 5 | 0 | false | 1 | 0 | I agree with @delan - it seems like overkill to allocate a whole thread per Ant, especially if you are looking to scale this to a whole anthill with thousands of the critters running around.
Instead you might consider using a thread to update many ants in a single "cycle". Depending on how you write it - you need to carefully consider what data needs to be shared - you might even be able to use a pool of these threads to scale up your simulation.
Also keep in mind that in CPython the GIL prevents multiple native threads from executing code at the same time. | 2 | 6 | 0 | 0 | The simple study is:
Ant life simulation
I'm creating an OO structure that has a Class for the Anthill, a Class for the Ant and a Class for the whole simulator.
Now I'm brainstorming on "how to" make Ants 'live'...
I know that there are projects like this just started but I'm brainstorming, I'm not looking for a just-ready-to-eat-dish.
Honestly, I have to run some tests to understand "what is better"; AFAIK, threads in Python use less memory than processes.
What "Ants" have to do when you start the simulation is just: moving around in a random direction; if they find food -> eat it / bring it to the anthill; if they find another ant from another anthill that is transporting food -> attack -> collect food -> do what they have to do.... and so on... That means that I have to "share" information across ants and across the whole environment.
so I rewrite:
It's better to create a Process/Thread for each Ant or something else?
EDIT:
Because of my question "what is better", I've upvoted all the smart answers that I received, and I also put a comment on them.
After my tests, I'll accept the best answer. | Ant simulation: it's better to create a Process/Thread for each Ant or something else? | 0 | 0.039979 | 1 | 0 | 0 | 1,030 |
6,191,624 | 2011-05-31T18:06:00.000 | 1 | 1 | 0 | 1 | 0 | python,cron | 0 | 6,192,123 | 0 | 4 | 0 | false | 0 | 0 | If the cron job runs as "you", and if you set the DISPLAY var (export DISPLAY=:0) you should have no issues. | 2 | 3 | 0 | 0 | I coded a python application which was running OK as a cron job. Later I added some libraries (e.g. pynotify and other *) because I wanted to be notified with the message describing what is happening, but it seems that cron can't run such an application.
Do you know of some alternative way to run this application every five minutes? I'm using Xubuntu.
import gtk, pygtk, os, os.path, pynotify
I can run the application without cron without problems.
Cron seems to run the application but it won't show the notification message. In /var/log/cron.log there are no errors. The application executed every minute without problems.
my crontab:
*/1 * * * * /home/xralf/pythonsrc/app
thank you | Notification as a cron job | 0 | 0.049958 | 1 | 0 | 0 | 728 |
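Putting the answer into the crontab itself: Vixie cron (used on Ubuntu) allows variable assignments at the top of a user crontab, so DISPLAY can be set there, and */5 gives the five-minute interval mentioned in the question (assuming the desktop session runs on display :0):

```
DISPLAY=:0
*/5 * * * * /home/xralf/pythonsrc/app
```

Without DISPLAY, pynotify cannot reach the session's notification daemon, which matches the symptom of the job running but no bubble appearing.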
6,191,624 | 2011-05-31T18:06:00.000 | 0 | 1 | 0 | 1 | 0 | python,cron | 0 | 6,191,715 | 0 | 4 | 0 | false | 0 | 0 | I don't see any problem running pynotify from a cron job. What is the error you are getting?
Can you run your Python code separately, to check whether it works on its own and only fails under cron?
Celery is a distributed job queue & task manager written in Python, but it may be too much for your needs.
Supervisord can also do some sort of cron task if you know that your program will exit within 5 minutes, so you can configure supervisord to restart the task soon after. Neither of them is as easy as a cron job, though. | 2 | 3 | 0 | 0 | I coded a python application which was running OK as a cron job. Later I added some libraries (e.g. pynotify and other *) because I wanted to be notified with the message describing what is happening, but it seems that cron can't run such an application.
Do you know of some alternative way to run this application every five minutes? I'm using Xubuntu.
import gtk, pygtk, os, os.path, pynotify
I can run the application without cron without problems.
Cron seems to run the application but it won't show the notification message. In /var/log/cron.log there are no errors. The application executed every minute without problems.
my crontab:
*/1 * * * * /home/xralf/pythonsrc/app
thank you | Notification as a cron job | 0 | 0 | 1 | 0 | 0 | 728 |
6,208,385 | 2011-06-01T22:14:00.000 | 1 | 1 | 0 | 1 | 0 | python,testing,automation,functional-testing,cots | 0 | 6,208,551 | 0 | 1 | 0 | true | 0 | 0 | Interesting problem. One thing to avoid is using the antivirus APIs to check to see if your application triggers them. You want a real live deployment of your application, on the expected operating system, with a real live AV install monitoring it. That way you'll trigger the heuristics monitoring as well as the simple "does this code match that checksum" that the API works with.
You haven't told us what your application is written in, but if your test suite for your application actually exercises portions of the application, rather than testing single code paths, that may be a good start. Ideally, your integration test suite is the same test suite you use to check for problems on your deploy targets. Your integration testing should verify the input AND the output for each test in a live environment, which SHOULD catch crashes and the like. Also, don't forget to check for things that take much longer than they should, that's an unfortunately common failure mode. Most importantly, your test suite needs to be easy enough to write, change, and improve that it actually stays in sync with the product. Tests that don't test everything are useless, and tests that aren't run are even worse. If we had more information about how your program works, we could give better advice about how to automate that.
You'll probably want a suite of VM images across your intended deploy targets, in various states of patch (and unpatch). For some applications, you'll need a separate VM for each variant of IE, since that changes other aspects of the system. Be very careful about which combination of things you have in each VM. Don't test more than one AV at a time. Update the AVs in your snapshots before running your tests. If you have a large enough combination software in your images, you might need to automate image creation - get a base system build, update to the latest patch level, then script the installation of AV and other application combinations.
Yes, maintaining this farm of VMs will be a pain, but if you script the deploy of your application, and have good snapshots and a plan for patching and updating the snapshots, the actual test suite itself shouldn't take all that long to run given appropriate hardware. You'll need to investigate the VM solutions, but I'd probably start with VMWare. | 1 | 5 | 0 | 0 | My (rather small) company develops a popular Windows application, but one thing we've always struggled with is testing - it frequently is only tested by the developers on a system similar to the one they developed it on, and when an update is pushed out to customers, there is a segment of our base that experiences issues due to some weird functionality with a Windows patch, or in the case of certain paranoid antivirus applications (I'm looking at you, Comodo and Kaspersky!), they will false-positive on our app.
We do manual testing on what 70% of our users use, but it's slow and painful, and sometimes isn't as complete as it should be. Management keeps insisting that we need to do better, but they keep punting on the issue when it comes time to release (testing will take HOW LONG? Just push it out and we'll issue a patch to customers who experience issues!).
I'd like to design a better system of automated testing using VMs, but could use some ideas on how to implement it, or if there's a COTS product out there, any suggestions would be great. I'm hacking a Python script together that "runs" every feature of our product, but I'm not sure how to go about testing if we get a Windows crash (besides just checking to see if it's still in the process list), or worse yet, if Comodo has flagged it for some stupid reason.
To best simulate the test environment, I'm trying to keep the VM as "pure" as possible and not load a lot of crap on it outside of the OS and the antivirus, and some common apps (Acrobat Reader, Firefox etc).
Any ideas would be most appreciated! | How can I automate antivirus/WSUS patch testing of my Windows driver and binary? | 0 | 1.2 | 1 | 0 | 0 | 1,381 |
6,219,063 | 2011-06-02T18:59:00.000 | 2 | 0 | 0 | 0 | 0 | python,mobile | 0 | 6,219,095 | 0 | 3 | 0 | false | 1 | 0 | You can use the time module and the sleep function. | 1 | 2 | 0 | 0 | I am writing a program that upload files from my nokia cell phone files to the web server which I am already done writing that. But, my program only does his job only one time and what I want is that I want to call that function for let's say every 5 mins again and again which I do not know how to do it. | Repeating function over certain amount of time | 0 | 0.132549 | 1 | 0 | 0 | 244 |
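To flesh out the `time.sleep` answer, a minimal repeat loop (the function name and the `iterations` cap are made up; in real use you would pass `seconds=300` and leave `iterations=None` to run forever):

```python
import time

def repeat_every(seconds, func, iterations=None):
    """Call func, then sleep, forever -- or for a fixed number of iterations."""
    done = 0
    while iterations is None or done < iterations:
        func()
        done += 1
        if iterations is None or done < iterations:
            time.sleep(seconds)

calls = []
repeat_every(0, lambda: calls.append("uploaded"), iterations=3)
```

Note the interval drifts by however long `func()` takes; if exact 5-minute boundaries matter, subtract the elapsed time from the sleep, or use a scheduler like cron instead.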
6,220,274 | 2011-06-02T20:54:00.000 | 3 | 0 | 1 | 1 | 0 | python,module | 0 | 6,220,717 | 1 | 3 | 0 | false | 0 | 0 | If you're installing through setuptools (ie python setup.py), it will install to the lib directory for the python executable you use (unless it's a broken package). | 1 | 8 | 0 | 0 | I have a couple different versions of Python installed on my Mac. The default version is 2.5, so when I install a module it gets installed to 2.5. I need to be able to install some modules to a different version of Python because I am working on projects that use different versions. Any one know how to accomplish this? Thanks for your help. | Install python module to non default version of python on Mac | 0 | 0.197375 | 1 | 0 | 0 | 4,874 |
6,227,589 | 2011-06-03T13:18:00.000 | 1 | 0 | 0 | 0 | 1 | python,pca | 0 | 6,229,101 | 0 | 2 | 0 | false | 0 | 0 | What Sven mentioned in his comments is correct. There is no "default" ordering of the eigenvalues. Each eigenvalue is associated with an eigenvector, and it is important is that the eigenvalue-eigenvector pair is matched correctly. You'll find that all languages and packages will do so.
So if R gives you eigenvalues [e1,e2,e3] and eigenvectors [v1,v2,v3], python probably will give you (say) [e3,e2,e1] and [v3,v2,v1].
Recall that an eigenvalue tells you how much of the variance in your data is explained by the eigenvector associated with it. So, a natural sorting of the eigenvalues (that is intuitive to us) that is useful in PCA, is by size (either ascending or descending). That way, you can easily look at the eigenvalues and identify which ones to keep (large, as they explain most of the data) and which ones to throw (small, which could be high frequency features or just noise) | 1 | 1 | 1 | 0 | I am now trying some stuff with PCA but it's very important for me to know which are the features responsible for each eigenvalue.
numpy.linalg.eig gives us the diagonal matrix already sorted but I wanted this matrix with them at the original positions. Does anybody know how I can make it? | Non sorted eigenvalues for finding features in Python | 0 | 0.099668 | 1 | 0 | 0 | 1,121 |
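The key point from the answer — keep each eigenvalue paired with its eigenvector while sorting — can be sketched in plain Python (with numpy you would compute an order with np.argsort(eigvals) and index both arrays with it; note numpy stores eigenvectors as columns, so you'd reorder with eigvecs[:, order]):

```python
def sort_eigenpairs(eigvals, eigvecs, descending=True):
    """Sort eigenvalues by size and reorder the matching eigenvectors
    together, so pair i stays a valid (value, vector) pair.

    Here eigvecs[i] is the eigenvector paired with eigvals[i].
    """
    pairs = sorted(zip(eigvals, eigvecs), key=lambda p: p[0],
                   reverse=descending)
    vals = [p[0] for p in pairs]
    vecs = [p[1] for p in pairs]
    return vals, vecs
```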
6,236,794 | 2011-06-04T12:41:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,web-crawler | 0 | 6,236,831 | 0 | 3 | 1 | false | 1 | 0 | An HTML parser will parse the page so you can collect the links present in it. You can add these links to a queue and visit those pages. Combine these steps in a loop and you have a basic crawler.
Crawling libraries are ready-to-use solutions which do the crawling for you. They provide more features like detection of recursive links, cycles etc. A lot of the features you would want to code have already been done within these libraries.
However, the first option is preferred if you have some special requirements which the libraries do not satisfy. | 1 | 4 | 0 | 0 | I need to grab some data from websites in my django website.
Now I am confused whether I should use Python parsing libraries or web crawling libraries. Do search engine libraries also fall into the same category?
I want to know how much difference there is between the two, and if I want to use those functions inside my website, which should I use? | How much is the difference between html parsing and web crawling in python | 0 | 0.066568 | 1 | 0 | 0 | 2,480
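The parse-collect-queue loop described in the answer, sketched with the stdlib html.parser and a stub fetch function (a real crawler would fetch pages over HTTP with urllib or similar; the in-memory pages here are just for illustration):

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect every href from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, limit=10):
    """Breadth-first crawl: fetch a page, queue its unseen links, repeat.

    `fetch(url)` must return the page's HTML.  The `seen` set is the
    cycle detection the answer mentions the libraries give you for free.
    """
    seen, queue, visited = {start_url}, deque([start_url]), []
    while queue and len(visited) < limit:
        url = queue.popleft()
        visited.append(url)
        parser = LinkExtractor()
        parser.feed(fetch(url))
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited
```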
6,241,245 | 2011-06-05T05:49:00.000 | 1 | 0 | 1 | 0 | 0 | python,tree | 0 | 6,242,721 | 0 | 3 | 0 | false | 0 | 0 | That they are trees is not even relevant to the solution. You're looking for how long it takes for two single-linked (the parent link) lists to converge into the same list.
Simply follow the links, but keep a length count for each visited node. Once you reach an already visited node, sum the previously found count and the new one. This won't work if either list ends up circular, but if they do it's not a proper tree anyway. A way to fix that case is to track separate visited dictionaries for either branch; if you reach a node visited in its own branch, you can stop traversing that branch as there's no point recounting the loop.
This all naturally assumes you can find the parent of any node. The simplest tree structures don't actually have that link. | 1 | 1 | 0 | 0 | I have two trees in python. I need to compare them in a customized way according to the following specifications. Suppose I have a tree for entity E1 and a tree for entity E2. I need to traverse both the trees starting from E1 and E2 and moving upwards till I get to a common root. (Please note that I have to start the traversal from node E1 on the first tree and node E2 on the second tree.) Then I need to compare the count of the lengths of both their paths.
Can someone provide me an insight as to how to do this in Python? Can the classical tree traversal algorithms be useful here? | Tree traversal in a customised way in Python? | 0 | 0.066568 | 1 | 0 | 0 | 332 |
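The traversal-and-count idea from the answer can be sketched as follows; modelling the parent links as a plain dict is an assumption about the asker's tree structure (any way of getting a node's parent would do):

```python
def distance_to_common_ancestor(parent, e1, e2):
    """Walk up from e1 and e2 via parent links and return (steps1, steps2)
    to the first common ancestor, or None if the chains never meet.

    `parent` maps each node to its parent; the root maps to None.
    """
    # record every ancestor of e1 with its distance from e1
    depth1 = {}
    node, d = e1, 0
    while node is not None:
        depth1[node] = d
        node = parent.get(node)
        d += 1
    # walk up from e2 until we hit one of e1's ancestors
    node, d = e2, 0
    while node is not None:
        if node in depth1:
            return depth1[node], d
        node = parent.get(node)
        d += 1
    return None
```

With the two path lengths in hand, comparing them is a plain comparison of the returned counts.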
6,267,308 | 2011-06-07T15:06:00.000 | 0 | 1 | 1 | 0 | 0 | c++,python,debugging,gdb | 0 | 6,658,184 | 0 | 3 | 0 | false | 0 | 0 | You can generate (for example using python) a .gdbrc file with a line containing
'break C::foo'
for every function of your class C and then start gdb. | 1 | 4 | 0 | 0 | I would like to be able to set breakpoints to every method of a C++ class in gdb.
I think the easiest way to do this is probably python, since now python has complete access to gdb. I know very little python, and with gdb on top of it, it's even harder. I am wondering if anyone knows how to write a class python code that sets breakpoints to every method of a named class in gdb. | gdb python programming: how to write code that will set breakpoints to every method of a C++ class? | 0 | 0 | 1 | 0 | 0 | 605 |
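The .gdbrc-generation idea from the answer can be sketched in plain Python. The class and method names here are hypothetical; in practice you would collect them from the debug info (e.g. via gdb's Python API or by parsing `nm` output). Newer gdb versions also offer the `rbreak` command for regex-based breakpoints:

```python
def make_breakpoint_script(class_name, methods):
    """Emit one 'break Class::method' line per method, suitable for
    writing to a .gdbrc or loading with gdb's `source` command."""
    return "\n".join("break %s::%s" % (class_name, m) for m in methods)

# open(".gdbrc", "w").write(make_breakpoint_script("C", ["foo", "bar"]))
```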
6,268,580 | 2011-06-07T16:36:00.000 | 2 | 0 | 0 | 0 | 1 | python,beautifulsoup | 0 | 6,268,759 | 0 | 2 | 0 | false | 1 | 0 | You haven't said what the site is, so it's impossible to answer for sure. But here are a couple of suggestions. If the URL does not change when you click the flag, then either:
a) The english is already in the html document, and the relevant content is being switched with javascript
b) The english content is being fetched via an ajax request and javascript is being used to edit the DOM
c) The page fully reloads with english content.
Presumably in all these cases the language preference must be stored either server-side in the session or client-side with cookies.
First tests are try turning off cookies and javascript to see what happens. Then with cookies, js back on use Firebug or Firefox to view network requests being made. | 1 | 0 | 0 | 0 | I have a website that I'm trying to scrape using Python & BeautifulSoup. The site itself can be viewed in 2 languages(Thai or English); all you have to do is to click on either the Thai or UK flag on the upper right corner of the screen and the data is displayed in the selected language. When in comes to the script though, I can only scrape the data in Thai (which is the default language) and I couldn't figure out how to get the data in English because the URL doesn't change when you click on either the Thai or UK flag. Looking at the source for the page, there are no href associated with either flag. I turned on Firebug tracing and tried to search for something to give me a clue but haven't found anything (then again you'd have to know exactly what to look for in order to know what's going on and that's my problem).
Thanks,
Glenn | Could not scrape data in English, help! | 1 | 0.197375 | 1 | 0 | 0 | 98 |
6,269,493 | 2011-06-07T17:49:00.000 | 0 | 0 | 1 | 0 | 1 | python,eclipse,refactoring,pydev | 0 | 17,780,872 | 0 | 1 | 0 | false | 0 | 0 | delete the project in eclipse and then create a new project in eclipse with the projects new name. This will automatically add the projects contents into the new project | 1 | 6 | 0 | 0 | I would expect the pydev package to rename all references in a project when rafactoring a module name. However, this is not the case. Anyone knows how to fix this? | Pydev for Eclipse does not change all reference when renaming package | 0 | 0 | 1 | 0 | 0 | 616 |
6,275,277 | 2011-06-08T07:02:00.000 | 1 | 0 | 0 | 0 | 0 | android,python,monkeyrunner | 0 | 6,278,753 | 0 | 3 | 0 | false | 1 | 1 | I want to simulate touch event, keyboard event on mobile device. Can I do it with MonkeyRunner?
From your development machine, yes. Per your question title, you cannot use MonkeyRunner on a device.
Also, I have the impression that it only works with SDK?
Yes.
I want the application to be installed on mobile, which will perform some random touch, keyboard events. Is it possible with MonkeyRunner?
Fortunately, no, as this would be a massive security hole. | 1 | 0 | 0 | 0 | I want to simulate touch event, keyboard event on mobile device. Can I do it with MonkeyRunner? Also, I have the impression that it only works with SDK?
I want the application to be installed on mobile, which will perform some random touch, keyboard events. Is it possible with MonkeyRunner?
If yes, please provide me help on how to start writing python for mobile, and how to make project for android in Python. I have used Eclipse for java for android, but not for python. | Can I use MonkeyRunner tool to work in mobile as application | 1 | 0.066568 | 1 | 0 | 0 | 1,492 |
6,289,668 | 2011-06-09T07:50:00.000 | 5 | 1 | 0 | 1 | 0 | c++,python,openmp | 0 | 6,289,692 | 0 | 2 | 0 | false | 0 | 0 | There can be a number of reasons for this, for example:
Increased failure rate in the branch prediction
Exhausted CPU cache
Filled up the memory bus
Too much context switching (this has an effect on many things, including all the previous points) | 1 | 1 | 0 | 0 | I have an urgent problem because my time is running out: I let my calculations process on a server with 8 cores, therefore I'm using OpenMP in my C++ code and it works fine. Of course I'm not the only one who is using the server, so my capacity is not always 800% CPU.
But it has now happened several times that someone who started his Python program on the machine completely paralyzed both his program and mine: although I was still using around 500% CPU, the code ran approx. 100x slower - for me and the other guy. Do you have an idea what the reason could be, and how to prevent it? | programs paralyzing each other on the server (c++ with openMP and python) | 0 | 0.462117 | 1 | 0 | 0 | 290
6,298,813 | 2011-06-09T20:25:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,html,django,http-post | 0 | 6,298,838 | 0 | 5 | 0 | false | 1 | 0 | You can't. There is no way to send a form to two ressources.
What you CAN do is send a HTTP request in your register script to the newsletter script, e.g. using urllib2. | 1 | 2 | 0 | 0 | I am developing a django web app in which I would like to have a registration process. In this registration process I have of course a form asking for name, email and password. What I would like to do is, send the form via post to 2 different places. One of which is of course the registration database which saves the password and the like, and the other being the Emencia newsletter app. In the case it helps, Emencia only needs email and a name (optional).
So how can I do this with only one form, 2 places to send it to and, taking just some of the data of the form and not all?
Thank you! | html post form different destinations | 0 | 0 | 1 | 0 | 0 | 2,961 |
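One form, one POST target; the view then fans the data out server-side, per the urllib2 suggestion in the answer. A schematic sketch — `save_user` and `subscribe` are hypothetical stand-ins for the real registration code and the Emencia newsletter call:

```python
def handle_registration(form_data, save_user, subscribe):
    """Fan one submitted form out to two destinations, sending only
    the subset of fields each destination needs."""
    save_user(name=form_data["name"],
              email=form_data["email"],
              password=form_data["password"])
    # Emencia only needs email plus an optional name
    subscribe(email=form_data["email"],
              name=form_data.get("name"))
```

In a real Django view, `subscribe` would wrap an HTTP request (e.g. urllib2.urlopen) to the newsletter app.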
6,311,705 | 2011-06-10T20:25:00.000 | 0 | 0 | 1 | 0 | 0 | python,unit-testing,testing,mocking,decorator | 0 | 6,312,040 | 0 | 1 | 0 | false | 0 | 0 | You can't mock a decorator. A decorator replaces your function at compile time with the decorated function. If a function is decorated, you cannot test that function without the decorator without pulling the guts of the function out into another (non-decorated) function. | 1 | 0 | 0 | 0 | I saw you posting around decorators. I am having a hard time finding out how to Mock a decorator. Most searches show me how to write a decorate to help tes, but to be clear, I already have decorators and when I am unittesting a function that HAS a decorator I would like to mock it so its response is not part of the test.
Any guidance? | Python/Django Patching/Mocking a functions current decorator | 0 | 0 | 1 | 0 | 0 | 488 |
6,315,109 | 2011-06-11T09:17:00.000 | 0 | 0 | 1 | 0 | 1 | python,decimal | 1 | 6,315,223 | 0 | 1 | 0 | true | 0 | 0 | Maybe float("0.5")? that might be more suited to your problem. | 1 | 0 | 0 | 0 | I've made a program that calculates the flat rate interest of a loan based on the amount borrowed, the %/year (interest), and the length of time to pay it back...
here's where my problem starts:
I let the user input the years to pay it back, BUT if the length of time is under a year the user is forced to use a decimal like "0.5", and it produces the error "invalid literal for int() with base 10: '0.5'". Am I forgetting to do something? | I don't know how to change an input into integer with decimals | 0 | 1.2 | 1 | 0 | 0 | 105
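The fix suggested in the answer — convert with float() rather than int(), so fractional years like "0.5" are accepted:

```python
def parse_years(text):
    """Accept '5' as well as '0.5'; int() alone rejects '0.5'."""
    return float(text)

interval = parse_years("0.5")  # works, whereas int("0.5") raises ValueError
```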
6,319,575 | 2011-06-12T01:31:00.000 | 1 | 1 | 0 | 0 | 0 | python,apache,lighttpd | 0 | 6,319,726 | 0 | 2 | 0 | false | 1 | 0 | That you have mentioned gevent is important. Does that mean you are specifically trying to implement a long polling application? If you are and that functionality is the bulk of the application, then you will need to put your gevent server behind a front end web server that is implemented using async techniques rather than a processes/threading model. Lighttpd is an async server and fits that bill, whereas Apache isn't. So use of Apache isn't good as a front end proxy for a long polling application. If that is the criterion though, I would actually suggest you use nginx rather than Lighttpd.
Now if you are not doing long polling or anything else that needs high concurrency for long running requests, then you aren't necessarily going to gain too much by using gevent, especially if intention is to use a WSGI layer on top. For WSGI applications, ultimately the performance difference between different servers is minimal because your application is unlikely to be a hello world program that the benchmarks all use. The real bottlenecks are not the server but your application code, database, external callouts, lack of caching etc etc. In light of that, you should just use whatever WSGI hosting mechanism you find easier to use initially and when you properly work out what the hosting requirements are for your application, based on having an actual real application to test, then you can switch to something more appropriate if necessary.
In summary, you are just wasting your time trying to prematurely optimize by trying to find what may be the theoretically best server when in practice your application is what you should be concentrating on initially. After that, you also should be looking at application monitoring tools, because without monitoring tools how are you even going to determine if one hosting solution is better than another. | 2 | 3 | 0 | 0 | I'm a newbie to developing with Python and I'm piecing together the information I need to make intelligent choices in two other open questions. (This isn't a duplicate.)
I'm not developing using a framework but building a web app from scratch using the gevent library. As far as front-end web servers go, it seems I have three choices: nginx, apache, and lighttpd.
From all accounts that I've read, nginx's mod_wsgi isn't suitable.
That leaves two choices - lighttpd and Apache. Under heavy load, am I going to see major differences in performance and memory consumption characteristics? I'm under the impression Apache tends to be memory hungry even when not using prefork, but I don't know how suitable lighttpd is for Python apps.
Are there any caveats or benefits to using lighttpd over apache? I really want to hear all the information you can possibly bore me with! | Apache + mod_wsgi / Lighttpd + wsgi - am I going to see differences in performance? | 0 | 0.099668 | 1 | 0 | 0 | 2,472 |
6,319,575 | 2011-06-12T01:31:00.000 | 5 | 1 | 0 | 0 | 0 | python,apache,lighttpd | 0 | 6,319,667 | 0 | 2 | 0 | true | 1 | 0 | Apache...
Apache is by far the most widely used web server out there. Which is a good thing. There is so much more information on how to do stuff with it, and when something goes wrong there are a lot of people who know how to fix it. But, it is also the slowest out of the box; requiring a lot of tweaking and a beefier server than Lighttpd. In your case, it will be a lot easier to get off the ground using Apache and Python. There are countless AMP packages out there, and many guides on how to setup python and make your application work. Just a quick google search will get you on your way. Under heavy load, Lighttpd will outshine Apache, but Apache is like a train. It just keeps chugging along.
Pros
Wide User Base
Universal support
A lot of plugins
Cons
Slow out of the box
Requires performance tweaking
Memory whore (No way you could get it working on a 64MB VPS)
Lighttpd...
Lighttpd is the new kid on the block. It is fast, powerful, and kicks ass performance wise (not to mention it uses like no memory). Out of the box, Lighttpd wipes the floor with Apache. But, not as many people know Lighttpd, so getting it to work is harder. Yes, it is the second most used webserver, but it does not have as much community support behind it. If you look here, on stackoverflow, there is this dude who keeps asking about how to get his Python app working but nobody has helped him. Under heavy load, if configured correctly, Lighttpd will outperform Apache (I did some tests a while back, and you might see a 200-300% performance increase in requests per second).
Pros
Fast out of the box
Uses very little memory
Cons
Not as much support as Apache
Sometimes just does not work
Nginx
If you were running a static website, then you would use nginx. You are correct in saying nginx's mod_wsgi isn't suitable.
Conclusion
Benefits? There are both web servers; designed to be able to replace one another. If both web servers are tuned correctly and you have ample hardware, then there is no real benefit of using one over another. You should try and see which web server meets your need, but asking me; I would say go with Lighttpd. It is, in my opinion, easier to configure and just works.
Also, You should look at Cherokee Web Server. Mad easy to set up and, the performance aint half bad. And you should ask this on Server Fault as well. | 2 | 3 | 0 | 0 | I'm a newbie to developing with Python and I'm piecing together the information I need to make intelligent choices in two other open questions. (This isn't a duplicate.)
I'm not developing using a framework but building a web app from scratch using the gevent library. As far as front-end web servers go, it seems I have three choices: nginx, apache, and lighttpd.
From all accounts that I've read, nginx's mod_wsgi isn't suitable.
That leaves two choices - lighttpd and Apache. Under heavy load, am I going to see major differences in performance and memory consumption characteristics? I'm under the impression Apache tends to be memory hungry even when not using prefork, but I don't know how suitable lighttpd is for Python apps.
Are there any caveats or benefits to using lighttpd over apache? I really want to hear all the information you can possibly bore me with! | Apache + mod_wsgi / Lighttpd + wsgi - am I going to see differences in performance? | 0 | 1.2 | 1 | 0 | 0 | 2,472 |
6,319,583 | 2011-06-12T01:33:00.000 | 0 | 0 | 1 | 0 | 0 | algorithm,performance,python | 0 | 6,319,610 | 0 | 6 | 0 | false | 0 | 0 | 1) You are trying to return well formed numbers 'up to' n digits in your approach, that is probably not the thing they were asking for
2) Sorting each number in that range is a bit silly. You may check whether each number is a well formed one by comparing the consecutive digits, which will take O(d) time for each number. However, sorting will definitely take more than that. | 2 | 0 | 0 | 0 | I found this in an interview questions forum:
Write a function to return well formed
numbers of size n. A well formed
number is one in which digit i is less
than digit i+1, for example 123, 246, 349 etc
So here's how I would do it in Python:
input number of digits (x)
loop over all the numbers of x digits
for each number n, if str(n) == "".join(sorted(str(n))), print number
So my question is... Is this method efficient and pythonic? I'm sure there should be a more elegant way out there, so any tips would be great appreciated.
Craig | Return well-formed numbers | 0 | 0 | 1 | 0 | 0 | 371 |
6,319,583 | 2011-06-12T01:33:00.000 | 2 | 0 | 1 | 0 | 0 | algorithm,performance,python | 0 | 6,319,609 | 0 | 6 | 0 | false | 0 | 0 | In my opinion, you've already lost if you're checking every number.
I'd implement this with a stack. Start by putting 1-9 on the stack. When you take a number off of the stack, add another number to it if you can following those rules. If it's n digits, then print it. If it's not n digits, put it back on the stack.
Let's say you grab 7 from the stack. 8 and 9 are the only numbers bigger than 7, so in o(1) time you can put 78 and 79 on the stack. | 2 | 0 | 0 | 0 | I found this in an interview questions forum:
Write a function to return well formed
numbers of size n. A well formed
number is one in which digit i is less
than digit i+1, for example 123, 246, 349 etc
So here's how I would do it in Python:
input number of digits (x)
loop over all the numbers of x digits
for each number n, if str(n) == "".join(sorted(str(n))), print number
So my question is... Is this method efficient and pythonic? I'm sure there should be a more elegant way out there, so any tips would be great appreciated.
Craig | Return well-formed numbers | 0 | 0.066568 | 1 | 0 | 0 | 371 |
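The stack approach from the second answer can be sketched as follows — build candidates digit by digit instead of testing every number in the range:

```python
def well_formed(n):
    """Generate the n-digit numbers whose digits strictly increase
    (123, 246, 349, ...), building them on a stack instead of
    checking every number in the range."""
    stack = list(range(1, 10))
    results = []
    while stack:
        num = stack.pop()
        if len(str(num)) == n:
            results.append(num)
            continue
        last = num % 10
        for d in range(last + 1, 10):   # only digits bigger than the last
            stack.append(num * 10 + d)
    return sorted(results)
```

For example, popping 7 pushes 78 and 79, exactly as the answer describes.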
6,325,775 | 2011-06-13T01:02:00.000 | 1 | 0 | 1 | 0 | 0 | python,oop | 0 | 6,325,854 | 0 | 6 | 0 | false | 0 | 1 | You're conflating two meanings of the "destroying" idea. The Item should get destroyed in a "gameplay" sense. Let the garbage collector worry about when to destroy it as an object.
Who has a reference to the Item? Perhaps the player has it in his inventory, or it is in a room in the game. In either case your Inventory or Room objects know about the Item. Tell them the Item has been destroyed (in a gameplay sense) and let them handle that. Perhaps they'll now keep a reference to a "broken" Item. Perhaps they'll keep track of it, but not display it to the user. Perhaps they'll delete all references to it, in which case the object in memory will soon be deleted.
The beauty of object-oriented programming is that you can abstract these processes away from the Item itself: pass the messages to whoever needs to know, and let them implement in their own way what it means for the Item to be destroyed. | 3 | 5 | 0 | 0 | Every once in a while I like to take a break from my other projects to try to make a classic adventure text-based-game (in Python, this time) as a fun project, but I always have design issues implementing the item system.
I'd like for the items in the game to descend from one base Item class, containing some attributes that every item has, such as damage and weight. My problems begin when I try to add some functionality to these items. When an item's damage gets past a threshold, it should be destroyed. And there lies my problem: I don't really know how to accomplish that.
Since del self won't work for a million different reasons, (Edit: I am intentionally providing the use of 'del' as something that I know is wrong. I know what garbage collection is, and how it is not what I want.) how should I do this (And other similar tasks)? Should each item contain some kind of reference to it's container (The player, I guess) and 'ask' for itself to be deleted?
The first thing that comes to mind is a big dictionary containing every item in the game, and each object would have a reference to this list, and both have and know its own unique ID. I don't like this solution at all and I don't think that it's the right way to go at all. Does anybody have any suggestions?
EDIT: I'm seeing a lot of people thinking that I'm worried about garbage collection. What I'm talking about is not garbage collection, but actually removing the object from gameplay. I'm not sure about what objects should initiate the removal, etc. | Managing Items in an Object Oriented game | 0 | 0.033321 | 1 | 0 | 0 | 1,852 |
6,325,775 | 2011-06-13T01:02:00.000 | 0 | 0 | 1 | 0 | 0 | python,oop | 0 | 6,325,828 | 0 | 6 | 0 | false | 0 | 1 | Assuming you call a method when the item is used, you could always return a boolean value indicating whether it's broken. | 3 | 5 | 0 | 0 | Every once in a while I like to take a break from my other projects to try to make a classic adventure text-based-game (in Python, this time) as a fun project, but I always have design issues implementing the item system.
I'd like for the items in the game to descend from one base Item class, containing some attributes that every item has, such as damage and weight. My problems begin when I try to add some functionality to these items. When an item's damage gets past a threshold, it should be destroyed. And there lies my problem: I don't really know how to accomplish that.
Since del self won't work for a million different reasons, (Edit: I am intentionally providing the use of 'del' as something that I know is wrong. I know what garbage collection is, and how it is not what I want.) how should I do this (And other similar tasks)? Should each item contain some kind of reference to it's container (The player, I guess) and 'ask' for itself to be deleted?
The first thing that comes to mind is a big dictionary containing every item in the game, and each object would have a reference to this list, and both have and know its own unique ID. I don't like this solution at all and I don't think that it's the right way to go at all. Does anybody have any suggestions?
EDIT: I'm seeing a lot of people thinking that I'm worried about garbage collection. What I'm talking about is not garbage collection, but actually removing the object from gameplay. I'm not sure about what objects should initiate the removal, etc. | Managing Items in an Object Oriented game | 0 | 0 | 1 | 0 | 0 | 1,852 |
6,325,775 | 2011-06-13T01:02:00.000 | -1 | 0 | 1 | 0 | 0 | python,oop | 0 | 6,325,868 | 0 | 6 | 0 | false | 0 | 1 | at first: i don't have any python experience, so think about this in a more general way
your item should neither know nor care ... your Item should have an interface that says it is something destroyable. Containers and other objects that care about things that can be destroyed can make use of that interface
that destroyable interface could have some option for consuming objects to register a callback or event, triggered when the item gets destroyed | 3 | 5 | 0 | 0 | Every once in a while I like to take a break from my other projects to try to make a classic adventure text-based-game (in Python, this time) as a fun project, but I always have design issues implementing the item system.
I'd like for the items in the game to descend from one base Item class, containing some attributes that every item has, such as damage and weight. My problems begin when I try to add some functionality to these items. When an item's damage gets past a threshold, it should be destroyed. And there lies my problem: I don't really know how to accomplish that.
Since del self won't work for a million different reasons, (Edit: I am intentionally providing the use of 'del' as something that I know is wrong. I know what garbage collection is, and how it is not what I want.) how should I do this (And other similar tasks)? Should each item contain some kind of reference to it's container (The player, I guess) and 'ask' for itself to be deleted?
The first thing that comes to mind is a big dictionary containing every item in the game, and each object would have a reference to this list, and both have and know its own unique ID. I don't like this solution at all and I don't think that it's the right way to go at all. Does anybody have any suggestions?
EDIT: I'm seeing a lot of people thinking that I'm worried about garbage collection. What I'm talking about is not garbage collection, but actually removing the object from gameplay. I'm not sure about what objects should initiate the removal, etc. | Managing Items in an Object Oriented game | 0 | -0.033321 | 1 | 0 | 0 | 1,852 |
6,335,548 | 2011-06-13T19:53:00.000 | 0 | 0 | 1 | 0 | 0 | python,time | 0 | 6,336,220 | 0 | 2 | 0 | false | 0 | 0 | Fork a subprocess to do the actual job; kill the job if it exceeds your run-time limit. | 1 | 0 | 0 | 0 | I'm having trouble figuring out how to do this.
I'm trying to run a python script for a set duration. And every 1/10 of the duration I need it to run something. The problem is this step can take any amount of time to complete. I cannot go over the maximum duration set at the start.
Example:
Duration 20 hours
Interval = 20/10 = 2 hours (This can change if it needs to)
Every two hours it runs function(). function() takes between 0-60 minutes to complete. And then it sleeps. How can I make it so that it continues to run 9 more times, but doesn't go over the max duration? | Python Time Problem | 0 | 0 | 1 | 0 | 0 | 363 |
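One way to keep all 10 runs inside the total duration: after each run, sleep only for whatever is left of the current slot, and stop early if the budget is exhausted. A sketch — the clock and sleep functions are injectable only so the example is testable; pass nothing to use real wall-clock time:

```python
import time

def run_on_schedule(task, duration, runs=10, clock=None, sleep=None):
    """Run `task` `runs` times, one per duration/runs slot, without
    exceeding `duration` overall.  If a run overflows its slot, the
    next one starts immediately."""
    clock = clock or time.time
    sleep = sleep or time.sleep
    interval = duration / float(runs)
    start = clock()
    for i in range(runs):
        if clock() - start >= duration:
            break                       # out of time, stop early
        task()
        remaining = start + (i + 1) * interval - clock()
        if remaining > 0:
            sleep(remaining)            # sleep only the rest of the slot
```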
6,337,812 | 2011-06-14T00:14:00.000 | 1 | 0 | 1 | 0 | 0 | python,sqlalchemy | 0 | 6,338,431 | 0 | 1 | 0 | true | 0 | 0 | What is the problem here? SQLAlchemy maintains a thread-local connection pool..what else do you need? | 1 | 0 | 0 | 0 | I want to do the following:
Have a software running written in Python 2.7
This software connects to a database (Currently a MySQL database)
This software listens for connections on a port X over TCP
When a connection is established, a client x requests or commands something, then the software uses the database to store, remove or fetch information (based on the request or command).
What I currently have in mind is the classic approach of connecting to the database, storing the connection to the database in an object (as a variable) that is passed to the threads that are spawned by the connection listener; then these threads use the variable in the object to do what they need to do with the database connection. (I know that multi-processing is better than multi-threading in Python, but it's not related to my question at this time)
Now my question, how should I use SQLAlchemy in this context? I am quite confused even although I have been reading quite a lot of documentation about it and there doesn't seem to be "good" examples on how to handle this kind of situation specifically even although I have been searching quite a lot. | How to use SQLAlchemy in this context | 0 | 1.2 | 1 | 1 | 0 | 183 |
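SQLAlchemy's thread-local pool (and scoped_session) handles this for you, as the answer notes. The underlying idea — one connection per worker thread — can be sketched with the stdlib alone; sqlite3 stands in for MySQL here, and the class below is an illustration of the pattern, not SQLAlchemy's actual implementation:

```python
import sqlite3
import threading

class PerThreadConnections(object):
    """Give each thread its own DB connection on first use — the idea
    behind SQLAlchemy's thread-local connection pool."""
    def __init__(self, connect):
        self._connect = connect          # connection factory
        self._local = threading.local()

    def get(self):
        if not hasattr(self._local, "conn"):
            self._local.conn = self._connect()
        return self._local.conn

pool = PerThreadConnections(lambda: sqlite3.connect(":memory:"))
```

Each spawned handler thread just calls pool.get() and always receives the same connection for its own lifetime, never another thread's.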
6,345,156 | 2011-06-14T14:29:00.000 | 4 | 0 | 1 | 0 | 0 | python,image,ms-word,extract,pywin32 | 0 | 6,932,276 | 0 | 4 | 0 | false | 0 | 0 | Docx files can be unzipped for extracting the images. | 1 | 6 | 0 | 0 | I would like to run a script on a folder full of word documents that reads through the documents and pulls out images and their captions (text right below the images). From the research I've done, I think pywin32 might be a viable solution. I know how to use pywin32 to find strings and pull them out, but I need help with the images part. How can I read through a docx file and have an event occur when an image is found? Thank you for any help! I am using Python 2.7. | Using Python to extract images and text from a word document | 0 | 0.197375 | 1 | 0 | 0 | 8,790 |
6,355,449 | 2011-06-15T09:09:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,string | 0 | 6,355,486 | 0 | 1 | 0 | true | 1 | 0 | The most flexible way - is to create custom template filter. If string needs formatting, it will do that, if it doesn't - just output it. | 1 | 1 | 0 | 0 | I have a question list for a questionnaire which are stored in database but some questions has to be modified by certain parameters.
For example, if someone selects an employer name on the previous page, some questions should include the employer's name: "Do you like to work for ........ company?".
One solution might be saving the question like "Do you like to work for {0}" and formatting it, but I am not sure how I can implement it with Python.
But how can I detect which questions need to be modified?
Is there any easy way to do it in Django?
Thanks | Formatting strings with python in django | 0 | 1.2 | 1 | 0 | 0 | 170 |
6,359,581 | 2011-06-15T14:44:00.000 | 1 | 1 | 0 | 0 | 0 | python,views,plone | 0 | 6,360,580 | 0 | 1 | 0 | true | 1 | 0 | Just import it and call it as any other function. You don't want to make it a view - that requires you to do a MultiAdapter lookup which is a real pain, and completely unnecessary.
[Edit - strictly speaking, using a view is a MultiAdapter lookup, but you can shortcut it via traversal; even so, that still isn't worth the effort] | 1 | 0 | 0 | 0 | I have a Python function registered as a View in Plone. I need to be able to call another function from within this registered function. I'm not sure if it would be best to register this other function as a view as well and try to call that (don't know how to call other views), or if there is a better way to handle this.
Basically I'm creating a function in Python that needs to be callable from other Python functions (that are registered as Views).
Edit -
I have tried calling it like any other function:
(pytest.py)
def Test(self):
return "TEST"
And in my Python script registered as a view:
import pytest
def PageFunction(self):
return pytest.Test()
However, this always seems to crash. If I leave the pytest.Test() out and return a simple string, it seems to work fine (so I don't think the import pytest line is causing any problems...) | Python Plone views call others | 0 | 1.2 | 1 | 0 | 0 | 327 |
6,364,430 | 2011-06-15T21:10:00.000 | 6 | 1 | 1 | 0 | 0 | python,perl | 0 | 6,368,606 | 0 | 2 | 0 | false | 0 | 0 | Perl strings definitely are not immutable. Each string has a buffer, the initial offset of the string in the buffer, the length of the buffer, and the amount of the buffer used. Additionally, for utf8 strings, the character length is cached when it needs to be calculated. At one point, there was some caching of additional character-offset-to-byte-offset information too, but I'm not certain that's still in place.
If the buffer needs to be increased, it reallocs it. Perl on many platforms knows the granularity of the system malloc, so it can allocate, say, a 14-byte buffer for an 11-byte string, knowing that this won't actually take any additional memory.
The initial offset allows O(1) removal of data from the beginning of the string. | 1 | 12 | 0 | 0 | I've been wondering lately how various operations I perform on basic types like strings and integers work in terms of performance, and I figure I could get a much better idea of this if I knew how those basic types were implemented (i.e. I've heard strings and integers are immutable in Python. Does that mean any operation that modifies one character in a string is O(n) because a completely new string has to be created? How about adding numbers?)
I'm curious about this in both Python and Perl, and felt silly asking basically the same question twice, so I'm just wrapping it into one.
If you can include some example operation costs with your answer, that would make it even more helpful. | How are basic data types (strings and integers) implemented in Python and Perl | 0 | 1 | 1 | 0 | 0 | 1,412 |
6,371,097 | 2011-06-16T11:30:00.000 | 2 | 1 | 0 | 0 | 0 | python,cgi | 0 | 6,371,127 | 0 | 2 | 0 | true | 0 | 0 | Definitely the wrong tool. Multiple times.
Store the file outside of the document root.
Store a key to the file in the user's session.
Use a web framework.
Use WSGI. | 2 | 1 | 0 | 0 | I have a web page that uses a Python cgi script to store requested information for later retrieval by me. As an example, the web page has a text box that asks "What is your name?" When the user inputs his name and hits the submit button, the web page calls the Python cgi script which writes the user's name to mytextfile.txt on the web site. The problem is that if anyone goes to www.mydomain.com/mytextfile.txt, they can see all of the information written to the text file. Is there a solution to this? Or am I using the wrong tool? Thanks for your time. | Python CGI how to save requested information securely? | 0 | 1.2 | 1 | 0 | 1 | 104 |
6,371,097 | 2011-06-16T11:30:00.000 | 0 | 1 | 0 | 0 | 0 | python,cgi | 0 | 6,371,124 | 0 | 2 | 0 | false | 0 | 0 | Store it outside the document root. | 2 | 1 | 0 | 0 | I have a web page that uses a Python cgi script to store requested information for later retrieval by me. As an example, the web page has a text box that asks "What is your name?" When the user inputs his name and hits the submit button, the web page calls the Python cgi script which writes the user's name to mytextfile.txt on the web site. The problem is that if anyone goes to www.mydomain.com/mytextfile.txt, they can see all of the information written to the text file. Is there a solution to this? Or am I using the wrong tool? Thanks for your time. | Python CGI how to save requested information securely? | 0 | 0 | 1 | 0 | 1 | 104 |
6,373,779 | 2011-06-16T14:43:00.000 | 4 | 1 | 0 | 0 | 0 | python,command,linkedin | 0 | 6,393,078 | 0 | 2 | 0 | true | 0 | 0 | The Member to Member API will return a 2xx status code if your message is accepted by LinkedIn. And a 4xx status code if there's an error.
This means the message was put into the LinkedIn system, not that it has been opened, read, emailed, etc. You cannot get that via the API. | 1 | 0 | 0 | 0 | I want to access my LinkedIn account from the command prompt and then send mails from my account using a command.
Also, I need the delivery reports of the mails.
Does anyone know how I can do that? | How to access linkedin from python command | 0 | 1.2 | 1 | 0 | 1 | 1,130 |
6,390,393 | 2011-06-17T18:49:00.000 | 23 | 0 | 0 | 0 | 0 | python,matplotlib | 0 | 34,919,615 | 0 | 10 | 0 | false | 0 | 0 | In current versions of Matplotlib, you can do axis.set_xticklabels(labels, fontsize='small'). | 2 | 421 | 1 | 0 | In a matplotlib figure, how can I make the font size for the tick labels using ax1.set_xticklabels() smaller?
Further, how can one rotate it from horizontal to vertical? | Matplotlib make tick labels font size smaller | 0 | 1 | 1 | 0 | 0 | 870,128 |
6,390,393 | 2011-06-17T18:49:00.000 | 16 | 0 | 0 | 0 | 0 | python,matplotlib | 0 | 37,869,225 | 0 | 10 | 0 | false | 0 | 0 | For smaller font, I use
ax1.set_xticklabels(xticklabels, fontsize=7)
and it works! | 2 | 421 | 1 | 0 | In a matplotlib figure, how can I make the font size for the tick labels using ax1.set_xticklabels() smaller?
Further, how can one rotate it from horizontal to vertical? | Matplotlib make tick labels font size smaller | 0 | 1 | 1 | 0 | 0 | 870,128 |
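In later matplotlib versions, tick_params does both jobs (size and rotation) in one call. A minimal sketch; the Agg backend is used here only so it runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])
# smaller x tick labels, rotated from horizontal to vertical
ax.tick_params(axis="x", labelsize=7, labelrotation=90)
fig.savefig("ticks.png")
```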
6,393,620 | 2011-06-18T03:13:00.000 | 0 | 0 | 1 | 0 | 1 | python,regex,text,split | 0 | 6,393,773 | 0 | 5 | 0 | false | 0 | 0 | It's also possible that your file is using a format that's compatible with the csv module, you could also look into that, especially if the format allows quoting, because then line.split would break. If the format doesn't use quoting and it's just delimiters and text, line.split is probably the best.
Also, for the re module, any special characters can be escaped with \, like r'\^'. Before jumping to re, I'd suggest 1) learning how to write regular expressions, and 2) first looking for a solution to your problem other than regular expressions: «Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.» | 1 | 0 | 0 | 0 | I have a huge text file, where each line looks like this:
Some sort of general menu^a_sub_menu_title^^pagNumber
Notice that the first part ("general menu") contains white spaces, in the second part (a subtitle) each word is separated with the "_" character, and finally there is a number (a page number). I want to split each line in 3 (obvious) parts, because I want to create some sort of directory in Python.
I was trying with the re module, but as the caret character has a special meaning in that module, I couldn't figure out how to do it.
Could someone please help me? | Split string with caret character in python | 0 | 0 | 1 | 0 | 0 | 2,372 |
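Since ^ is only special inside a regular expression, plain str.split sidesteps the problem. A sketch against the sample line format (the function name is mine, and parts[2] is the empty field produced by the doubled caret):

```python
def split_menu_line(line):
    """Split 'menu^submenu^^pageNumber' into its three text parts."""
    parts = line.split("^")  # '^' has no special meaning to str.split
    return parts[0], parts[1], parts[3]  # parts[2] is empty because of '^^'

print(split_menu_line("Some sort of general menu^a_sub_menu_title^^42"))
# ('Some sort of general menu', 'a_sub_menu_title', '42')
```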
6,424,975 | 2011-06-21T12:08:00.000 | 5 | 0 | 0 | 0 | 0 | python,python-2.7,selenium-webdriver,selenium-rc | 0 | 7,946,600 | 0 | 2 | 0 | false | 0 | 0 | I know this was already answered, but it may help someone else. Another way to get your Selenium server's version is to right-click the selenium-server.jar and open it with any file archiver software such as 7zip or WinRAR. There you should find a file called VERSION.txt which will tell you your server's version | 1 | 2 | 0 | 0 | I am using Selenium 2 RC with the Python client (selenium.py) and I need to get the version of Selenium on the server (for example "2rc2", "2rc3", etc.).
Is there any command I can send to the server to get its version? | how to get the version of Selenium RC server | 1 | 0.462117 | 1 | 0 | 1 | 10,529 |
6,425,535 | 2011-06-21T12:54:00.000 | 4 | 0 | 0 | 0 | 0 | python,django,model | 0 | 6,426,206 | 0 | 4 | 1 | false | 1 | 0 | Can you seriously envisage a possibility that you're going to just ditch the Django ORM, but keep everything else? Or that if you ditched Django totally, any of your code is still going to be applicable?
You don't complain that if you ditched Django, you'll have to rewrite all your templates. Of course you will, that's to be expected. So why is it OK for the presentation layer to be bound up with the framework, but not the persistence layer?
This sort of up-front over-analysis is to be avoided. Django is a RAD tool, and is best suited to quick, iterative development. For all that, it's capable of building some powerful, long-lived applications, as plenty of large companies will testify. But it's not Java, and it's not "enterprisey", and it doesn't conform particularly well to OO principles. In the Python world, that's seen as a feature, not a bug.
I want to start implementing some of my domain entity classes which need persistence. It seems that Django would have me implement these as classes that are inherited from the Django models class in order to use the Django ORM for persistence. However, this seems like far too strong coupling between my class entities and the persistence mechanism. What happens if at some stage I want to ditch Django and use another web development framework, or just ditch Django’s ORM for an alternative? Now I have to re-write my domain entity classes from scratch.
So it would be better to implement my domain classes as standalone Python classes, encapsulating all my business logic in these, and then use some mechanism (design pattern such as bridge or adapter or ???) to delegate persistence storage of these domain classes to the Django ORM, for example through a Django model class that has been appropriately set up for this.
Does anyone have suggestion on how to go about doing this? It seems from all I have read that people simply implement their domain classes as classes inherited from the Django model class and have business logic mixed within this class. This does not seem a good idea for down line changes, maintenance, reusability etc. | Decoupling Domain classes from Django Model Classes | 0 | 0.197375 | 1 | 0 | 0 | 2,441 |
6,425,535 | 2011-06-21T12:54:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,model | 0 | 6,426,280 | 0 | 4 | 1 | false | 1 | 0 | You would not have to "rewrite your models from scratch" if you wanted a different persistence mechanism. The whole point of an activerecord-style persistence system is that it imposes minimal constraints on the model classes, and acts largely transparently.
If you're really worried, abstract out any code that relies on queries into their own methods. | 4 | 15 | 0 | 0 | So I have completed my OO analysis and design of a web application that I am building and am now getting into implementation. Design decisions have been made to implement the system using Python and the web development framework Django.
I want to start implementing some of my domain entity classes which need persistence. It seems that Django would have me implement these as classes that are inherited from the Django models class in order to use the Django ORM for persistence. However, this seems like far too strong coupling between my class entities and the persistence mechanism. What happens if at some stage I want to ditch Django and use another web development framework, or just ditch Django’s ORM for an alternative? Now I have to re-write my domain entity classes from scratch.
So it would be better to implement my domain classes as standalone Python classes, encapsulating all my business logic in these, and then use some mechanism (design pattern such as bridge or adapter or ???) to delegate persistence storage of these domain classes to the Django ORM, for example through a Django model class that has been appropriately set up for this.
Does anyone have suggestion on how to go about doing this? It seems from all I have read that people simply implement their domain classes as classes inherited from the Django model class and have business logic mixed within this class. This does not seem a good idea for down line changes, maintenance, reusability etc. | Decoupling Domain classes from Django Model Classes | 0 | 0 | 1 | 0 | 0 | 2,441 |
6,425,535 | 2011-06-21T12:54:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,model | 0 | 20,776,976 | 0 | 4 | 1 | false | 1 | 0 | I think that there's no implemented solution for decoupling Django models and the domain classes, at least I haven't found any. In fact, the only ORM with such decoupling that I know exists only in Smalltalk world and it's called GLORP. It allows you to persist your domain model in a relational DB without having to modify domain classes. I'm currently trying to implement similar ideas to decouple from Django ORM. My motivation is that current strong coupling between DB tables and domain classes hurts software evolution badly. I'll post again if I succeed :) | 4 | 15 | 0 | 0 | So I have completed my OO analysis and design of a web application that I am building and am now getting into implementation. Design decisions have been made to implement the system using Python and the web development framework Django.
I want to start implementing some of my domain entity classes which need persistence. It seems that Django would have me implement these as classes that are inherited from the Django models class in order to use the Django ORM for persistence. However, this seems like far too strong coupling between my class entities and the persistence mechanism. What happens if at some stage I want to ditch Django and use another web development framework, or just ditch Django’s ORM for an alternative? Now I have to re-write my domain entity classes from scratch.
So it would be better to implement my domain classes as standalone Python classes, encapsulating all my business logic in these, and then use some mechanism (design pattern such as bridge or adapter or ???) to delegate persistence storage of these domain classes to the Django ORM, for example through a Django model class that has been appropriately set up for this.
Does anyone have suggestion on how to go about doing this? It seems from all I have read that people simply implement their domain classes as classes inherited from the Django model class and have business logic mixed within this class. This does not seem a good idea for down line changes, maintenance, reusability etc. | Decoupling Domain classes from Django Model Classes | 0 | 0 | 1 | 0 | 0 | 2,441 |
6,425,535 | 2011-06-21T12:54:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,model | 0 | 6,426,080 | 0 | 4 | 1 | false | 1 | 0 | Well, the way to go with Django is to inherit from Django's base model classes. This is the 'active record' pattern. Your django models will have all CRUD and query methods along with you business logic (if you decide to add it of course). This is seen as an anti-pattern in the java world, but the cool thing about it is that it can speed up development really really fast. | 4 | 15 | 0 | 0 | So I have completed my OO analysis and design of a web application that I am building and am now getting into implementation. Design decisions have been made to implement the system using Python and the web development framework Django.
I want to start implementing some of my domain entity classes which need persistence. It seems that Django would have me implement these as classes that are inherited from the Django models class in order to use the Django ORM for persistence. However, this seems like far too strong coupling between my class entities and the persistence mechanism. What happens if at some stage I want to ditch Django and use another web development framework, or just ditch Django’s ORM for an alternative? Now I have to re-write my domain entity classes from scratch.
So it would be better to implement my domain classes as standalone Python classes, encapsulating all my business logic in these, and then use some mechanism (design pattern such as bridge or adapter or ???) to delegate persistence storage of these domain classes to the Django ORM, for example through a Django model class that has been appropriately set up for this.
Does anyone have suggestion on how to go about doing this? It seems from all I have read that people simply implement their domain classes as classes inherited from the Django model class and have business logic mixed within this class. This does not seem a good idea for down line changes, maintenance, reusability etc. | Decoupling Domain classes from Django Model Classes | 0 | 0.148885 | 1 | 0 | 0 | 2,441 |
6,432,499 | 2011-06-21T21:56:00.000 | 1 | 0 | 0 | 0 | 0 | python,statistics,numpy,probability,random-sample | 0 | 6,432,586 | 0 | 9 | 0 | false | 0 | 0 | How about creating 3 "a", 4 "b" and 3 "c" in a list and then just randomly selecting one. With enough iterations you will get the desired probability. | 2 | 29 | 1 | 0 | Given a list of tuples where each tuple consists of a probability and an item, I'd like to sample an item according to its probability. For example, given the list [ (.3, 'a'), (.4, 'b'), (.3, 'c')] I'd like to sample 'b' 40% of the time.
What's the canonical way of doing this in python?
I've looked at the random module which doesn't seem to have an appropriate function and at numpy.random which although it has a multinomial function doesn't seem to return the results in a nice form for this problem. I'm basically looking for something like mnrnd in matlab.
Many thanks.
Thanks for all the answers so quickly. To clarify, I'm not looking for explanations of how to write a sampling scheme, but rather to be pointed to an easy way to sample from a multinomial distribution given a set of objects and weights, or to be told that no such function exists in a standard library and so one should write one's own. | How to do weighted random sample of categories in python | 0 | 0.022219 | 1 | 0 | 0 | 12,259 |
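The cumulative-probabilities-plus-bisect idea mentioned in the answers can be packaged into a small function. A sketch (the rnd parameter exists only to make it deterministic for testing; on Python 3.6+, random.choices covers this case directly):

```python
import bisect
import random

def weighted_choice(pairs, rnd=random.random):
    """pairs: list of (probability, item) tuples; probabilities should sum to 1."""
    cumulative = []
    total = 0.0
    for probability, _item in pairs:
        total += probability
        cumulative.append(total)
    u = rnd() * total  # scaling by total tolerates weights that don't sum to 1
    return pairs[bisect.bisect(cumulative, u)][1]

pairs = [(.3, 'a'), (.4, 'b'), (.3, 'c')]
print(weighted_choice(pairs))  # 'a' ~30%, 'b' ~40%, 'c' ~30% of the time
```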
6,432,499 | 2011-06-21T21:56:00.000 | 0 | 0 | 0 | 0 | 0 | python,statistics,numpy,probability,random-sample | 0 | 6,432,588 | 0 | 9 | 0 | false | 0 | 0 | I'm not sure if this is the pythonic way of doing what you ask, but you could use
random.sample(['a','a','a','b','b','b','b','c','c','c'],k)
where k is the number of samples you want.
For a more robust method, bisect the unit interval into sections based on the cumulative probability and draw from the uniform distribution (0,1) using random.random(). In this case the subintervals would be (0,.3), (.3,.7), (.7,1). You choose the element based on which subinterval it falls into. | 2 | 29 | 1 | 0 | Given a list of tuples where each tuple consists of a probability and an item, I'd like to sample an item according to its probability. For example, given the list [ (.3, 'a'), (.4, 'b'), (.3, 'c')] I'd like to sample 'b' 40% of the time.
What's the canonical way of doing this in python?
I've looked at the random module which doesn't seem to have an appropriate function and at numpy.random which although it has a multinomial function doesn't seem to return the results in a nice form for this problem. I'm basically looking for something like mnrnd in matlab.
Many thanks.
Thanks for all the answers so quickly. To clarify, I'm not looking for explanations of how to write a sampling scheme, but rather to be pointed to an easy way to sample from a multinomial distribution given a set of objects and weights, or to be told that no such function exists in a standard library and so one should write one's own. | How to do weighted random sample of categories in python | 0 | 0 | 1 | 0 | 0 | 12,259 |
6,445,620 | 2011-06-22T19:48:00.000 | 0 | 0 | 1 | 0 | 1 | python,python-2.7,winapi | 0 | 6,448,014 | 0 | 1 | 0 | false | 0 | 0 | One approach that might work would be to do something like so:
get the window handle (FindWindow() or something similar, there are a few ways to do this)
get the window dimensions (GetClientRect() or GetWindowRect())
get the device context for the window (GetWindowDC())
get the image data from the window (BitBlt() or similar)
It is possible that you will need elevated privileges to access another process's window DC; if so, you may need to inject code or a DLL into the target process space to do this.
HTH. | 1 | 1 | 0 | 0 | I am a new programmer with little experience, but I am in the process of learning Python 2.7. I use Python(x,y) or Spyder, as the programs are called, on Windows 7.
The main packages I'm using are numpy, pil and potentially win32gui.
I am currently trying to write a program to mine information from 3rd-party software. This is against their wishes and they have made it difficult. I'm using ImageGrab and then numpy to get some results. This, however (or so I believe), forces me to keep the window I want to read in focus, which is not optimal.
I'm wondering if there is any way to hijack the whole window and redirect the output directly into a "virtual" copy, just so I can have it running in the background?
Looking at the demos for win32api, there is a script called desktopmanager that is supposed to create new desktops. I never got it to work, probably because I'm running Windows 7. I don't really know how multiple desktops work, but if they run in parallel, there may be a way to create a new desktop around a current window. I don't know how; it's just a thought so far.
The reason it's not working for me is not that it's not creating a new desktop; it's that once it's been created, I can't return from it. Neither the taskbar icon nor the taskbar itself ever appears. | Hijacking, redirecting, display output with Python 2.7 | 0 | 0 | 1 | 0 | 0 | 287 |
6,453,067 | 2011-06-23T11:08:00.000 | 2 | 0 | 0 | 0 | 0 | python,cursor,mysql-python | 0 | 6,453,159 | 0 | 1 | 0 | true | 0 | 0 | Should I close properly my cursor?
Yes, you should. Explicit is better than implicit.
Should I create new cursors for every query, or one cursor is enough for multiple different queries in the same DB?
This depends on how you use this cursor. For simple tasks it is enough to use one cursor. For some complex application it is better to create separate cursor for each batch of SQL-queries. | 1 | 2 | 0 | 0 | I'm trying to figure out how to use python's mysqldb. I can do my job with my current knownledge, but I want to use the best practices.
Should I properly close my cursor? Doesn't exiting the program close it automatically? (Shouldn't I expect the object destructor to do it anyway?)
Should I create new cursors for every query, or one cursor is enough for multiple different queries in the same DB? | How to properly use mysqldb in python | 0 | 1.2 | 1 | 1 | 0 | 742 |
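Both questions above (explicit closing, and one cursor per batch of queries) can be handled with contextlib.closing, since DB-API 2.0 cursors look the same everywhere; sqlite3 stands in for MySQLdb below purely so the sketch is self-contained:

```python
import sqlite3
from contextlib import closing

conn = sqlite3.connect(":memory:")  # with MySQLdb this would be MySQLdb.connect(...)

with closing(conn.cursor()) as cur:  # cursor is closed even if a query raises
    cur.execute("CREATE TABLE users (name TEXT)")
    cur.execute("INSERT INTO users VALUES (?)", ("alice",))
    conn.commit()

with closing(conn.cursor()) as cur:  # a fresh cursor for the next batch
    cur.execute("SELECT name FROM users")
    rows = cur.fetchall()

conn.close()
print(rows)  # [('alice',)]
```

Note that MySQLdb uses %s placeholders rather than sqlite3's ?, but the cursor lifecycle shown here is the same.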
6,463,179 | 2011-06-24T04:00:00.000 | 2 | 0 | 0 | 0 | 1 | python,url,login,download,urllib2 | 0 | 6,463,190 | 0 | 1 | 0 | true | 0 | 0 | It is the task of the remote server/service to provide the Content-Disposition header.
There is nothing you can do unless the remote server/service is under your own control. | 1 | 0 | 0 | 0 | So I finally managed to get my script to log in to a website and download a file... however, in some instances I will have a url like "http://www.test.com/index.php?act=Attach&type=post&id=3345". Firefox finds the filename ok... so I should be able to.
I am unable to find the "Content-Disposition" header via something like remotefile.info()['Content-Disposition']
Also, remotefile.geturl() returns the same url.
What am I missing? How do I get the actual filename? I would prefer using the built-in libraries. | Using python (urllib) to download a file, how to get the real filename? | 0 | 1.2 | 1 | 0 | 1 | 580 |
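Even when the header is present, the filename still has to be parsed out of it. A minimal sketch (the header value and function name are made up for illustration, and this ignores the RFC 5987 filename*= form):

```python
def filename_from_disposition(header):
    """Return the filename="..." value from a Content-Disposition header, or None."""
    for part in header.split(";"):
        part = part.strip()
        if part.lower().startswith("filename="):
            return part.split("=", 1)[1].strip('"')
    return None

print(filename_from_disposition('attachment; filename="report.pdf"'))  # report.pdf
```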
6,467,651 | 2011-06-24T12:15:00.000 | 0 | 0 | 1 | 0 | 0 | c++,python,windows,multithreading,audio | 0 | 6,468,189 | 0 | 4 | 0 | false | 0 | 0 | py2.6 comes with process-based threading as well, so you don't have to use just green threads | 1 | 1 | 0 | 0 | I'd like to write a program that captures the audio stream from the microphone and at the same time mixes this stream with a playing audio file.
I'm looking for a library, API, etc., but my concern is about the implementation: should I use threaded programming? I don't know how to use a thread yet.
The operating system is Windows; the language is C++ or Python.
thanks | Should I use threading programming for mixing 2 audio stream? | 0 | 0 | 1 | 0 | 0 | 1,037 |
6,471,569 | 2011-06-24T17:35:00.000 | 1 | 0 | 0 | 0 | 0 | python,pyserial | 0 | 6,474,062 | 0 | 1 | 0 | true | 0 | 0 | You will find it easier to use a USB scanner. These will decode the scan, and send it as if it were typed on the keyboard, and entered with a trailing return.
The barcode is typically written with leading and trailing * characters, but these are not sent with the scan.
Thus you print "*AB123*" using a 3 of 9 font, and when it is scanned sys.stdin.readline().strip() will return "AB123".
There are more than a few options that can be set in the scanner, so you need to read the manual. I have shown the factory default above for a cheap nameless scanner I bought from Amazon. | 1 | 1 | 0 | 0 | I have to read incoming data from a barcode scanner using pyserial. Then I have to store the contents into a MySQL database. I have the database part but not the serial part. can someone show me examples of how to do this. I'm using a windows machine. | Reading incoming data from barcode | 0 | 1.2 | 1 | 1 | 0 | 916 |
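With a keyboard-wedge scanner as described above, reading scans is just line-oriented input handling. A sketch (the function name is mine; pass it sys.stdin in a real program):

```python
def read_scans(stream):
    """Yield one decoded barcode per line, skipping blank lines.

    A keyboard-wedge scanner types the code followed by a return,
    so each scan arrives as one line on the stream.
    """
    for line in stream:
        code = line.strip()
        if code:
            yield code
```

Each yielded code could then be inserted into the MySQL table the poster already has working.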
6,473,925 | 2011-06-24T21:25:00.000 | 99 | 0 | 0 | 0 | 0 | python,mysql,sqlalchemy,pyramid | 0 | 30,554,677 | 0 | 14 | 0 | false | 0 | 0 | There is a method on the engine object to fetch the list of table names: engine.table_names() | 1 | 133 | 0 | 1 | I couldn't find any information about this in the documentation, but how can I get a list of tables created in SQLAlchemy?
I used the class method to create the tables. | SQLAlchemy - Getting a list of tables | 0 | 1 | 1 | 1 | 0 | 133,023 |
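For reference, engine.table_names() was deprecated in later SQLAlchemy releases; the inspector interface does the same job. A sketch using an in-memory SQLite database just for illustration:

```python
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite://")  # throwaway in-memory database
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY)"))
    conn.execute(text("CREATE TABLE posts (id INTEGER PRIMARY KEY)"))

tables = inspect(engine).get_table_names()
print(sorted(tables))  # ['posts', 'users']
```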
6,483,466 | 2011-06-26T10:50:00.000 | 2 | 0 | 1 | 0 | 0 | python,user-interface,gtk,pygtk,modal-dialog | 0 | 6,483,508 | 0 | 1 | 1 | true | 0 | 0 | Would GtkDialog.run() be the method you need? You "run" the dialog, at the point where you need to ask the user, and when it returns you have your answer. | 1 | 0 | 0 | 0 | The program I am writing can edit a single project at a time. This means that opening a new file/project implies closing the previous one. Now what I want to achieve is the following workflow:
User has uncommitted changes to a project he never previously saved, so the project doesn't have a file name yet.
User presses "open saved project".
A dialogue "A" pops up and says: "Your current project has uncommitted changes, what would you like to do? Abort new project operation, discard changes to current project, or save them?".
User selects "save" dialogue.
Dialogue A closes.
Dialogue B1 (file chooser configured for save operation) pops up.
User select file name for project to save.
Dialogue B1 closes, project gets saved.
Dialogue B2 (same file chooser but configured for load operation) pops up.
User select file to open.
Dialogue B2 closes, project is loaded.
So really, in the above example steps 3 to 8 are a sort of "interruption" in the obvious workflow of opening a saved project, so when dialogue A and B1 open, the obvious workflow is halted, and it is resumed when those dialogue get responded.
My question is: how do I implement this mechanism of halting/resuming the normal flow of operation? So far the way I implemented it is via a stack onto which - any time I open a popup dialogue - I push the "resume-from-here callback", and any time I respond I pop the callback from it.
...yet it seems that mine is a very common scenario for which there should be an easier method (maybe a specific PyGTK function!).
Many thanks in advance for your help/time! | How to resume program workflow at the right place after dialogue response? | 0 | 1.2 | 1 | 0 | 0 | 79 |
6,488,806 | 2011-06-27T05:10:00.000 | 2 | 1 | 0 | 1 | 0 | python | 0 | 6,531,642 | 0 | 1 | 0 | false | 1 | 0 | You don't need to spawn another process, that would complicate things a lot. Here's how I would do it based on something similar in my current project :
Create a WSGI application, which can live behind a web server.
Create a request handler (or "view") that is accessible from any URL mapping as long as the user doesn't have a session ID cookie.
In the request handler, the user can choose the target application and with it, the hostname, port number, etc. This request handler creates a connection to the target application, for example using httplib and assigns a session ID to it. It sets the session ID cookie and redirects the user back to the same page.
Now when your user hits the application, you can use the already open http connection to redirect the query. Note that WSGI supports passing back an open file-like object as response, including those provided by httplib, for increased performance. | 1 | 1 | 0 | 0 | I need to write a cgi page which will act like a reverse proxy between the user and another page (mbean). The issue is that each mbean uses different port and I do not know ahead of time which port user will want to hit.
Therefore what I need to do is the following:
A) Give user a page which will allow him to choose which application he wants to hit
B) spawn a reverse proxy based on the information above (which gives me the port, server, etc.)
C) the user connects to the remote mbean page via the reverse proxy and therefore never "leaves" the original page.
The reason for C is that the user does not have direct access to any of the internal apps and only has access to the initial port 80.
I looked at twisted and it appears to me that it can do the job. What I don't know is how to spawn a twisted process from within CGI so that it can establish the connection and keep further connections within the reverse proxy framework.
BTW I am not married to twisted, if there is another tool that would do the job better, I am all ears. I can't do things like mod_proxy (for instance) since the wide range of ports would make configuration rather silly (at around 1000 different proxy settings). | python reverse proxy spawning via cgi | 0 | 0.379949 | 1 | 0 | 0 | 488 |
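Step A of the workflow above (serving a chooser page until a session exists) fits in a few lines of WSGI; everything below is a made-up skeleton, not the poster's code:

```python
def app(environ, start_response):
    """Tiny WSGI app: show the chooser page unless a session cookie exists."""
    cookie = environ.get("HTTP_COOKIE", "")
    if "session_id=" in cookie:
        # here the proxy would reuse the stored backend connection
        body = b"proxying to the selected mbean would happen here"
    else:
        body = b"<form>choose a target application</form>"
    start_response("200 OK", [("Content-Type", "text/html"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

The session-to-backend mapping and the httplib pass-through described in the answer would hang off the first branch.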
6,489,663 | 2011-06-27T07:10:00.000 | 0 | 0 | 1 | 0 | 0 | python,file,search,directory,match | 0 | 6,489,701 | 0 | 5 | 0 | false | 0 | 0 | I do not know the logic of python for this but I would do the following:
Loop through each file in the directory, get the names as strings, and check to see if they begin with "myfile"; by splitting the string on the "." you can compare what you are looking for with what you have. | 1 | 3 | 0 | 0 | Let's say I want to search for a file named "myfile" in a folder named "myfolder"; how can I do it without knowing the format of the file?
Another question: how to list all files of a folder and all the files of its subfolders (and so on)?
Thank you. | Find file in folder without knowing the extension? | 0 | 0 | 1 | 0 | 0 | 3,120 |
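Both questions have short standard-library answers: glob handles the unknown extension and os.walk handles the recursive listing. A sketch (function names are mine):

```python
import glob
import os

def find_without_extension(folder, stem):
    """All files in folder named stem.<anything>, extension unknown."""
    return glob.glob(os.path.join(folder, stem + ".*"))

def list_all_files(folder):
    """Every file under folder, including all of its subfolders."""
    paths = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            paths.append(os.path.join(root, name))
    return paths
```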