Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,077,649 | 2009-07-03T03:24:00.000 | 3 | 0 | 1 | 0 | python,windows,wxpython | 1,077,697 | 2 | true | 0 | 1 | I would advise doing it in two steps.
The first step is to serialize your prefs to a string. For that you can:
a) use any XML library, or write the XML by hand, to produce a string and read it back the same way;
b) just use the pickle module to dump your prefs object to a string;
c) generate some other string representation of the prefs that you can read back, e.g. YAML, ConfigParser or JSON; JSON is actually a good option, since simplejson makes it so easy.
Once your methods to convert to and from a string are ready, you just need to store that string somewhere persistent so you can read it back next time. For that you can:
a) use wx.Config, which saves to the registry on Windows and to other places depending on the platform, so you don't have to worry about where it saves and can read the values back in a platform-independent way (if you wish, you can also use wx.Config directly for saving and reading individual prefs);
b) save the prefs string to a file in the folder the OS assigns to your app, e.g. the Application Data folder on Windows.
The benefit of serializing to a string and then using wx.Config to store it is that you can easily change where the data is saved in the future; e.g. if you ever need to upload the prefs, you can just upload the string. | 1 | 1 | 0 | Developing a project of mine I realize I have a need for some level of persistence across sessions, for example when a user executes the application, changes some preferences and then closes the app. The next time the user executes the app, be it after a reboot or 15 minutes, I would like to be able to retain the preferences that had been changed.
My question relates to this persistence. Whether programming an application using the win32 API or the MFC Framework .. or using the newer tools for higher level languages such as wxPython or wxRuby, how does one maintain the type of persistence I refer to? Is it done as a temporary file written to the disk? Is it saved into some registry setting? Is there some other layer it is stored in that I am unaware of? | Windows Application Programming & wxPython | 1.2 | 0 | 0 | 729 |
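Below is a minimal sketch of the serialize-then-wx.Config approach described in the accepted answer above. The app name "MyApp", the key name "prefs", and the sample values are assumptions for illustration; json can be swapped for simplejson on Python versions before 2.6.

```python
import json  # simplejson on Python < 2.6
import wx

def save_prefs(prefs):
    # Serialize the prefs dict to a string, then let wx.Config decide
    # where to persist it (registry on Windows, config files elsewhere).
    config = wx.Config("MyApp")
    config.Write("prefs", json.dumps(prefs))
    config.Flush()

def load_prefs():
    config = wx.Config("MyApp")
    raw = config.Read("prefs", "{}")  # default to an empty dict
    return json.loads(raw)

if __name__ == "__main__":
    app = wx.App(False)  # wx.Config expects an App object on some platforms
    save_prefs({"volume": 0.8, "last_dir": "C:/music"})
    print load_prefs()
```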
1,078,166 | 2009-07-03T07:23:00.000 | -1 | 0 | 0 | 0 | python,django,mod-python | 1,078,235 | 4 | false | 1 | 0 | Use the test server included with Django (e.g. ./manage.py runserver 0.0.0.0:8080). It will do most of what you need during development. The only drawback is that it cannot handle simultaneous requests with multi-threading.
I've heard of a trick of setting Apache's maximum number of server instances to 1 so that every code change is reflected immediately, but since you said you're running other services, that may not apply in your case. | 2 | 8 | 0 | I'm running a Django app on Apache + mod_python. When I make some changes to the code, sometimes they take effect immediately, other times they don't, until I restart Apache. However I don't really want to do that since it's a production server running other stuff too. Is there some other way to force that?
Just to make it clear, since I see some people get it wrong, I'm talking about a production environment. For development I'm using Django's development server, of course. | Restarting a Django application running on Apache + mod_python | -0.049958 | 0 | 0 | 7,641 |
1,078,166 | 2009-07-03T07:23:00.000 | 1 | 0 | 0 | 0 | python,django,mod-python | 1,078,289 | 4 | false | 1 | 0 | You can reduce the number of requests each Apache child process serves to 1 by setting "MaxRequestsPerChild 1" in your httpd.conf file, but do that only on a test server, not in production.
or
If you don't want to kill existing connections and still restart Apache, you can restart it "gracefully" by running "apache2ctl graceful"; all existing connections will be allowed to complete. | 2 | 8 | 0 | I'm running a Django app on Apache + mod_python. When I make some changes to the code, sometimes they take effect immediately, other times they don't, until I restart Apache. However I don't really want to do that since it's a production server running other stuff too. Is there some other way to force that?
Just to make it clear, since I see some people get it wrong, I'm talking about a production environment. For development I'm using Django's development server, of course. | Restarting a Django application running on Apache + mod_python | 0.049958 | 0 | 0 | 7,641 |
1,078,599 | 2009-07-03T09:35:00.000 | 0 | 0 | 0 | 0 | python,wsgi,web.py | 1,406,572 | 3 | false | 1 | 0 | Here is an example of two hosted apps using cherrypy wsgi server:
#!/usr/bin/python
from web import wsgiserver
import web

# webpy wsgi app
urls = (
    '/test.*', 'index'
)

class index:
    def GET(self):
        web.header("content-type", "text/html")
        return "Hello, world1!"

application = web.application(urls, globals(), autoreload=False).wsgifunc()

# generic wsgi app
def my_blog_app(environ, start_response):
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return ['Hello world! - blog\n']

"""
# single hosted app
server = wsgiserver.CherryPyWSGIServer(
    ('0.0.0.0', 8070), application,
    server_name='www.cherrypy.example')
"""

# multiple hosted apps with WSGIPathInfoDispatcher
d = wsgiserver.WSGIPathInfoDispatcher({'/test': application, '/blog': my_blog_app})
server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8070), d)
server.start() | 1 | 7 | 0 | I've created a web.py application, and now that it is ready to be deployed, I want to run it not on web.py's built-in webserver. I want to be able to run it on different webservers, Apache or IIS, without having to change my application code. This is where WSGI is supposed to come in, if I understand it correctly.
However, I don't understand exactly what I have to do to make my application deployable on a WSGI server. Most examples assume you are using Pylons/Django/other-framework, on which you simply run some magic command which fixes everything for you.
From what I understand of the (quite brief) web.py documentation, instead of running web.application(...).run(), I should use web.application(...).wsgifunc(). And then what? | Deploying a Web.py application with WSGI, several servers | 0 | 0 | 0 | 6,152 |
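Following up on the question's "And then what?": a minimal sketch of the module a WSGI host (mod_wsgi on Apache, isapi-wsgi on IIS, and so on) can be pointed at; the URL map and class are placeholders.

```python
import web

urls = (
    '/', 'index',
)

class index:
    def GET(self):
        return "Hello from web.py under WSGI"

# Instead of calling .run(), expose the WSGI callable for the host
# server to pick up (mod_wsgi looks for a module-level "application").
application = web.application(urls, globals()).wsgifunc()
```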
1,079,022 | 2009-07-03T11:50:00.000 | 1 | 0 | 0 | 0 | python,flash,google-app-engine,authentication | 1,353,761 | 1 | false | 1 | 0 | Of course this is possible.
You just need to use Flash to post an HTTP request to your server,
and your server can communicate with Flash in several ways, such as XML, HTML, AMF or even JSON (I am not sure).
I recommend using PyAMF on the server side to build native support for Flash. | 1 | 0 | 0 | I am putting my old flash site into GAE. I want to use Google's user authentication too. Now, I want to put Google's login box inside the flash instead of redirecting to Google's login page. I want the same thing for the forgot-password flow.
Is it possible to do this? How to do this? | How to put Google login box inside flash in GAE? | 0.197375 | 0 | 0 | 237 |
1,079,690 | 2009-07-03T14:49:00.000 | 0 | 0 | 1 | 0 | python,cpython | 1,080,330 | 4 | false | 0 | 1 | There is a fantastically bulletproof way. Let people create the object, and have Python crash. That should stop them doing it pretty efficiently. ;)
Also you can underscore the class name, to indicate that it should be internal. (At least, I assume you can create underscored classnames from C too, I haven't actually ever done it.) | 1 | 2 | 0 | What's the correct way to prevent invoking (creating an instance of) a C type from Python?
I've considered providing a tp_init that raises an exception, but as I understand it that would still allow __new__ to be called directly on the type.
A C function returns instances of this type -- that's the only way instances of this type are intended to be created.
Edit: My intention is that users of my type will get an exception if they accidentally use it wrongly. The C code is such that calling a function on an object incorrectly created from Python would crash. I realise this is unusual: all of my C extension types so far have worked nicely when instantiated from Python. My question is whether there is a usual way to provide this restriction. | Preventing invoking C types from Python | 0 | 0 | 0 | 339 |
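As a pure-Python illustration of the point about __new__ (not the C API itself): putting the check in object construction rather than initialization is what blocks direct instantiation, since __new__/tp_new runs even when __init__/tp_init is bypassed. The names below are made up.

```python
class _HandleType(object):
    """Instances are only meant to be produced by internal factory code."""

    def __new__(cls, *args, **kwargs):
        raise TypeError("cannot create '%s' instances directly" % cls.__name__)

def _make_handle():
    # Internal factory: bypass __new__ via object.__new__, much as the
    # C factory function would allocate the object itself with tp_alloc.
    return object.__new__(_HandleType)

h = _make_handle()      # works
try:
    _HandleType()       # raises TypeError
except TypeError, e:
    print e
```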
1,079,983 | 2009-07-03T16:06:00.000 | 76 | 0 | 1 | 0 | python | 1,080,192 | 8 | true | 0 | 0 | Smalltalk-80, released by Xerox in 1980, used self. Objective-C (early 1980s) layers Smalltalk features over C, so it uses self too. Modula-3 (1988), Python (late 1980s), and Ruby (mid 1990s) also follow this tradition.
C++, also dating from the early 1980s, chose this instead of self. Since Java was designed to be familiar to C/C++ developers, it uses this too.
Smalltalk uses the metaphor of objects sending messages to each other, so "self" just indicates that the object is sending a message to itself. | 4 | 53 | 0 | Python is the language I know the most, and strangely I still don't know why I'm typing "self" and not "this" like in Java or PHP.
I know that Python is older than Java, but I can't figure out where this convention comes from, especially since you can use any name instead of "self": the program will work fine.
So where does this convention come from? | Why do pythonistas call the current reference "self" and not "this"? | 1.2 | 0 | 0 | 5,250 |
1,079,983 | 2009-07-03T16:06:00.000 | 19 | 0 | 1 | 0 | python | 1,079,995 | 8 | false | 0 | 0 | Smalltalk, which predates Java of course. | 4 | 53 | 0 | Python is the language I know the most, and strangely I still don't know why I'm typing "self" and not "this" like in Java or PHP.
I know that Python is older than Java, but I can't figure out where this convention comes from, especially since you can use any name instead of "self": the program will work fine.
So where does this convention come from? | Why do pythonistas call the current reference "self" and not "this"? | 1 | 0 | 0 | 5,250 |
1,079,983 | 2009-07-03T16:06:00.000 | 6 | 0 | 1 | 0 | python | 1,081,235 | 8 | false | 0 | 0 | I think that since it's explicitly declared, it makes more sense to see an actual argument called "self" rather than "this". From a grammatical point of view at least, "self" is not as context-dependent as "this".
I don't know if I made myself clear enough, but anyway this is just a subjective opinion. | 4 | 53 | 0 | Python is the language I know the most, and strangely I still don't know why I'm typing "self" and not "this" like in Java or PHP.
I know that Python is older than Java, but I can't figure out where this convention comes from, especially since you can use any name instead of "self": the program will work fine.
So where does this convention come from? | Why do pythonistas call the current reference "self" and not "this"? | 1 | 0 | 0 | 5,250 |
1,079,983 | 2009-07-03T16:06:00.000 | 8 | 0 | 1 | 0 | python | 8,620,086 | 8 | false | 0 | 0 | Python follows in Smalltalk's footsteps in this respect: self is used in Smalltalk as well. I guess the real question should be 'why did Bjarne decide to use this in C++'... | 4 | 53 | 0 | Python is the language I know the most, and strangely I still don't know why I'm typing "self" and not "this" like in Java or PHP.
I know that Python is older than Java, but I can't figure out where this convention comes from, especially since you can use any name instead of "self": the program will work fine.
So where does this convention come from? | Why do pythonistas call the current reference "self" and not "this"? | 1 | 0 | 0 | 5,250 |
1,080,719 | 2009-07-03T20:22:00.000 | 2 | 0 | 0 | 0 | python,windows,screenshot | 3,586,035 | 7 | false | 0 | 1 | There is a way to do this, using the PrintWindow function. It causes the window to redraw itself on another surface. | 4 | 6 | 0 | So I can use PIL to grab a screenshot of the desktop, and then use pywin32 to get its rectangle and crop out the part I want. However, if there's something in front of the window I want, it'll occlude the application I wanted a screenshot of. Is there any way to get what windows would say an application is currently displaying? It has that data somewhere, even when other windows are in front of it. | Screenshot an application, regardless of what's in front of it? | 0.057081 | 0 | 0 | 12,921 |
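A hedged sketch of the PrintWindow approach from the answer above, using pywin32 plus ctypes; the window title is just an example and error handling is omitted. Whether a given application repaints itself correctly while occluded depends on the application.

```python
import ctypes
import win32gui
import win32ui

def grab_window(title, filename='window.bmp'):
    hwnd = win32gui.FindWindow(None, title)
    left, top, right, bottom = win32gui.GetWindowRect(hwnd)
    width, height = right - left, bottom - top

    # Create a memory device context compatible with the window.
    hwnd_dc = win32gui.GetWindowDC(hwnd)
    mfc_dc = win32ui.CreateDCFromHandle(hwnd_dc)
    save_dc = mfc_dc.CreateCompatibleDC()
    bmp = win32ui.CreateBitmap()
    bmp.CreateCompatibleBitmap(mfc_dc, width, height)
    save_dc.SelectObject(bmp)

    # Ask the window to render itself into our DC, even if it is occluded.
    ctypes.windll.user32.PrintWindow(hwnd, save_dc.GetSafeHdc(), 0)
    bmp.SaveBitmapFile(save_dc, filename)

    # Clean up GDI objects.
    win32gui.DeleteObject(bmp.GetHandle())
    save_dc.DeleteDC()
    mfc_dc.DeleteDC()
    win32gui.ReleaseDC(hwnd, hwnd_dc)

grab_window("Calculator")
```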
1,080,719 | 2009-07-03T20:22:00.000 | 1 | 0 | 0 | 0 | python,windows,screenshot | 1,270,114 | 7 | false | 0 | 1 | Maybe you can position the app offscreen, then take the screenshot, then put it back? | 4 | 6 | 0 | So I can use PIL to grab a screenshot of the desktop, and then use pywin32 to get its rectangle and crop out the part I want. However, if there's something in front of the window I want, it'll occlude the application I wanted a screenshot of. Is there any way to get what windows would say an application is currently displaying? It has that data somewhere, even when other windows are in front of it. | Screenshot an application, regardless of what's in front of it? | 0.028564 | 0 | 0 | 12,921 |
1,080,719 | 2009-07-03T20:22:00.000 | 1 | 0 | 0 | 0 | python,windows,screenshot | 1,080,741 | 7 | false | 0 | 1 | If you can, try saving the order of the windows, then move your app to the front, screenshot, and move it back really quickly. Might produce a bit of annoying flicker, but it might be better than nothing. | 4 | 6 | 0 | So I can use PIL to grab a screenshot of the desktop, and then use pywin32 to get its rectangle and crop out the part I want. However, if there's something in front of the window I want, it'll occlude the application I wanted a screenshot of. Is there any way to get what windows would say an application is currently displaying? It has that data somewhere, even when other windows are in front of it. | Screenshot an application, regardless of what's in front of it? | 0.028564 | 0 | 0 | 12,921 |
1,080,719 | 2009-07-03T20:22:00.000 | 0 | 0 | 0 | 0 | python,windows,screenshot | 1,080,733 | 7 | false | 0 | 1 | IIRC, not in Windows pre-vista - with Aero each window has it's own buffer, but before that, you would just use your method, of getting the rectangle. I'm not sure if pywin32 has Aero support or not. | 4 | 6 | 0 | So I can use PIL to grab a screenshot of the desktop, and then use pywin32 to get its rectangle and crop out the part I want. However, if there's something in front of the window I want, it'll occlude the application I wanted a screenshot of. Is there any way to get what windows would say an application is currently displaying? It has that data somewhere, even when other windows are in front of it. | Screenshot an application, regardless of what's in front of it? | 0 | 0 | 0 | 12,921 |
1,080,832 | 2009-07-03T21:07:00.000 | 4 | 0 | 1 | 1 | python,python-c-api,postmortem-debugging | 1,080,869 | 1 | true | 0 | 0 | It's lots of work, but of course it can be done, especially if you have all the symbols. Look at the header files for the specific version of Python (and compilation options in use to build it): they define PyObject as a struct which includes, first and foremost, a pointer to a type. Lots of macros are used, so you may want to run the compile of that Python from sources again, with exactly the same flags but in addition a -E to stop after preprocessing, so you can refer to the specific C code that made the bits you're seeing in the core dump.
A type object has, among many other things, a string (array of char) that's its name, and from it you can infer what exactly objects of that type contain -- be it content directly, or maybe some content (such as a length, i.e. number of items) and a pointer to the actual data.
I've done such super-advanced post-mortem debugging a couple of times (starting with VERY precise knowledge of the Python versions involved and all the prepared preprocessed sources &c) and each time it took me a day or two (were I still a freelance and charging by the hour, if I had to bid on such a task I'd say at least 20 hours -- at my not-cheap hourly rates!-).
IOW, it's worth it only if it's really truly the only way out of some very costly pickle. On the plus side, it WILL teach you more about Python's internals than you ever thought was there, even after memorizing every line of the sources. Good luck, you'll need some!!! | 1 | 2 | 0 | Is there anyway to discover the python value of a PyObject* from a corefile in gdb | Accessing Python Objects in a Core Dump | 1.2 | 0 | 0 | 1,101 |
1,081,698 | 2009-07-04T06:59:00.000 | 0 | 1 | 0 | 1 | python,linux,upgrade,centos,python-2.6 | 1,085,044 | 4 | false | 0 | 0 | easy_install is a good option, but there is also a low-level way of installing a module; just:
unpack the module source into some directory
type "python setup.py install"
Of course you should do this with the required Python interpreter version installed; to check which one you have, type:
python -V | 3 | 3 | 0 | I have a problem of upgrading python from 2.4 to 2.6:
I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum").
So what is the right way to migrate/install modules to python2.6? | Transition from Python2.4 to Python2.6 on CentOS, module migration problem | 0 | 0 | 0 | 6,767 |
1,081,698 | 2009-07-04T06:59:00.000 | 0 | 1 | 0 | 1 | python,linux,upgrade,centos,python-2.6 | 1,081,705 | 4 | false | 0 | 0 | There are a couple of options...
If the modules will run under Python 2.6, you can simply create symbolic links to them from the 2.6 site-packages directory to the 2.4 site-packages directory.
If they will not run under 2.6, then you may need to re-compile them against 2.6, or install up-to-date versions of them. Just make sure you are using 2.6 when calling "python setup.py"
...
You may want to post this on serverfault.com, if you run into additional challenges. | 3 | 3 | 0 | I have a problem of upgrading python from 2.4 to 2.6:
I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum").
So what is the right way to migrate/install modules to python2.6? | Transition from Python2.4 to Python2.6 on CentOS, module migration problem | 0 | 0 | 0 | 6,767 |
1,081,698 | 2009-07-04T06:59:00.000 | 0 | 1 | 0 | 1 | python,linux,upgrade,centos,python-2.6 | 1,082,045 | 4 | false | 0 | 0 | Some Python libs may still not be accessible, since on some distributions Python 2.6's site-packages is replaced by dist-packages.
The only way in that case is to move all the stuff generated in site-packages (e.g. by make install) to dist-packages and create a symlink. | 3 | 3 | 0 | I have a problem of upgrading python from 2.4 to 2.6:
I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum").
So what is the right way to migrate/install modules to python2.6? | Transition from Python2.4 to Python2.6 on CentOS, module migration problem | 0 | 0 | 0 | 6,767 |
1,082,692 | 2009-07-04T18:02:00.000 | 0 | 0 | 1 | 1 | python,linux,python-3.x | 1,103,562 | 4 | false | 0 | 0 | Why do you need to use make install at all? After having done make to compile python 3.x, just move the python folder somewhere, and create a symlink to the python executable in your ~/bin directory. Add that directory to your path if it isn't already, and you'll have a working python development version ready to be used. As long as the symlink itself is not named python (I've named mine py), you'll never experience any clashes.
An added benefit is that if you want to change to a new release of python 3.x, for example if you're following the beta releases, you simply download, compile and replace the folder with the new one.
It's slightly messy, but the messiness is confined to one directory, and I find it much more convenient than thinking about altinstalls and the like. | 2 | 2 | 0 | I'm currently toying with python at home and I'm planning to switch to python 3.1. The fact is that I have some scripts that use python 2.6 and I can't convert them since they use some modules that aren't available for python 3.1 atm. So I'm considering installing python 3.1 along with my python 2.6. I only found people on the internet that achieve that by compiling python from the source and use make altinstall instead of the classic make install. Anyway, I think compiling from the source is a bit complicated. I thought running two different versions of a program is easy on Linux (I run fedora 11 for the record). Any hint?
Thanks for reading. | Running both python 2.6 and 3.1 on the same machine | 0 | 0 | 0 | 5,310 |
1,082,692 | 2009-07-04T18:02:00.000 | 1 | 0 | 1 | 1 | python,linux,python-3.x | 1,082,698 | 4 | false | 0 | 0 | You're not supposed to need to run them together.
2.6 already has all of the 3.0 features. You can enable those features with from __future__ import statements.
It's much simpler run 2.6 (with some from __future__ import) until everything you need is in 3.x, then switch. | 2 | 2 | 0 | I'm currently toying with python at home and I'm planning to switch to python 3.1. The fact is that I have some scripts that use python 2.6 and I can't convert them since they use some modules that aren't available for python 3.1 atm. So I'm considering installing python 3.1 along with my python 2.6. I only found people on the internet that achieve that by compiling python from the source and use make altinstall instead of the classic make install. Anyway, I think compiling from the source is a bit complicated. I thought running two different versions of a program is easy on Linux (I run fedora 11 for the record). Any hint?
Thanks for reading. | Running both python 2.6 and 3.1 on the same machine | 0.049958 | 0 | 0 | 5,310 |
1,083,250 | 2009-07-05T00:35:00.000 | 0 | 0 | 1 | 0 | python,json | 1,083,318 | 4 | false | 0 | 0 | evaling JSON is a bit like trying to run XML through a C++ compiler.
eval is meant to evaluate Python code. Although there are some syntactical similarities, JSON isn't Python code. Heck, not only is it not Python code, it's not code to begin with. Therefore, even if you can get away with it for your use-case, I'd argue that it's a bad idea conceptually. Python is an apple, JSON is orange-flavored soda. | 2 | 20 | 0 | Best practices aside, is there a compelling reason not to do this?
I'm writing a post-commit hook for use with a Google Code project, which provides commit data via a JSON object. GC provides an HMAC authentication token along with the request (outside the JSON data), so by validating that token I gain high confidence that the JSON data is both benign (as there's little point in distrusting Google) and valid.
My own (brief) investigations suggest that JSON happens to be completely valid Python, with the exception of the "\/" escape sequence — which GC doesn't appear to generate.
So, as I'm working with Python 2.4 (i.e. no json module), eval() is looking really tempting.
Edit: For the record, I am very much not asking if this is a good idea. I'm quite aware that it isn't, and I very much doubt I'll ever use this technique for any future projects even if I end up using it for this one. I just wanted to make sure that I know what kind of trouble I'll run into if I do. :-) | Running JSON through Python's eval()? | 0 | 0 | 0 | 17,161 |
1,083,250 | 2009-07-05T00:35:00.000 | 3 | 0 | 1 | 0 | python,json | 28,351,359 | 4 | false | 0 | 0 | One major difference is that a boolean in JSON is true|false, but Python uses True|False.
The most important reason not to do this can be generalized: eval should never be used to interpret external input since this allows for arbitrary code execution. | 2 | 20 | 0 | Best practices aside, is there a compelling reason not to do this?
I'm writing a post-commit hook for use with a Google Code project, which provides commit data via a JSON object. GC provides an HMAC authentication token along with the request (outside the JSON data), so by validating that token I gain high confidence that the JSON data is both benign (as there's little point in distrusting Google) and valid.
My own (brief) investigations suggest that JSON happens to be completely valid Python, with the exception of the "\/" escape sequence — which GC doesn't appear to generate.
So, as I'm working with Python 2.4 (i.e. no json module), eval() is looking really tempting.
Edit: For the record, I am very much not asking if this is a good idea. I'm quite aware that it isn't, and I very much doubt I'll ever use this technique for any future projects even if I end up using it for this one. I just wanted to make sure that I know what kind of trouble I'll run into if I do. :-) | Running JSON through Python's eval()? | 0.148885 | 0 | 0 | 17,161 |
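Since the question targets Python 2.4 (no json module), a commonly seen compromise is sketched below: map the JSON-only literals onto Python objects through eval's globals and hide builtins. This only illustrates the idea discussed above; it is not a substitute for a real parser such as simplejson, and it does not make eval safe against malicious input beyond removing builtin names.

```python
def loads_json_with_eval(text):
    # Map JSON literals onto Python equivalents and hide builtins, so
    # lookups like __import__('os') fail at name-resolution time.
    namespace = {'__builtins__': None,
                 'true': True, 'false': False, 'null': None}
    return eval(text, namespace)

data = loads_json_with_eval('{"ok": true, "count": 3, "next": null}')
print data['ok'], data['count'], data['next']
```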
1,083,364 | 2009-07-05T02:49:00.000 | 2 | 0 | 1 | 0 | python,csv | 1,083,367 | 4 | false | 0 | 0 | test = csv.reader(c.split('\n')) | 2 | 7 | 1 | i'm just testing out the csv component in python, and i am having some trouble with it.
I have a fairly standard csv string, and the default options all seem to fit with my test, but shouldn't the result group 1, 2, 3, 4 in one row and 5, 6, 7, 8 in another?
Thanks a lot for any enlightenment provided!
Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39)
[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import csv
>>> c = "1, 2, 3, 4\n 5, 6, 7, 8\n"
>>> test = csv.reader(c)
>>> for t in test:
... print t
...
['1']
['', '']
[' ']
['2']
['', '']
[' ']
['3']
['', '']
[' ']
['4']
[]
[' ']
['5']
['', '']
[' ']
['6']
['', '']
[' ']
['7']
['', '']
[' ']
['8']
[]
>>> | python csv question | 0.099668 | 0 | 0 | 850 |
1,083,364 | 2009-07-05T02:49:00.000 | 8 | 0 | 1 | 0 | python,csv | 1,083,376 | 4 | true | 0 | 0 | csv.reader expects an iterable. You gave it "1, 2, 3, 4\n 5, 6, 7, 8\n"; iteration produces characters. Try giving it ["1, 2, 3, 4\n", "5, 6, 7, 8\n"] -- iteration will produce lines. | 2 | 7 | 1 | i'm just testing out the csv component in python, and i am having some trouble with it.
I have a fairly standard csv string, and the default options all seem to fit with my test, but shouldn't the result group 1, 2, 3, 4 in one row and 5, 6, 7, 8 in another?
Thanks a lot for any enlightenment provided!
Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39)
[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import csv
>>> c = "1, 2, 3, 4\n 5, 6, 7, 8\n"
>>> test = csv.reader(c)
>>> for t in test:
... print t
...
['1']
['', '']
[' ']
['2']
['', '']
[' ']
['3']
['', '']
[' ']
['4']
[]
[' ']
['5']
['', '']
[' ']
['6']
['', '']
[' ']
['7']
['', '']
[' ']
['8']
[]
>>> | python csv question | 1.2 | 0 | 0 | 850 |
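A small follow-up sketch tying the two answers above together: give csv.reader an iterable of lines rather than the raw string, and use skipinitialspace for the blanks after the commas.

```python
import csv

c = "1, 2, 3, 4\n 5, 6, 7, 8\n"

# Give the reader one string per line, not the raw string (iterating a
# string yields single characters).  skipinitialspace drops the blank
# after each comma; strip() handles the leading blank on the second line.
lines = [line.strip() for line in c.splitlines()]
for row in csv.reader(lines, skipinitialspace=True):
    print row
# ['1', '2', '3', '4']
# ['5', '6', '7', '8']
```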
1,084,697 | 2009-07-05T19:45:00.000 | 0 | 0 | 0 | 1 | python,desktop-application,application-settings | 1,084,700 | 4 | false | 0 | 0 | Well, on Windows the APPDATA environment variable points to a user's "Application Data" folder. Not sure about OSX, though.
The correct way, in my opinion, is to do it on a per-platform basis. | 1 | 42 | 0 | I have a python desktop application that needs to store user data. On Windows, this is usually in %USERPROFILE%\Application Data\AppName\, on OSX it's usually ~/Library/Application Support/AppName/, and on other *nixes it's usually ~/.appname/.
There exists a function in the standard library, os.path.expanduser that will get me a user's home directory, but I know that on Windows, at least, "Application Data" is localized into the user's language. That might be true for OSX as well.
What is the correct way to get this location?
UPDATE:
Some further research indicates that the correct way to get this on OSX is by using the function NSSearchPathDirectory, but that's Cocoa, so it means calling the PyObjC bridge... | How do I store desktop application data in a cross platform way for python? | 0 | 0 | 0 | 8,167 |
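A commonly used per-platform sketch of the ideas in this thread, using only the standard library; the fallback choices are assumptions, and on OS X the NSSearchPathDirectory route mentioned in the question would be more precise than the hard-coded path.

```python
import os
import sys

def user_data_dir(appname):
    if sys.platform == 'win32':
        # APPDATA already points at the (localized) per-user profile folder.
        base = os.environ.get('APPDATA', os.path.expanduser('~'))
        return os.path.join(base, appname)
    elif sys.platform == 'darwin':
        return os.path.join(os.path.expanduser('~'),
                            'Library', 'Application Support', appname)
    else:
        return os.path.join(os.path.expanduser('~'), '.' + appname.lower())

print user_data_dir('AppName')
```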
1,084,935 | 2009-07-05T22:13:00.000 | 25 | 0 | 1 | 0 | c++,python,qt | 1,084,958 | 4 | true | 0 | 1 | Being an expert in both C++ and Python, my mantra has long been "Python where I can, C++ where I must": Python is faster (in term of programmer productivity and development cycle) and easier, C++ can give that extra bit of power when I have to get close to the hardware or be extremely careful about every byte or machine cycle I spend. In your situation, I would recommend Python (and the many excellent books and URLs already recommended in other answers). | 1 | 8 | 0 | I have neglected my programming skills since i left school and now i want to start a few things that are running around in my head. Qt would be the toolkit for me to use but i am undecided if i should use Python (looks to me like the easier to learn with a few general ideas about programming) or C++ (the thing to use with Qt).
In my school we learned the basics with Turbo Pascal, VB and a voluntary C course, though right now i only know a hint of all the things i learned back then.
Can you recommend me a way and a site or book (or two) that would bring me on that path (a perfect one would be one that teaches the language with help of the toolkit)?
Thank you in advance. | C++ or Python as a starting point into GUI programming? | 1.2 | 0 | 0 | 4,290 |
1,085,304 | 2009-07-06T03:33:00.000 | 3 | 0 | 0 | 0 | python,sqlalchemy | 1,087,081 | 3 | false | 0 | 0 | Automatic partitioning is a very database engine specific concept and SQLAlchemy doesn't provide any generic tools to manage partitioning. Mostly because it wouldn't provide anything really useful while being another API to learn. If you want to do database level partitioning then do the CREATE TABLE statements using custom Oracle DDL statements (see Oracle documentation how to create partitioned tables and migrate data to them). You can use a partitioned table in SQLAlchemy just like you would use a normal table, you just need the table declaration so that SQLAlchemy knows what to query. You can reflect the definition from the database, or just duplicate the table declaration in SQLAlchemy code.
Very large datasets are usually time-based, with older data becoming read-only or read-mostly and queries usually only look at data from a time interval. If that describes your data, you should probably partition your data using the date field.
There's also application level partitioning, or sharding, where you use your application to split data across different database instances. This isn't all that popular in the Oracle world due to the exorbitant pricing models. If you do want to use sharding, then look at SQLAlchemy documentation and examples for that, for how SQLAlchemy can support you in that, but be aware that application level sharding will affect how you need to build your application code. | 1 | 2 | 0 | I am not very familiar with databases, and so I do not know how to partition a table using SQLAlchemy.
Your help would be greatly appreciated. | how to make table partitions? | 0.197375 | 1 | 0 | 7,505 |
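A sketch of the "create the partitioned table with plain Oracle DDL, then just reflect it" approach from the answer above; the connection string and table name are placeholders, and the reflection style (autoload) is the old SQLAlchemy API that matches this era.

```python
from sqlalchemy import create_engine, MetaData, Table, select

# Hypothetical DSN; the partitioned table "events" is assumed to have been
# created beforehand with plain Oracle DDL (CREATE TABLE ... PARTITION BY ...).
engine = create_engine('oracle://scott:tiger@dsn')
meta = MetaData()

# Reflection doesn't care about partitioning; SQLAlchemy just sees columns.
events = Table('events', meta, autoload=True, autoload_with=engine)

conn = engine.connect()
for row in conn.execute(select([events]).limit(10)):
    print row
```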
1,085,538 | 2009-07-06T05:23:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,sqlite | 1,085,550 | 2 | false | 1 | 0 | Afaik, you can only use the GAE specific database. | 1 | 1 | 0 | How to check available Python libraries on Google App Engine & add more?
Is SQLite available or we must use GQL with their database system only?
Thank you in advance. | How to check available Python libraries on Google App Engine & add more | 0.099668 | 0 | 0 | 1,050 |
1,086,758 | 2009-07-06T12:40:00.000 | 0 | 1 | 0 | 0 | c#,python,ruby | 1,121,154 | 1 | false | 0 | 0 | QTP performs GUI recognition and interaction through Windows Handle.
So it has to be running under Citrix (i.e. installed on the same virtual machine as your Application Under Test).
If you have the above, make sure Screen Resolution, Windows Theme, Font size, and other global GUI settings are the same. | 1 | 1 | 0 | When i tried recording using QTP, every thing goes well till the application sign in. i.e it gets upto the user Id and password entry, But QTP fails to recognise afterthat. Is there any way to handle this?
Application is to be invoked using Citirx, in VPN. | How to use QTP to test the application which operates in citrix of Remote Machine? | 0 | 0 | 0 | 697 |
1,087,087 | 2009-07-06T13:49:00.000 | 0 | 0 | 1 | 1 | python,windows,taskmanager | 38,227,531 | 2 | false | 0 | 0 | When I right clicked python in task manager and clicked open file location (win 10) it opened the plex installation folder so for me it is Plex that uses Python.exe to slow my computer down, shame as I use plex all the time for my Roku. Just have to put up with a slow computer and get another one. | 1 | 1 | 0 | I have several python.exe processes running on my Vista machine and I would like to kill one process thanks to the Windows task manager.
What is the best way to find which process to be killed. I've added the 'command line' column on task manager. It can help but not in all cases.
is there a better way? | Finding a python process running on Windows with TaskManager | 0 | 0 | 0 | 12,819 |
1,087,227 | 2009-07-06T14:17:00.000 | -1 | 0 | 0 | 0 | python | 20,517,707 | 11 | false | 0 | 0 | I was having the same problem but wanted to minimize 3rd party dependencies (because this one-off script was to be executed by many users). My solution was to wrap a curl call and make sure that the exit code was 0. Worked like a charm. | 1 | 87 | 0 | I need to write a script that connects to a bunch of sites on our corporate intranet over HTTPS and verifies that their SSL certificates are valid; that they are not expired, that they are issued for the correct address, etc. We use our own internal corporate Certificate Authority for these sites, so we have the public key of the CA to verify the certificates against.
Python by default just accepts and uses SSL certificates when using HTTPS, so even if a certificate is invalid, Python libraries such as urllib2 and Twisted will just happily use the certificate.
How do I verify a certificate in Python? | Validate SSL certificates with Python | -0.01818 | 0 | 1 | 206,011 |
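A sketch of the curl-wrapping compromise from the answer above: curl performs the certificate chain and hostname checks against the internal CA bundle, and the script only looks at the exit status. The bundle path and URL are placeholders.

```python
import os
import subprocess

def cert_looks_valid(url, ca_bundle='/etc/ssl/certs/corporate-ca.pem'):
    # --cacert makes curl verify the server cert against our internal CA;
    # a non-zero exit status means verification (or the request) failed.
    rc = subprocess.call(['curl', '--silent', '--show-error',
                          '--cacert', ca_bundle,
                          '--output', os.devnull, url])
    return rc == 0

print cert_looks_valid('https://intranet.example.com/')
```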
1,087,255 | 2009-07-06T14:23:00.000 | -1 | 0 | 1 | 0 | python,dynamic,eval | 1,087,675 | 14 | false | 0 | 0 | I use exec to create a system of plugins in Python.
try:
    exec ("from " + plugin_name + " import Plugin")
    myplugin = Plugin(module_options, config=config)
except ImportError, message:
    fatal ("No such module " + plugin_name + \
           " (or no Plugin constructor) in my Python path: " + str(message))
except Exception:
    fatal ("Module " + plugin_name + " cannot be loaded: " + \
           str(sys.exc_type) + ": " + str(sys.exc_value) + \
           ".\n May be a missing or erroneous option?")
With a plugin like:
class Plugin:
    def __init__(self, options, config=None):
        self.options = options
        self.config = config
    def query(self, arg):
        ...
You will be able to call it like:
result = myplugin.query("something")
I do not think you can have plugins in Python without exec/eval. | 5 | 30 | 0 | There is an eval() function in Python I stumbled upon while playing around. I cannot think of a case when this function is needed, except maybe as syntactic sugar. Can anyone give an example? | Use of eval in Python? | -0.014285 | 0 | 0 | 42,923 |
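For comparison, the same plugin loading can be done without exec by using __import__ (assuming plugin_name is a top-level module on the Python path); a minimal sketch:

```python
def load_plugin(plugin_name, module_options, config=None):
    try:
        module = __import__(plugin_name)        # no string of code to exec
        plugin_class = getattr(module, 'Plugin')
    except (ImportError, AttributeError), message:
        raise SystemExit("No such module %s (or no Plugin class): %s"
                         % (plugin_name, message))
    return plugin_class(module_options, config=config)
```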
1,087,255 | 2009-07-06T14:23:00.000 | 1 | 0 | 1 | 0 | python,dynamic,eval | 23,626,496 | 14 | false | 0 | 0 | eval() is for a single expression, while exec() is for one or more statements.
Usually we use them to load or run extra scripts at runtime, much like sourcing a script in a bash shell.
Because they can run code held in memory as a string, you can, for example, keep an important script stored in encoded or compressed form, then decode/unzip the 'secret' and execute it. | 5 | 30 | 0 | There is an eval() function in Python I stumbled upon while playing around. I cannot think of a case when this function is needed, except maybe as syntactic sugar. Can anyone give an example?
1,087,255 | 2009-07-06T14:23:00.000 | 13 | 0 | 1 | 0 | python,dynamic,eval | 1,087,300 | 14 | false | 0 | 0 | You may want to use it to allow users to enter their own "scriptlets": small expressions (or even small functions), that can be used to customize the behavior of a complex system.
In that context, and if you do not have to care too much for the security implications (e.g. you have an educated userbase), then eval() may be a good choice. | 5 | 30 | 0 | There is an eval() function in Python I stumbled upon while playing around. I cannot think of a case when this function is needed, except maybe as syntactic sugar. Can anyone give an example? | Use of eval in Python? | 1 | 0 | 0 | 42,923 |
1,087,255 | 2009-07-06T14:23:00.000 | 6 | 0 | 1 | 0 | python,dynamic,eval | 1,087,295 | 14 | false | 0 | 0 | In the past I have used eval() to add a debugging interface to my application. I created a telnet service which dropped you into the environment of the running application. Inputs were run through eval() so you can interactively run Python commands in the application. | 5 | 30 | 0 | There is an eval() function in Python I stumbled upon while playing around. I cannot think of a case when this function is needed, except maybe as syntactic sugar. Can anyone give an example? | Use of eval in Python? | 1 | 0 | 0 | 42,923 |
1,087,255 | 2009-07-06T14:23:00.000 | 1 | 0 | 1 | 0 | python,dynamic,eval | 1,087,278 | 14 | false | 0 | 0 | eval() is not normally very useful. One of the few things I have used it for (well, it was exec() actually, but it's pretty similar) was allowing the user to script an application that I wrote in Python. If it were written in something like C++, I would have to embed a Python interpreter in the application. | 5 | 30 | 0 | There is an eval() function in Python I stumbled upon while playing around. I cannot think of a case when this function is needed, except maybe as syntactic sugar. Can anyone give an example? | Use of eval in Python? | 0.014285 | 0 | 0 | 42,923 |
1,087,975 | 2009-07-06T16:40:00.000 | 2 | 0 | 1 | 1 | macos,ipython | 1,088,164 | 2 | false | 0 | 0 | Solved by completely wiping all of site-packages.
I then re-installed Framework Python, re-installed setuptools, and easy_installed ipython FTW. | 1 | 10 | 0 | Whenever I hit the up arrow in IPython, instead of getting history, I get this set of characters "^[[A" (not including the quotes).
Hitting the down arrow gives "^[[B", and tab completion doesn't work (just enters a tab).
How can I fix this? It happens in both Terminal and iTerm.
Running OS X 10.5, Framework Python 2.5.4. Error occurs in both ipython 0.8.3 and ipython 0.9.1. pyreadline-2.5.1 egg is installed in both cases.
(edit: SSH-ing to another linux machine and using IPython there works fine. So does running the normal "python" command on the OS X machine.)
Cheers,
- Dan | IPython OS X: Up arrow gives "^[[A" | 0.197375 | 0 | 0 | 1,800 |
1,088,077 | 2009-07-06T17:00:00.000 | 12 | 0 | 0 | 0 | python,mysql,unit-testing | 1,088,090 | 2 | true | 0 | 0 | There isn't a good way to do that. You want to run your queries against a real MySQL server, otherwise you don't know if they will work or not.
However, that doesn't mean you have to run them against a production server. We have scripts that create a Unit Test database, and then tear it down once the unit tests have run. That way we don't have to maintain a static test database, but we still get to test against the real server. | 1 | 8 | 0 | I want to write some unittests for an application that uses MySQL. However, I do not want to connect to a real mysql database, but rather to a temporary one that doesn't require any SQL server at all.
Any library (I could not find anything on google)? Any design pattern? Note that DIP doesn't work since I will still have to test the injected class. | testing python applications that use mysql | 1.2 | 1 | 0 | 3,416 |
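A sketch of the create-and-tear-down approach from the accepted answer, using MySQLdb with the standard unittest module; the connection details and schema are placeholders.

```python
import unittest
import MySQLdb

class DatabaseTest(unittest.TestCase):
    def setUp(self):
        self.conn = MySQLdb.connect(host='localhost', user='test', passwd='test')
        cur = self.conn.cursor()
        cur.execute("DROP DATABASE IF EXISTS unittest_db")
        cur.execute("CREATE DATABASE unittest_db")
        cur.execute("USE unittest_db")
        cur.execute("CREATE TABLE products (id INT PRIMARY KEY, name VARCHAR(50))")

    def tearDown(self):
        # Throw the scratch database away so every test starts clean.
        self.conn.cursor().execute("DROP DATABASE unittest_db")
        self.conn.close()

    def test_insert_and_count(self):
        cur = self.conn.cursor()
        cur.execute("INSERT INTO products VALUES (1, 'widget')")
        cur.execute("SELECT COUNT(*) FROM products")
        self.assertEqual(cur.fetchone()[0], 1)

if __name__ == '__main__':
    unittest.main()
```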
1,089,576 | 2009-07-06T22:55:00.000 | 2 | 1 | 0 | 0 | python,pylons | 1,089,588 | 3 | false | 0 | 0 | Can't you just use standard Python library modules, email to prepare the mail and smtp to send it? What extra value beyond that are you looking for from the "built-in feature"? | 1 | 4 | 0 | I am using Pylons to develop an application and I want my controller actions to send emails to certain addresses. Is there a built in Pylons feature for sending email? | Sending an email from Pylons | 0.132549 | 0 | 0 | 1,532 |
1,090,022 | 2009-07-07T01:58:00.000 | 1 | 0 | 1 | 0 | python,mysql,database,database-design | 1,090,708 | 10 | false | 0 | 0 | You won't be able to do comparisons correctly. "... where x > 500" is not the same as "... where x > '500'", because as strings "500" > "100000".
Performance-wise, strings would be a hit, especially if you use indexes, as integer indexes are much faster than string indexes.
On the other hand it really depends upon your situation. If you intend to store something like phone numbers or student enrollment numbers, then it makes perfect sense to use strings. | 8 | 22 | 0 | I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.
Are there performance or other disadvantages to saving the values as strings? | Drawbacks of storing an integer as a string in a database | 0.019997 | 1 | 0 | 10,684 |
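The comparison and sorting pitfalls mentioned in the answer above are easy to demonstrate in the interpreter:

```python
>>> "500" > "100000"        # lexicographic: '5' > '1'
True
>>> 500 > 100000
False
>>> sorted(["500", "100000", "42"])
['100000', '42', '500']
>>> sorted([500, 100000, 42])
[42, 500, 100000]
```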
1,090,022 | 2009-07-07T01:58:00.000 | 0 | 0 | 1 | 0 | python,mysql,database,database-design | 1,090,924 | 10 | false | 0 | 0 | Better use independent ID and add string ID if necessary: if there's a business indicator you need to include, why make it system ID?
Main drawbacks:
Integer operations and indexing always show better performance on large scales of data (more than 1k rows in a table, not to speak of connected tables)
You'll have to make additional checks to restrict the column to numeric-only values: these can be regexes on either the client or the database side. Either way, you'll have to guarantee somehow that the value actually is an integer.
And you will create additional context layer for developers to know, and anyway someone will always mess this up :) | 8 | 22 | 0 | I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.
Are there performance or other disadvantages to saving the values as strings? | Drawbacks of storing an integer as a string in a database | 0 | 1 | 0 | 10,684 |
1,090,022 | 2009-07-07T01:58:00.000 | 3 | 0 | 1 | 0 | python,mysql,database,database-design | 1,090,390 | 10 | false | 0 | 0 | I've just spent the last year dealing with a database that has almost all IDs as strings, some with digits only, and others mixed. These are the problems:
Grossly restricted ID space. A 4 char (digit-only) ID has capacity for 10,000 unique values. A 4 byte numeric has capacity for over 4 billion.
Unpredictable ID space coverage. Once IDs start including non-digits it becomes hard to predict where you can create new IDs without collisions.
Conversion and display problems in certain circumstances, when scripting or on export for instance. If the ID gets interpreted as a number and there is a leading zero, the ID gets altered.
Sorting problems. You can't rely on the natural order being helpful.
Of course, if you run out of IDs, or don't know how to create new IDs, your app is dead. I suggest that if you can't control the format of your incoming IDs then you need to create your own (numeric) IDs and relate the user provided ID to that. You can then ensure that your own ID is reliable and unique (and numeric) but provide a user-viewable ID that can have whatever format your users want, and doesn't even have to be unique across the whole app. This is more work, but if you'd been through what I have you'd know which way to go.
Anil G | 8 | 22 | 0 | I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.
Are there performance or other disadvantages to saving the values as strings? | Drawbacks of storing an integer as a string in a database | 0.059928 | 1 | 0 | 10,684 |
1,090,022 | 2009-07-07T01:58:00.000 | 37 | 0 | 1 | 0 | python,mysql,database,database-design | 1,090,065 | 10 | true | 0 | 0 | Unless you really need the features of an integer (that is, the ability to do arithmetic), then it is probably better for you to store the product IDs as strings. You will never need to do anything like add two product IDs together, or compute the average of a group of product IDs, so there is no need for an actual numeric type.
It is unlikely that storing product IDs as strings will cause a measurable difference in performance. While there will be a slight increase in storage size, the size of a product ID string is likely to be much smaller than the data in the rest of your database row anyway.
Storing product IDs as strings today will save you much pain in the future if the data provider decides to start using alphabetic or symbol characters. There is no real downside. | 8 | 22 | 0 | I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.
Are there performance or other disadvantages to saving the values as strings? | Drawbacks of storing an integer as a string in a database | 1.2 | 1 | 0 | 10,684 |
1,090,022 | 2009-07-07T01:58:00.000 | 3 | 0 | 1 | 0 | python,mysql,database,database-design | 1,090,057 | 10 | false | 0 | 0 | It really depends on what kind of id you are talking about. If it's a code like a phone number it would actually be better to use a varchar for the id and then have your own id to be a serial for the db and use for primary key. In a case where the integer have no numerical value, varchars are generally prefered. | 8 | 22 | 0 | I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.
Are there performance or other disadvantages to saving the values as strings? | Drawbacks of storing an integer as a string in a database | 0.059928 | 1 | 0 | 10,684 |
1,090,022 | 2009-07-07T01:58:00.000 | 0 | 0 | 1 | 0 | python,mysql,database,database-design | 1,090,035 | 10 | false | 0 | 0 | Integers are more efficient from a storage and performance perspective. However, if there is a remote chance that alpha characters may be introduced, then you should use a string. In my opinion, the efficiency and performance benefits are likely to be negligible, whereas the time it takes to modify your code may not be. | 8 | 22 | 0 | I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.
Are there performance or other disadvantages to saving the values as strings? | Drawbacks of storing an integer as a string in a database | 0 | 1 | 0 | 10,684 |
1,090,022 | 2009-07-07T01:58:00.000 | 1 | 0 | 1 | 0 | python,mysql,database,database-design | 1,090,132 | 10 | false | 0 | 0 | The space an integer would take up would me much less than a string. For example 2^32-1 = 4,294,967,295. This would take 10 bytes to store, where as the integer would take 4 bytes to store. For a single entry this is not very much space, but when you start in the millions... As many other posts suggest there are several other issues to consider, but this is one drawback of the string representation. | 8 | 22 | 0 | I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.
Are there performance or other disadvantages to saving the values as strings? | Drawbacks of storing an integer as a string in a database | 0.019997 | 1 | 0 | 10,684 |
1,090,022 | 2009-07-07T01:58:00.000 | 18 | 0 | 1 | 0 | python,mysql,database,database-design | 1,090,100 | 10 | false | 0 | 0 | Do NOT consider performance. Consider meaning.
ID "numbers" are not numeric except that they are written with an alphabet of all digits.
If I have part number 12 and part number 14, what is the difference between the two? Is part number 2 or -2 meaningful? No.
Part numbers (and anything that doesn't have units of measure) are not "numeric". They're just strings of digits.
Zip codes in the US, for example. Phone numbers. Social security numbers. These are not numbers. In my town the difference between zip code 12345 and 12309 isn't the distance from my house to downtown.
Do not conflate numbers -- with units -- where sums and differences mean something with strings of digits without sums or differences.
Part ID numbers are -- properly -- strings. Not integers. They'll never be integers because they don't have sums, differences or averages. | 8 | 22 | 0 | I have id values for products that I need store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string.
Are there performance or other disadvantages to saving the values as strings? | Drawbacks of storing an integer as a string in a database | 1 | 1 | 0 | 10,684 |
1,092,379 | 2009-07-07T13:35:00.000 | 1 | 0 | 0 | 1 | python,python-3.x | 1,092,392 | 4 | false | 0 | 0 | All you can access is what the user sends to you.
The MAC address is not part of that data. | 1 | 2 | 0 | I have my web page in Python and I am able to get the IP address of the user who accesses it. We also want to get the MAC address of the user's PC. Is that possible in Python? We are using Linux PCs and want to do this on Linux. | want to get mac address of remote PC | 0.049958 | 0 | 1 | 7,015
1,093,322 | 2009-07-07T16:17:00.000 | 1 | 0 | 1 | 1 | python,version | 49,765,349 | 24 | false | 0 | 0 | Check Python version: python -V or python --version or apt-cache policy python
you can also run whereis python to see how many versions are installed. | 2 | 1,402 | 0 | How can I check what version of the Python Interpreter is interpreting my script? | How do I check what version of Python is running my script? | 0.008333 | 0 | 0 | 1,741,234 |
1,093,322 | 2009-07-07T16:17:00.000 | -1 | 0 | 1 | 1 | python,version | 17,672,432 | 24 | false | 0 | 0 | If you are working on Linux, just run the command python; the output will look like this:
Python 2.4.3 (#1, Jun 11 2009, 14:09:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more
information. | 2 | 1,402 | 0 | How can I check what version of the Python Interpreter is interpreting my script? | How do I check what version of Python is running my script? | -0.008333 | 0 | 0 | 1,741,234 |
1,093,589 | 2009-07-07T17:14:00.000 | 1 | 0 | 0 | 0 | python,sqlite,turbogears,turbogears2 | 1,422,838 | 3 | false | 0 | 0 | I am using two databases for a read-only application. The second database is a cache in case the primary database is down. I use two objects to hold the connection, metadata and compatible Table instances. The top of the view function assigns db = primary or db = secondary and the rest is just queries against db.tableA.join(db.tableB). I am not using the ORM.
The schemata are not strictly identical. The primary database needs a schema. prefix (Table(...schema='schema')) and the cache database does not. To get around this, I create my table objects in a function that takes the schema name as an argument. By calling the function once for each database, I wind up with compatible prefixed and non-prefixed Table objects.
At least in Pylons, the SQLAlchemy meta.Session is a ScopedSession. The application's BaseController in appname/lib/base.py calls Session.remove() after each request. It's probably better to have a single Session that talks to both databases, but if you don't you may need to modify your BaseController to call .remove() on each Session. | 2 | 1 | 0 | I am doing an application which will use multiple sqlite3 databases, prepopuldated with data from an external application. Each database will have the exact same tables, but with different data.
I want to be able to switch between these databases according to user input. What is the most elegant way to do that in TurboGears 2? | Switching databases in TG2 during runtime | 0.066568 | 1 | 0 | 163 |
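A rough sketch of the "same table layout, several engines, pick one per request" idea described in the answer above, without the ORM; the file names and table definition are placeholders.

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

def make_db(sqlite_path):
    engine = create_engine('sqlite:///%s' % sqlite_path)
    meta = MetaData(bind=engine)
    # Same table layout in every database, different prepopulated data.
    products = Table('products', meta,
                     Column('id', Integer, primary_key=True),
                     Column('name', String(50)))
    return {'engine': engine, 'products': products}

databases = {
    'alpha': make_db('alpha.db'),
    'beta': make_db('beta.db'),
}

def list_products(which):
    db = databases[which]           # chosen from user input
    return db['engine'].execute(db['products'].select()).fetchall()
```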
1,093,589 | 2009-07-07T17:14:00.000 | 1 | 0 | 0 | 0 | python,sqlite,turbogears,turbogears2 | 1,387,164 | 3 | false | 0 | 0 | If ALL databases have the same schema then you should be able to create several Sessions using the same model to the different DBs. | 2 | 1 | 0 | I am doing an application which will use multiple sqlite3 databases, prepopuldated with data from an external application. Each database will have the exact same tables, but with different data.
I want to be able to switch between these databases according to user input. What is the most elegant way to do that in TurboGears 2? | Switching databases in TG2 during runtime | 0.066568 | 1 | 0 | 163 |
1,094,961 | 2009-07-07T21:23:00.000 | 0 | 0 | 1 | 0 | python,specifications | 1,094,966 | 3 | false | 0 | 0 | No, python is defined by its implementation. | 1 | 45 | 0 | Is there anything in Python akin to Java's JLS or C#'s spec? | Is there a Python language specification? | 0 | 0 | 0 | 15,263 |
1,095,250 | 2009-07-07T22:50:00.000 | 2 | 0 | 0 | 0 | python,django,security,file-upload | 5,921,094 | 2 | false | 1 | 0 | Also, you might want to put the target files outside Apache's DocumentRoot directory, so that they are not reachable by any URL. Rules in .htaccess offer a certain amount of protection if they're written well, but if you're seeking for maximum security, just put the files away from web-reachable directory. | 2 | 4 | 0 | I'm creating a very simple django upload application but I want to make it as secure as possible. This is app is going to be completely one way, IE. anybody who uploads a file will never have to retrieve it. So far I've done the following:
Disallow certain file extensions (.php, .html, .py, .rb, .pl, .cgi, .htaccess, etc)
Set a maximum file size limit and file name character length limit.
Password protected the directory that the files are uploaded to (with .htaccess owned by root so the web server cannot possibly overwrite it)
Assuming that apache and mod_python are on the front end of this and that apache itself has been secured, are there any other "best practice" things I should do or consider to protect my application?
Thanks in advance. | Django file upload input validation and security | 0.197375 | 0 | 0 | 4,001 |
1,095,250 | 2009-07-07T22:50:00.000 | 7 | 0 | 0 | 0 | python,django,security,file-upload | 1,095,662 | 2 | true | 1 | 0 | Disallowing a file extension is -- potentially -- a waste of time. A unix server doesn't use the extension -- it uses ownership and permissions.
When accepting an upload, you will often rename the file to prevent it being misused. Uploaded files should be simply named "upload_xxx" with the "xxx" being a key to some database record that provides the claimed name and data type.
You have to actually read the file and confirm that the content of the file is what someone claims it is.
For example, if they claim to upload a .JPG, you have to actually read the file to be sure it's a JPEG, not an .EXE. | 2 | 4 | 0 | I'm creating a very simple django upload application but I want to make it as secure as possible. This is app is going to be completely one way, IE. anybody who uploads a file will never have to retrieve it. So far I've done the following:
Disallow certain file extensions (.php, .html, .py, .rb, .pl, .cgi, .htaccess, etc)
Set a maximum file size limit and file name character length limit.
Password protected the directory that the files are uploaded to (with .htaccess owned by root so the web server cannot possibly overwrite it)
Assuming that apache and mod_python are on the front end of this and that apache itself has been secured, are there any other "best practice" things I should do or consider to protect my application?
Thanks in advance. | Django file upload input validation and security | 1.2 | 0 | 0 | 4,001 |
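A hedged sketch of the accepted answer's advice to rename the upload and read its content before trusting it. It uses Pillow for the image check and Django's UploadedFile.chunks(); the storage directory and naming scheme are assumptions.

```python
import os
import uuid
from PIL import Image

UPLOAD_DIR = "/srv/uploads"   # assumed directory outside the web-reachable tree

def store_upload(uploaded_file):
    # Never trust the client-supplied name: use an opaque "upload_xxx" name.
    name = "upload_%s" % uuid.uuid4().hex
    path = os.path.join(UPLOAD_DIR, name)
    with open(path, "wb") as out:
        for chunk in uploaded_file.chunks():
            out.write(chunk)
    try:
        Image.open(path).verify()   # raises if the content is not really an image
    except Exception:
        os.remove(path)
        raise ValueError("uploaded file is not a valid image")
    return name                     # store this key in the database record
```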
1,095,265 | 2009-07-07T22:55:00.000 | 4 | 1 | 0 | 0 | python,matlab,file-io,import,matrix | 1,095,296 | 7 | false | 0 | 0 | You could write the matrix in Python to a CSV file and read it in MATLAB using csvread. | 2 | 51 | 1 | I'm working with Python and MATLAB right now and I have a 2D array in Python that I need to write to a file and then be able to read it into MATLAB as a matrix. Any ideas on how to do this?
Thanks! | Matrix from Python to MATLAB | 0.113791 | 0 | 0 | 69,405 |
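A small sketch of the CSV route suggested above; the file name is arbitrary.

```python
import numpy as np

a = np.arange(12).reshape(3, 4)                # example 2-D array
np.savetxt("matrix.csv", a, delimiter=",")     # plain text, one row per line

# In MATLAB:
#   M = csvread('matrix.csv');
```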
1,095,265 | 2009-07-07T22:55:00.000 | 5 | 1 | 0 | 0 | python,matlab,file-io,import,matrix | 7,737,622 | 7 | false | 0 | 0 | I would probably use numpy.savetxt('yourfile.mat',yourarray) in Python
and then yourarray = load('yourfile.mat') in MATLAB. | 2 | 51 | 1 | I'm working with Python and MATLAB right now and I have a 2D array in Python that I need to write to a file and then be able to read it into MATLAB as a matrix. Any ideas on how to do this?
Thanks! | Matrix from Python to MATLAB | 0.141893 | 0 | 0 | 69,405 |
1,095,549 | 2009-07-08T00:34:00.000 | 1 | 0 | 0 | 1 | python,windows,subprocess,sigint,signal-handling | 1,095,597 | 1 | false | 0 | 0 | Windows doesn't have the unix signals IPC mechanism.
I would look at sending a CTRL-C to the gdb process. | 1 | 5 | 0 | I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb)
It appears that only SIGTERM is available in Win32, but clearly if I run gdb from the console and press Ctrl+C, it thinks it's receiving a SIGINT. Is there a way I can fake this such that the functionality is available on all platforms?
(I am using the subprocess module, and python 2.5/2.6) | Can I send SIGINT to a Python subprocess on Windows? | 0.197375 | 0 | 0 | 5,776 |
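A hedged sketch of how a console Ctrl+C can be delivered to a child process on Windows with a modern Python (3.x); the constants used below do not exist in the 2.5/2.6 interpreters mentioned in the question, and whether gdb reacts exactly as it does to an interactive Ctrl+C is not guaranteed.

```python
import signal
import subprocess

# CREATE_NEW_PROCESS_GROUP lets us target just this child with a console event.
proc = subprocess.Popen(
    ["gdb", "--args", "target.exe"],                    # hypothetical command line
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
)

# Later, to interrupt the target: CTRL_BREAK_EVENT is delivered more reliably
# than CTRL_C_EVENT to a child running in its own process group.
proc.send_signal(signal.CTRL_BREAK_EVENT)
```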
1,096,003 | 2009-07-08T03:26:00.000 | 2 | 0 | 0 | 0 | python,arrays,list,data-structures | 1,096,290 | 2 | false | 0 | 0 | In this kind of situation, I often use a dict. It looks to me like the simplest solution.
I think that Sqlite would cause some unnecessary overhead. However, it would give you persistence of data. But I guess that your app needs to be online anyway, so you don't really need persistence. | 2 | 1 | 0 | I'm developing a Sirius XM radio desktop player in Python, in which I want the ability to display a table of all the channels and what is currently playing on each of them. This channel data is obtained from their website as a JSON string.
I'm looking for the best data structure that would allow the cleanest way to compare and update the channel data.
Arrays are problematic because I would want to be able to refer to an item by its channel number, but if I manually set each index I lose the ability to sort the array, as it would remap the index sequentially (while the channels aren't in a perfect sequence).
The other possibility (I can see) is using Sqlite, however I'm not sure if this is overkill.
Is there a cleaner approach for referring to and maintaining this data? | Comparing and updating array values in Python | 0.197375 | 0 | 0 | 873 |
1,096,003 | 2009-07-08T03:26:00.000 | 4 | 0 | 0 | 0 | python,arrays,list,data-structures | 1,096,014 | 2 | true | 0 | 0 | Why not a dict, with channel number as the key and "what's playing" as the value? Easy to make from JSON, easy to sort (sorted(thedict) sorts by channel, sorted(thedict, key=thedict.get) sorts by value -- all operations are pretty easy (if you specify better exactly what operations you want to do I'll be happy to show corresponding code samples). | 2 | 1 | 0 | I'm developing a Sirius XM radio desktop player in Python, in which I want the ability to display a table of all the channels and what is currently playing on each of them. This channel data is obtained from their website as a JSON string.
I'm looking for the best data structure that would allow the cleanest way to compare and update the channel data.
Arrays are problematic because I would want to be able to refer to an item by its channel number, but if I manually set each index I lose the ability to sort the array, as it would remap the index sequentially (while the channels aren't in a perfect sequence).
The other possibility (I can see) is using Sqlite, however I'm not sure if this is overkill.
Is there a cleaner approach for referring to and maintaining this data? | Comparing and updating array values in Python | 1.2 | 0 | 0 | 873 |
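A quick illustration of the dict approach from the accepted answer; the channel data is made up.

```python
channels = {2: "Howard Stern", 18: "The Beatles Channel", 26: "Classic Vinyl"}

print(channels[26])                               # look up by channel number

by_number = sorted(channels)                      # [2, 18, 26]
by_playing = sorted(channels, key=channels.get)   # channel numbers ordered by value

# Refreshing from a new JSON dump is a plain dict update.
channels.update({2: "Different show", 33: "First Wave"})
```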
1,096,300 | 2009-07-08T05:49:00.000 | 2 | 0 | 0 | 0 | python,routes,pylons | 1,096,315 | 2 | false | 1 | 0 | HTML forms are designed to go to a specific URL with a query string (?q=) or an equivalent body in a POST -- either you write clever and subtle Javascript to intercept the form submission and rewrite it in your preferred weird way, or use redirect_to (and the latter will take some doing).
But why do you need such weird behavior rather than just following the standard?! Please explain your use case in terms of application-level needs...! | 2 | 2 | 0 | The behavior I propose:
A user loads up my "search" page, www.site.com/search, types their query into a form, clicks submit, and then ends up at www.site.com/search/the+query instead of www.site.com/search?q=the+query. I've gone through a lot of the Pylons documentation already and just finished reading the Routes documentation and am wondering if this can/should happen at the Routes layer. I have already set up my application to perform a search when given www.site.com/search/the+query, but can not figure out how to send a form to this destination.
Or is this something that should happen inside a controller with a redirect_to()?
Or somewhere else?
Followup:
This is less an actual "set in stone" desire right now and more a curiosity for brainstorming future features. I'm designing an application which uses a Wikipedia dump and have observed that when a user performs a search on Wikipedia and the search isn't too ambiguous it redirects directly to an article link: en.wikipedia.org/wiki/Apple. It is actually performing an in-between HTTP 302 redirect step, and I am just curious if there's a more elegant/cute way of doing this in Pylons. | Pylons/Routes rewrite POST or GET to fancy URL | 0.197375 | 0 | 0 | 1,116 |
1,096,300 | 2009-07-08T05:49:00.000 | 2 | 0 | 0 | 0 | python,routes,pylons | 1,097,943 | 2 | true | 1 | 0 | You can send whatever content you want for any URL, but if you want a particular URL to appear in the browser's address bar, you have to use a redirect. This is independent of whether you use Pylons, Django or Rails on the server side.
In the handling for /search (whether POST or GET), one would normally run the query in the back end, and if there was only one search result (or one overwhelmingly relevant result) you would redirect to that result, otherwise to a page showing links to the top N results. That's just normal practice, AFAIK. | 2 | 2 | 0 | The behavior I propose:
A user loads up my "search" page, www.site.com/search, types their query into a form, clicks submit, and then ends up at www.site.com/search/the+query instead of www.site.com/search?q=the+query. I've gone through a lot of the Pylons documentation already and just finished reading the Routes documentation and am wondering if this can/should happen at the Routes layer. I have already set up my application to perform a search when given www.site.com/search/the+query, but can not figure out how to send a form to this destination.
Or is this something that should happen inside a controller with a redirect_to()?
Or somewhere else?
Followup:
This is less an actual "set in stone" desire right now and more a curiosity for brainstorming future features. I'm designing an application which uses a Wikipedia dump and have observed that when a user performs a search on Wikipedia and the search isn't too ambiguous it redirects directly to an article link: en.wikipedia.org/wiki/Apple. It is actually performing an in-between HTTP 302 redirect step, and I am just curious if there's a more elegant/cute way of doing this in Pylons. | Pylons/Routes rewrite POST or GET to fancy URL | 1.2 | 0 | 0 | 1,116 |
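A hedged sketch of the redirect approach: the form submits to /search, and the controller issues the in-between 302 to the pretty URL. The imports and helper names follow older Pylons (0.9.x) conventions and vary between versions, and the controller is simplified (it would normally subclass BaseController).

```python
from pylons import request
from pylons.controllers.util import redirect_to

class SearchController(object):

    def submit(self):
        # The form posts (or GETs) to /search with a 'q' field.
        query = request.params.get("q", "").strip()
        # 302-redirect to the pretty URL, e.g. /search/the+query
        return redirect_to(controller="search", action="results", query=query)

    def results(self, query):
        # Reached via a route like:
        #   map.connect('/search/{query}', controller='search', action='results')
        ...
```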
1,096,396 | 2009-07-08T06:29:00.000 | 0 | 0 | 1 | 0 | python,datetime | 1,096,415 | 7 | false | 0 | 0 | I have always been very happy using the datetime package. You get a lot of stuff for free, and it's pretty easy to create datetime objects as well as calculate durations, etc. | 1 | 1 | 0 | My use case is that I'm just making a website that I want people all over the world to be able to use, and I want to be able to say things like "This happened at 5:33pm on October 5" and also "This happened 5 minutes ago," etc.
Should I use the datetime module?
Or just strftime?
Or something fancier that isn't part of the std distro of Python? | What is the easiest way to handle dates/times in Python? | 0 | 0 | 0 | 1,653 |
1,097,908 | 2009-07-08T12:59:00.000 | 8 | 0 | 1 | 0 | python,sorting,unicode,internationalization,collation | 5,013,415 | 11 | false | 0 | 0 | A summary and extended answer:
locale.strcoll under Python 2 and locale.strxfrm do in fact solve the problem and do a good job, assuming that you have the locale in question installed. I tested it under Windows too, where the locale names are confusingly different, but on the other hand Windows seems to have all supported locales installed by default.
ICU doesn't necessarily do this better in practice, but it does far more. Most notably it has support for splitters that can break text in different languages into words, which is very useful for languages that don't have word separators. You'll need a corpus of words to use as a base for the splitting, though, because that isn't included.
It also has long names for the locales so you can get pretty display names, support for calendars other than Gregorian (although I'm not sure the Python interface supports that), and tons of other more or less obscure locale features.
So, all in all: if you want to sort alphabetically in a locale-dependent way, you can use the locale module, unless you have special requirements or also need more locale-dependent functionality such as a word splitter.
Is there a library for this? I couldn't find anything. Preferably sorting should have language support so it understands that åäö should be sorted after z in Swedish, but that ü should be sorted by u, etc. Unicode support is thereby pretty much a requirement.
If there is no library for it, what is the best way to do this? Just make a mapping from letter to an integer value and map the string to an integer list with that? | How do I sort unicode strings alphabetically in Python? | 1 | 0 | 0 | 46,328 |
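A minimal example of the locale-based collation described in the summary answer above; it assumes a Swedish UTF-8 locale is installed (the locale name differs on Windows).

```python
import locale

locale.setlocale(locale.LC_COLLATE, "sv_SE.UTF-8")   # assumed to be installed

words = ["zebra", "åsna", "äpple", "öl", "apelsin"]
print(sorted(words, key=locale.strxfrm))
# ['apelsin', 'zebra', 'åsna', 'äpple', 'öl']  -- å, ä, ö sort after z in Swedish
```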
1,098,643 | 2009-07-08T15:03:00.000 | 7 | 0 | 1 | 0 | python,list,indexoutofboundsexception | 43,755,530 | 7 | false | 0 | 0 | The way Python indexing works is that it starts at 0, so the first number of your list would be [0]. You would have to print[52], as the starting index is 0 and
therefore line 53 is [52].
Subtract 1 from the value and you should be fine. :) | 4 | 49 | 0 | I'm telling my program to print out line 53 of an output. Is this error telling me that there aren't that many lines and therefore can not print it out? | Does "IndexError: list index out of range" when trying to access the N'th item mean that my list has less than N items? | 1 | 0 | 0 | 665,232 |
1,098,643 | 2009-07-08T15:03:00.000 | 5 | 0 | 1 | 0 | python,list,indexoutofboundsexception | 1,098,658 | 7 | false | 0 | 0 | Yes. The sequence doesn't have the 54th item. | 4 | 49 | 0 | I'm telling my program to print out line 53 of an output. Is this error telling me that there aren't that many lines and therefore can not print it out? | Does "IndexError: list index out of range" when trying to access the N'th item mean that my list has less than N items? | 0.141893 | 0 | 0 | 665,232 |
1,098,643 | 2009-07-08T15:03:00.000 | 3 | 0 | 1 | 0 | python,list,indexoutofboundsexception | 1,098,670 | 7 | false | 0 | 0 | That's right. 'list index out of range' most likely means you are referring to n-th element of the list, while the length of the list is smaller than n. | 4 | 49 | 0 | I'm telling my program to print out line 53 of an output. Is this error telling me that there aren't that many lines and therefore can not print it out? | Does "IndexError: list index out of range" when trying to access the N'th item mean that my list has less than N items? | 0.085505 | 0 | 0 | 665,232 |
1,098,643 | 2009-07-08T15:03:00.000 | 2 | 0 | 1 | 0 | python,list,indexoutofboundsexception | 37,785,115 | 7 | false | 0 | 0 | Always keep in mind when you want to overcome this error, the default value of indexing and range starts from 0, so if total items is 100 then l[99] and range(99) will give you access up to the last element.
whenever you get this type of error please cross check with items that comes between/middle in range, and insure that their index is not last if you get output then you have made perfect error that mentioned above. | 4 | 49 | 0 | I'm telling my program to print out line 53 of an output. Is this error telling me that there aren't that many lines and therefore can not print it out? | Does "IndexError: list index out of range" when trying to access the N'th item mean that my list has less than N items? | 0.057081 | 0 | 0 | 665,232 |
1,099,178 | 2009-07-08T16:30:00.000 | 1 | 0 | 1 | 0 | python,regex,recursive-regex | 1,099,197 | 6 | false | 0 | 0 | I'd recommend removing the nesting from the regex itself, looping through the results and performing regexes on that. | 3 | 19 | 0 | I seem to remember that Regular Expressions in DotNet have a special mechanism that allows for the correct matching of nested structures, like the grouping in "( (a ( ( c ) b ) ) ( d ) e )".
What is the python equivalent of this feature? Can this be achieved using regular expressions with some workaround? (Though it seems to be the sort of problem that current implementations of regex aren't designed for) | Matching Nested Structures With Regular Expressions in Python | 0.033321 | 0 | 0 | 14,013 |
1,099,178 | 2009-07-08T16:30:00.000 | 3 | 0 | 1 | 0 | python,regex,recursive-regex | 1,099,215 | 6 | false | 0 | 0 | Python doesn't support recursion in regular expressions. So equivalents to .NET's balancing groups or PCRE regex in Perl are not immediately possible in Python.
Like you said yourself: this is NOT a problem you should really be solving with a single regex. | 3 | 19 | 0 | I seem to remember that Regular Expressions in DotNet have a special mechanism that allows for the correct matching of nested structures, like the grouping in "( (a ( ( c ) b ) ) ( d ) e )".
What is the python equivalent of this feature? Can this be achieved using regular expressions with some workaround? (Though it seems to be the sort of problem that current implementations of regex aren't designed for) | Matching Nested Structures With Regular Expressions in Python | 0.099668 | 0 | 0 | 14,013 |
1,099,178 | 2009-07-08T16:30:00.000 | 22 | 0 | 1 | 0 | python,regex,recursive-regex | 1,101,030 | 6 | false | 0 | 0 | Regular expressions cannot parse nested structures. Nested structures are not regular, by definition. They cannot be constructed by a regular grammar, and they cannot be parsed by a finite state automaton (a regular expression can be seen as a shorthand notation for an FSA).
Today's "regex" engines sometimes support some limited "nesting" constructs, but from a technical standpoint, they shouldn't be called "regular" anymore. | 3 | 19 | 0 | I seem to remember that Regular Expressions in DotNet have a special mechanism that allows for the correct matching of nested structures, like the grouping in "( (a ( ( c ) b ) ) ( d ) e )".
What is the python equivalent of this feature? Can this be achieved using regular expressions with some workaround? (Though it seems to be the sort of problem that current implementations of regex aren't designed for) | Matching Nested Structures With Regular Expressions in Python | 1 | 0 | 0 | 14,013 |
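Since a single regular expression cannot do it, a small hand-rolled stack parser handles the grouping from the question; this sketch assumes whitespace-separated tokens and does no error checking for unbalanced parentheses.

```python
def parse_groups(text):
    """Turn '( (a (b) ) c )' into nested Python lists."""
    stack = [[]]
    for token in text.replace("(", " ( ").replace(")", " ) ").split():
        if token == "(":
            stack.append([])          # open a new group
        elif token == ")":
            group = stack.pop()       # close it and attach to the parent
            stack[-1].append(group)
        else:
            stack[-1].append(token)
    return stack[0]

print(parse_groups("( (a ( ( c ) b ) ) ( d ) e )"))
# [[['a', [['c'], 'b']], ['d'], 'e']]
```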
1,099,305 | 2009-07-08T16:55:00.000 | -6 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,100,263 | 13 | false | 1 | 0 | All of this is TOTALLY "IMHO"
In Ruby there is ONE web-application framework, so it is the only framework that is advertised for that language.
Python has had several since inception, just to name a few: Zope, Twisted, Django, TurboGears (itself a mix of other framework components), Pylons (a kind of clone of the Rails framework), and so on. None of them is supported Python-community-wide as "THE one to use", so all the "groundswell" is spread over several projects.
Ruby has its community size solely, or at least in large part, because of Rails.
Both Python and Ruby are perfectly capable of doing the job as a web applications framework. Use the one YOU (and your potential development team) like and can align on. | 10 | 91 | 0 | Python and Ruby are usually considered to be close cousins (though with quite different historical baggage) with similar expressiveness and power. But some have argued that the immense success of the Rails framework really has a great deal to do with the language it is built on: Ruby itself. So why would Ruby be more suitable for such a framework than Python? | Why is Ruby more suitable for Rails than Python? | -1 | 0 | 0 | 14,973 |
1,099,305 | 2009-07-08T16:55:00.000 | 1 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,099,320 | 13 | false | 1 | 0 | Some have said that the type of metaprogramming required to make ActiveRecord (a key component of rails) possible is easier and more natural to do in ruby than in python - I do not know python yet;), so i cannot personally confirm this statement.
I have used rails briefly, and its use of catchalls/interceptors and dynamic evaluation/code injection does allow you to operate at a much higher level of abstraction than some of the other frameworks (before its time). I have little to no experience with Python's framework - but i've heard it's equally capable - and that the python community does a great job supporting and fostering pythonic endeavors. | 10 | 91 | 0 | Python and Ruby are usually considered to be close cousins (though with quite different historical baggage) with similar expressiveness and power. But some have argued that the immense success of the Rails framework really has a great deal to do with the language it is built on: Ruby itself. So why would Ruby be more suitable for such a framework than Python? | Why is Ruby more suitable for Rails than Python? | 0.015383 | 0 | 0 | 14,973 |
1,099,305 | 2009-07-08T16:55:00.000 | 4 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,123,890 | 13 | false | 1 | 0 | I suppose we should not discuss the language features per se but rather the accents the respective communities make on the language features. For example, in Python, re-opening a class is perfectly possible but it is not common; in Ruby, however, re-opening a class is something of the daily practice. this allows for a quick and straightforward customization of the framework to the current requirement and renders Ruby more favorable for Rails-like frameworks than any other dynamic language.
Hence my answer: common use of re-opening classes. | 10 | 91 | 0 | Python and Ruby are usually considered to be close cousins (though with quite different historical baggage) with similar expressiveness and power. But some have argued that the immense success of the Rails framework really has a great deal to do with the language it is built on: Ruby itself. So why would Ruby be more suitable for such a framework than Python? | Why is Ruby more suitable for Rails than Python? | 0.061461 | 0 | 0 | 14,973 |
1,099,305 | 2009-07-08T16:55:00.000 | -1 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 2,040,561 | 13 | false | 1 | 0 | Two answers :
a. Because rails was written for ruby.
b. For the same reason C more suitable for Linux than Ruby | 10 | 91 | 0 | Python and Ruby are usually considered to be close cousins (though with quite different historical baggage) with similar expressiveness and power. But some have argued that the immense success of the Rails framework really has a great deal to do with the language it is built on: Ruby itself. So why would Ruby be more suitable for such a framework than Python? | Why is Ruby more suitable for Rails than Python? | -0.015383 | 0 | 0 | 14,973 |
1,099,305 | 2009-07-08T16:55:00.000 | 1 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,204,243 | 13 | false | 1 | 0 | I think that the syntax is cleaner and Ruby, for me at least, is just a lot more "enjoyable"- as subjective as that is! | 10 | 91 | 0 | Python and Ruby are usually considered to be close cousins (though with quite different historical baggage) with similar expressiveness and power. But some have argued that the immense success of the Rails framework really has a great deal to do with the language it is built on: Ruby itself. So why would Ruby be more suitable for such a framework than Python? | Why is Ruby more suitable for Rails than Python? | 0.015383 | 0 | 0 | 14,973 |
1,099,305 | 2009-07-08T16:55:00.000 | 15 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,104,198 | 13 | false | 1 | 0 | The real answer is neither Python or Ruby are better/worse candidates for a web framework. If you want objectivity you need to write some code in both and see which fits your personal preference best, including community.
Most people who argue for one or other have either never used the other language seriously or are 'voting' for their personal preference.
I would guess most people settle on which ever they come in to contact with first because it teaches them something new (MVC, testing, generators etc.) or does something better (plugins, templating etc). I used to develop with PHP and came in to contact with RubyOnRails. If I had have known about MVC before finding Rails I would more than likely never left PHP behind. But once I started using Ruby I enjoyed the syntax, features etc.
If I had have found Python and one of its MVC frameworks first I would more than likely be praising that language instead! | 10 | 91 | 0 | Python and Ruby are usually considered to be close cousins (though with quite different historical baggage) with similar expressiveness and power. But some have argued that the immense success of the Rails framework really has a great deal to do with the language it is built on: Ruby itself. So why would Ruby be more suitable for such a framework than Python? | Why is Ruby more suitable for Rails than Python? | 1 | 0 | 0 | 14,973 |
1,099,305 | 2009-07-08T16:55:00.000 | 53 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,100,340 | 13 | false | 1 | 0 | The python community believes that doing things the most simple and straight forward way possible is the highest form of elegance. The ruby community believes doing things in clever ways that allow for cool code is the highest form of elegance.
Rails is all about the idea that if you follow certain conventions, loads of other things magically happen for you. That jibes really well with the Ruby way of looking at the world, but doesn't really follow the Python way.
1,099,305 | 2009-07-08T16:55:00.000 | 8 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,127,435 | 13 | false | 1 | 0 | Because Rails is developed to take advantage of Rubys feature set.
A similarly gormless question would be "Why is Python more suitable for Django than Ruby is?". | 10 | 91 | 0 | Python and Ruby are usually considered to be close cousins (though with quite different historical baggage) with similar expressiveness and power. But some have argued that the immense success of the Rails framework really has a great deal to do with the language it is built on: Ruby itself. So why would Ruby be more suitable for such a framework than Python? | Why is Ruby more suitable for Rails than Python? | 1 | 0 | 0 | 14,973 |
1,099,305 | 2009-07-08T16:55:00.000 | 11 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,099,454 | 13 | false | 1 | 0 | Python has a whole host of Rails-like frameworks. There are so many that a joke goes that during the typical talk at PyCon at least one web framework will see the light.
The argument that Ruby's metaprogramming would make it better suited is IMO incorrect. You don't need metaprogramming for frameworks like this.
So I think we can conclude that Ruby is not better (and likely not worse) than Python in this respect.
1,099,305 | 2009-07-08T16:55:00.000 | 26 | 0 | 1 | 0 | python,ruby-on-rails,ruby,web-frameworks | 1,141,854 | 13 | false | 1 | 0 | Is this debate a new "vim versus emacs" debate?
I am a Python/Django programmer and thus far I've never found a problem in that language/framework that would lead me to switch to Ruby/Rails.
I can imagine that it would be the same if I were experienced with Ruby/Rails.
Both have similar philosophy and do the job in a fast and elegant way. The better choice is what you already know. | 10 | 91 | 0 | Python and Ruby are usually considered to be close cousins (though with quite different historical baggage) with similar expressiveness and power. But some have argued that the immense success of the Rails framework really has a great deal to do with the language it is built on: Ruby itself. So why would Ruby be more suitable for such a framework than Python? | Why is Ruby more suitable for Rails than Python? | 1 | 0 | 0 | 14,973 |
1,100,924 | 2009-07-08T22:41:00.000 | 17 | 0 | 0 | 0 | python,mysql,django | 1,101,410 | 5 | true | 1 | 0 | You have managers in Django.
Use a customized manager to do creates and maintain the FK relationships.
The manager can update the counts as the sets of children are updated.
If you don't want to make customized managers, just extend the save method. Everything you want to do for denormalizing counts and sums can be done in save.
You don't need signals. Just extend save. | 2 | 16 | 0 | I'm developing a simple web app, and it makes a lot of sense to store some denormalized data.
Imagine a blogging platform that keeps track of Comments, and the BlogEntry model has a "CommentCount" field that I'd like to keep up to date.
One way of doing this would be to use Django signals.
Another way of doing this would be to put hooks directly in my code that creates and destroys Comment objects to synchronously call some methods on BlogEntry to increment/decrement the comment count.
I suppose there are other pythonic ways of accomplishing this with decorators or some other voodoo.
What is the standard Design Pattern for denormalizing in Django? In practice, do you also have to write consistency checkers and data fixers in case of errors? | Best way to denormalize data in Django? | 1.2 | 0 | 0 | 5,270 |
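A hedged sketch of the "just extend save" suggestion from the accepted answer; the model names mirror the question, the F() expressions keep the increment atomic, and note that bulk queryset deletes bypass the overridden delete().

```python
from django.db import models

class BlogEntry(models.Model):
    title = models.CharField(max_length=200)
    comment_count = models.IntegerField(default=0)   # denormalized counter

class Comment(models.Model):
    entry = models.ForeignKey(BlogEntry, on_delete=models.CASCADE)
    body = models.TextField()

    def save(self, *args, **kwargs):
        created = self.pk is None
        super().save(*args, **kwargs)
        if created:
            BlogEntry.objects.filter(pk=self.entry_id).update(
                comment_count=models.F("comment_count") + 1)

    def delete(self, *args, **kwargs):
        entry_id = self.entry_id
        super().delete(*args, **kwargs)
        BlogEntry.objects.filter(pk=entry_id).update(
            comment_count=models.F("comment_count") - 1)
```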
1,100,924 | 2009-07-08T22:41:00.000 | 5 | 0 | 0 | 0 | python,mysql,django | 1,101,004 | 5 | false | 1 | 0 | The first approach (signals) has the advantage to loose the coupling between models.
However, signals are somehow more difficult to maintain, because dependencies are less explicit (at least, in my opinion).
If the correctness of the comment count is not so important, you could also think of a cron job that will update it every n minutes.
However, no matter the solution, denormalizing will make maintenance more difficult; for this reason I would try to avoid it as much as possible, resorting instead to caches or other techniques -- for example, using with comments.count as cnt in templates may improve performance quite a lot.
Then, if everything else fails, and only in that case, think about what could be the best approach for the specific problem. | 2 | 16 | 0 | I'm developing a simple web app, and it makes a lot of sense to store some denormalized data.
Imagine a blogging platform that keeps track of Comments, and the BlogEntry model has a "CommentCount" field that I'd like to keep up to date.
One way of doing this would be to use Django signals.
Another way of doing this would be to put hooks directly in my code that creates and destroys Comment objects to synchronously call some methods on BlogEntry to increment/decrement the comment count.
I suppose there are other pythonic ways of accomplishing this with decorators or some other voodoo.
What is the standard Design Pattern for denormalizing in Django? In practice, do you also have to write consistency checkers and data fixers in case of errors? | Best way to denormalize data in Django? | 0.197375 | 0 | 0 | 5,270 |
1,102,134 | 2009-07-09T06:23:00.000 | 3 | 0 | 0 | 1 | python,architecture,groovy,uml,dynamic-languages | 1,102,219 | 2 | false | 1 | 0 | UML isn't too well equipped to handle such things, but you can still use it to communicate your design if you are willing to do some mental mapping. You can find an isomorphism between most dynamic concepts and UMLs static object-model.
For example you can think of a closure as an object implementing a one method interface. It's probably useful to model such interfaces as something a bit more specific than interface Callable { call(args[0..*]: Object) : Object }.
Duck typing can similarly though of as an interface. If you have a method that takes something that can quack, model it as taking an object that is a specialization of the interface _interface Quackable { quack() }.
You can use your imagination for other concepts. Keep in mind that the purpose of design diagrams is to communicate ideas. So don't get overly pedantic about modeling everything 100%, think what do you want your diagrams to say, make sure that they say that and eliminate any extraneous detail that would dilute the message. And if you use some concepts that aren't obvious to your target audience, explain them.
Also, if UML really can't handle what you want to say, try other ways to visualize your message. UML is only a good choice because it gives you a common vocabulary so you don't have to explain every concept on your diagram. | 1 | 1 | 0 | Say I want to write a large application in groovy, and take advantage of closures, categories and other concepts (that I regularly use to separate concerns). Is there a way to diagram or otherwise communicate in a simple way the architecture of some of this stuff? How do you detail (without verbose documentation) the things that a map of closures might do, for example? I understand that dynamic language features aren't usually recommended on a larger scale because they are seen as complex but does that have to be the case? | Is there a way to plan and diagram an architecture for dynamic scripting languages like groovy or python? | 0.291313 | 0 | 0 | 359 |
1,103,487 | 2009-07-09T12:19:00.000 | 37 | 0 | 0 | 0 | python,django,jython | 13,682,964 | 5 | false | 1 | 0 | The most clear-cut way is:
import platform
platform.python_implementation()
'CPython'
By default, most of the time the underlying interpreter is plain CPython, which is also arguably the most efficient one :) | 2 | 14 | 0 | I'm working on a small django project that will be deployed in a servlet container later. But development is much faster if I work with cPython instead of Jython. So what I want to do is test if my code is running on cPython or Jython in my settings.py so I can tell it to use the appropriate db driver (postgresql_psycopg2 or doj.backends.zxjdbc.postgresql). Is there a simple way to do this? | Can I detect if my code is running on cPython or Jython? | 1 | 0 | 0 | 6,376 |
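Applied to the question's settings.py, the detection might look like the hedged sketch below; the sys.platform check is the older idiom that also works on Jython versions predating platform.python_implementation(), and the backend names are simply the ones from the question.

```python
# settings.py sketch: pick the DB driver based on the interpreter.
import platform
import sys

RUNNING_ON_JYTHON = sys.platform.startswith("java")
# On recent interpreters this is equivalent to:
#   platform.python_implementation() == "Jython"

if RUNNING_ON_JYTHON:
    DATABASE_ENGINE = "doj.backends.zxjdbc.postgresql"
else:
    DATABASE_ENGINE = "postgresql_psycopg2"
```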
1,103,487 | 2009-07-09T12:19:00.000 | 3 | 0 | 0 | 0 | python,django,jython | 1,103,680 | 5 | false | 1 | 0 | You'll have unique settings.py for every different environment.
Your development settings.py should not be your QA/Test or production settings.py.
What we do is this.
We have a "master" settings.py that contains the installed apps and other items which don't change much.
We have environment-specific files with names like settings_dev_win32.py and settings_qa_linux2.py and
settings_co_linux2.py, etc.
Each of these environment-specific settings imports the "master" settings, and then overrides things like the DB driver. Since each settings file is unique to an environment, there are no if-statements and no detecting which environment we're running in.
Production (in Apache, using mod_wsgi and mysql) uses the settings_prod_linux2.py file and no other.
Development (in Windows using sqlite) uses the settings_dev_win32.py file. | 2 | 14 | 0 | I'm working on a small django project that will be deployed in a servlet container later. But development is much faster if I work with cPython instead of Jython. So what I want to do is test if my code is running on cPython or Jython in my settiings.py so I can tell it to use the appropriate db driver (postgresql_psycopg2 or doj.backends.zxjdbc.postgresql). Is there a simple way to do this? | Can I detect if my code is running on cPython or Jython? | 0.119427 | 0 | 0 | 6,376 |
1,103,940 | 2009-07-09T13:36:00.000 | 3 | 1 | 1 | 0 | python,file,upload,wsgi | 1,104,012 | 3 | true | 0 | 0 | wsgi.input should be a file like stream object. You can read from that in blocks, and write those blocks directly to disk. That shouldn't use up any significant memory.
Or maybe I misunderstood the question? | 2 | 1 | 0 | I need to upload a potentially huge plain-text file to a very simple wsgi-app without eating up all available memory on the server. How do I accomplish that? I want to use standard python modules and avoid third-party modules if possible. | Upload a potentially huge textfile to a plain WSGI-server in Python | 1.2 | 0 | 0 | 1,494 |
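A hedged sketch of the accepted answer: read wsgi.input in fixed-size blocks and write each block straight to disk, so memory use stays bounded no matter how large the upload is. The chunk size and temp-file handling are arbitrary choices.

```python
import os
import tempfile

CHUNK = 64 * 1024  # 64 KiB per read

def app(environ, start_response):
    length = int(environ.get("CONTENT_LENGTH") or 0)
    stream = environ["wsgi.input"]
    fd, path = tempfile.mkstemp(prefix="upload_")
    remaining = length
    with os.fdopen(fd, "wb") as out:
        while remaining > 0:
            block = stream.read(min(CHUNK, remaining))
            if not block:
                break                      # client closed early
            out.write(block)
            remaining -= len(block)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("stored %d bytes at %s\n" % (length - remaining, path)).encode()]
```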
1,103,940 | 2009-07-09T13:36:00.000 | 2 | 1 | 1 | 0 | python,file,upload,wsgi | 1,209,507 | 3 | false | 0 | 0 | If you use the cgi module to parse the input (which most frameworks use, e.g., Pylons, WebOb, CherryPy) then it will automatically save the uploaded file to a temporary file, and not load it into memory. | 2 | 1 | 0 | I need to upload a potentially huge plain-text file to a very simple wsgi-app without eating up all available memory on the server. How do I accomplish that? I want to use standard python modules and avoid third-party modules if possible. | Upload a potentially huge textfile to a plain WSGI-server in Python | 0.132549 | 0 | 0 | 1,494 |
1,104,066 | 2009-07-09T13:59:00.000 | 2 | 0 | 1 | 0 | python,packaging | 1,104,081 | 5 | false | 0 | 0 | The simplest approach is to just use zip. A jar file in Java is a zipfile containing some metadata such as a manifest; but you don't necessarily need the metatada -- Python can import from inside a zipfile as long as you place that zipfile on sys.path, just as you would do for any directory. In the zipfile you can have the sources (.py files), but then Python will have to compile them on the fly each time a process first imports them; or you can have the bytecode files (.pyc or .pyo) but then you're limited to a specific release of Python and to either absence (for .pyc) or presence (for .pyo) of flag -O (or -OO).
As other answers indicated, there are formats such as .egg that enrich the zipfile with metatada in Python as well, like Java .jar, but whether in a particular use case that gives you extra value wrt a plain zipfile is a decision for you to make | 1 | 7 | 0 | Is there a way to put together Python files, akin to JAR in Java? I need a way of packaging set of Python classes and functions, but unlike a standard module, I'd like it to be in one file. | Combining module files in Python | 0.07983 | 0 | 0 | 6,882 |
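A quick illustration of the zip route described above; the package and file names are hypothetical.

```python
import sys
import zipfile

# Bundle a package into one file (roughly a jar without the manifest).
with zipfile.ZipFile("mylib.zip", "w") as z:
    z.write("mypkg/__init__.py")
    z.write("mypkg/util.py")

# Any interpreter can then import straight out of the archive.
sys.path.insert(0, "mylib.zip")
import mypkg.util
```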
1,105,429 | 2009-07-09T17:39:00.000 | 1 | 0 | 0 | 0 | python,postgresql,storage,photos,photo-management | 1,105,534 | 6 | false | 1 | 0 | a DB might be faster than a filesystem on some operations, but loading a well-identified chunk of data 100s of KB is not one of them.
also, a good frontend webserver (like nginx) is way faster than any webapp layer you'd have to write to read the blob from the DB. in some tests nginx is roughly on par with memcached for raw data serving of medium-sized files (like big HTMLs or medium-sized images).
go FS. no contest. | 4 | 11 | 0 | My specific situation
Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system.
For photos, there will be thumbnails of each.
My question
My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible.
Should I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?
Thanks in advance! | storing uploaded photos and documents - filesystem vs database blob | 0.033321 | 1 | 0 | 7,730 |
1,105,429 | 2009-07-09T17:39:00.000 | 9 | 0 | 0 | 0 | python,postgresql,storage,photos,photo-management | 1,105,444 | 6 | false | 1 | 0 | File system. No contest.
The data has to go through a lot more layers when you store it in the db.
Edit on caching:
If you want to cache the file while the user uploads it to ensure the operation finishes as soon as possible, dumping it straight to disk (i.e. file system) is about as quick as it gets. As long as the files aren't too big and you don't have too many concurrent users, you can 'cache' the file in memory, return to the user, then save to disk. To be honest, I wouldn't bother.
If you are making the files available on the web after they have been uploaded and want to cache to improve the performance, file system is still the best option. You'll get caching for free (may have to adjust a setting or two) from your web server. You won't get this if the files are in the database.
After all that it sounds like you should never store files in the database. Not the case, you just need a good reason to do so. | 4 | 11 | 0 | My specific situation
Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system.
For photos, there will be thumbnails of each.
My question
My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible.
Should I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?
Thanks in advance! | storing uploaded photos and documents - filesystem vs database blob | 1 | 1 | 0 | 7,730 |
1,105,429 | 2009-07-09T17:39:00.000 | 3 | 0 | 0 | 0 | python,postgresql,storage,photos,photo-management | 1,105,479 | 6 | false | 1 | 0 | Definitely store your images on the filesystem. One concern that folks don't consider enough when considering these types of things is bloat; cramming images as binary blobs into your database is a really quick way to bloat your DB way up. With a large database comes higher hardware requirements, more difficult replication and backup requirements, etc. Sticking your images on a filesystem means you can back them up / replicate them with many existing tools easily and simply. Storage space is far easier to increase on filesystem than in database, as well. | 4 | 11 | 0 | My specific situation
Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system.
For photos, there will be thumbnails of each.
My question
My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible.
Should I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?
Thanks in advance! | storing uploaded photos and documents - filesystem vs database blob | 0.099668 | 1 | 0 | 7,730 |
1,105,429 | 2009-07-09T17:39:00.000 | 10 | 0 | 0 | 0 | python,postgresql,storage,photos,photo-management | 1,105,453 | 6 | false | 1 | 0 | While there are exceptions to everything, the general case is that storing images in the file system is your best bet. You can easily provide caching services to the images, you don't need to worry about additional code to handle image processing, and you can easily do maintenance on the images if needed through standard image editing methods.
It sounds like your business model fits nicely into this scenario. | 4 | 11 | 0 | My specific situation
Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system.
For photos, there will be thumbnails of each.
My question
My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible.
Should I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?
Thanks in advance! | storing uploaded photos and documents - filesystem vs database blob | 1 | 1 | 0 | 7,730 |
1,106,759 | 2009-07-09T22:24:00.000 | 3 | 0 | 0 | 0 | python,egg,pypi | 1,106,782 | 3 | false | 0 | 0 | You will need to license the code. Despite what some people may think, the authors of content actually need to grant the license on their own. The Cheese Shop can't grant a license to other people to use the content until you've granted it as the copyright owner. | 1 | 14 | 0 | Suppose I'd like to upload some eggs on the Cheese Shop. Do I have any obligation? Am I required to provide a license? Am I required to provide tests? Will I have any obligations to the users of this egg ( if any ) ?
I haven't really released anything as open source 'till now, and I'd like to know the process. | Do I have any obligations if I upload an egg to the CheeseShop? | 0.197375 | 0 | 0 | 346 |
1,107,826 | 2009-07-10T05:16:00.000 | 0 | 0 | 0 | 1 | python,web-services,scheduling,long-running-processes | 1,108,019 | 5 | false | 0 | 0 | The usual design pattern for a scheduler would be:
Maintain a list of scheduled jobs, sorted by next-run-time (as Date-Time value);
When woken up, compare the first job in the list with the current time. If it's due or overdue, remove it from the list and run it. Continue working your way through the list this way until the first job is not due yet, then go to sleep for (next_job_due_date - current_time);
When a job finishes running, re-schedule it if appropriate;
After adding a job to the schedule, wake up the scheduler process.
Tweak as appropriate for your situation (eg. sometimes you might want to re-schedule jobs to run again at the point that they start running rather than finish). | 1 | 2 | 0 | I want to write a long running process (linux daemon) that serves two purposes:
responds to REST web requests
executes jobs which can be scheduled
I originally had it working as a simple program that would run through runs and do the updates which I then cron’d, but now I have the added REST requirement, and would also like to change the frequency of some jobs, but not others (let’s say all jobs have different frequencies).
I have 0 experience writing long running processes, especially ones that do things on their own, rather than responding to requests.
My basic plan is to run the REST part in a separate thread/process, and figured I’d run the jobs part separately.
I’m wondering if there exists any patterns, specifically python, (I’ve looked and haven’t really found any examples of what I want to do) or if anyone has any suggestions on where to begin with transitioning my project to meet these new requirements.
I’ve seen a few projects that touch on scheduling, but I’m really looking for real world user experience / suggestions here. What works / doesn’t work for you? | python long running daemon job processor | 0 | 0 | 0 | 3,378 |
1,107,858 | 2009-07-10T05:29:00.000 | 0 | 1 | 0 | 0 | python,testing,integration-testing | 1,108,750 | 5 | false | 1 | 0 | Few things help as much as testing.
These two quotes are really important.
"how many unit tests you can afford to write."
"From time to time embarrassing mistakes still occur,"
If mistakes occur, you haven't written enough tests. If you're still having mistakes, then you can afford to write more unit tests. It's that simple.
Each embarrassing mistake is a direct result of not writing enough unit tests.
Each management report that describes an embarrassing mistake should also describe what testing is required to prevent that mistake from ever happening again.
A unit test is a permanent prevention of further problems. | 1 | 1 | 0 | I have the 'luck' of develop and enhance a legacy python web application for almost 2 years. The major contribution I consider I made is the introduction of the use of unit test, nosestest, pychecker and CI server. Yes, that's right, there are still project out there that has no single unit test (To be fair, it has a few doctest, but are broken).
Nonetheless, progress is slow, because literally the coverage is limited by how many unit tests you can afford to write.
From time to time embarrassing mistakes still occur, and it does not look good on management reports. (e.g. even pychecker cannot catch certain "missing attribute" situation, and the program just blows up in run time)
I just want to know if anyone has any suggestions about what additional things I can do to improve the QA. The application uses WebWare 0.8.1, but I have experimentally ported it to cherrypy, so I can potentially take advantage of WSGI to conduct integration tests.
Mixed-language development and/or hiring an additional tester are also options I am considering.
Nothing is too wild, as long as it works. | Looking for testing/QA idea for Python Web Application Project | 0 | 0 | 0 | 664 |
1,108,119 | 2009-07-10T07:09:00.000 | 9 | 0 | 1 | 0 | python,windows,call-graph | 10,366,447 | 3 | false | 0 | 0 | PyCallGraph produces the dynamic graph resulting from the specific execution of a Python program and not the static graph extracted from the source code. Does anybody know of a tool which produces the static graph? | 1 | 11 | 0 | Do you know an integrated tool that will generate the call graph of a function from Python sources? I need one that is consistent and can run on Windows OS. | How can I extract the call graph of a function from Python source files? | 1 | 0 | 0 | 3,100 |
1,108,428 | 2009-07-10T08:41:00.000 | 1 | 0 | 1 | 0 | python,excel,datetime | 55,736,184 | 14 | false | 0 | 0 | excel stores dates and times as a number representing the number of days since 1900-Jan-0, if you want to get the dates in date format using python, just subtract 2 days from the days column, as shown below:
Date = sheet.cell(1,0).value-2 //in python
at column 1 in my excel, i have my date and above command giving me date values minus 2 days, which is same as date present in my excel sheet | 1 | 62 | 0 | How can I convert an Excel date (in a number format) to a proper date in Python? | How do I read a date in Excel format in Python? | 0.014285 | 0 | 0 | 134,183 |
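A slightly more robust conversion than the "subtract 2 days" trick above: Excel's day 1 is 1900-01-01 and it wrongly treats 1900 as a leap year, so an epoch of 1899-12-30 works for dates after February 1900. If the sheet is read with xlrd, xlrd.xldate_as_tuple with the workbook's datemode does the same job.

```python
import datetime

def excel_serial_to_datetime(serial):
    # The 1899-12-30 epoch absorbs Excel's off-by-one and its fictitious 1900-02-29.
    epoch = datetime.datetime(1899, 12, 30)
    return epoch + datetime.timedelta(days=serial)

print(excel_serial_to_datetime(43831.5))   # 2020-01-01 12:00:00
```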
1,108,918 | 2009-07-10T10:58:00.000 | 2 | 0 | 0 | 0 | python,mysql,perl,ip-address | 59,109,834 | 7 | false | 0 | 0 | for both ipv4 and ipv6 compatibility, use VARBINARY(16) , ipv4's will always be BINARY(4) and ipv6 will always be BINARY(16), so VARBINARY(16) seems like the most efficient way to support both. and to convert them from the normal readable format to binary, use INET6_ATON('127.0.0.1'), and to reverse that, use INET6_NTOA(binary) | 3 | 25 | 0 | We've got a healthy debate going on in the office this week. We're creating a Db to store proxy information, for the most part we have the schema worked out except for how we should store IPs. One camp wants to use 4 smallints, one for each octet and the other wants to use a 1 big int,INET_ATON.
These tables are going to be huge, so performance is key. I am in the middle here, as I normally use MS SQL and 4 small ints in my world. I don't have enough experience with storing IPs at this kind of volume.
We'll be using perl and python scripts to access the database to further normalize the data into several other tables for top talkers, interesting traffic etc.
I am sure there are some here in the community that have done something similar to what we are doing and I am interested in hearing about their experiences and which route is best, 1 big int, or 4 small ints for IP addresses.
EDIT - One of our concerns is space, this database is going to be huge like in 500,000,000 records a day. So we are trying to weigh the space issue along with the performance issue.
EDIT 2 Some of the conversation has turned over to the volume of data we are going to store...that's not my question. The question is which is the preferable way to store an IP address and why. Like I've said in my comments, we work for a large fortune 50 company. Our log files contain usage data from our users. This data in turn will be used within a security context to drive some metrics and to drive several security tools. | How to store an IP in mySQL | 0.057081 | 1 | 0 | 14,213 |
1,108,918 | 2009-07-10T10:58:00.000 | 0 | 0 | 0 | 0 | python,mysql,perl,ip-address | 56,818,264 | 7 | false | 0 | 0 | Old thread, but for the benefit of readers, consider using ip2long. It translates an IP into an integer.
Basically, you will be converting with ip2long when storing into the DB, then converting back with long2ip when retrieving from the DB. The field type in the DB will be INT, so you will save space and gain better performance compared to storing the IP as a string.
These tables are going to be huge so performance is key. I am in middle here as I normally use MS SQL and 4 small ints in my world. I don't have enough experience with this type of volume storing IPs.
We'll be using perl and python scripts to access the database to further normalize the data into several other tables for top talkers, interesting traffic etc.
I am sure there are some here in the community that have done something similar to what we are doing and I am interested in hearing about their experiences and which route is best, 1 big int, or 4 small ints for IP addresses.
EDIT - One of our concerns is space, this database is going to be huge like in 500,000,000 records a day. So we are trying to weigh the space issue along with the performance issue.
EDIT 2 Some of the conversation has turned over to the volume of data we are going to store...that's not my question. The question is which is the preferable way to store an IP address and why. Like I've said in my comments, we work for a large fortune 50 company. Our log files contain usage data from our users. This data in turn will be used within a security context to drive some metrics and to drive several security tools. | How to store an IP in mySQL | 0 | 1 | 0 | 14,213 |
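The Python-side equivalent of INET_ATON/ip2long, for code that needs to produce the integer before it reaches MySQL; the ipaddress module (Python 3.3+) covers both address families.

```python
import ipaddress

def ip_to_int(addr):
    return int(ipaddress.ip_address(addr))      # works for IPv4 and IPv6

def int_to_ip(value):
    return str(ipaddress.ip_address(value))

print(ip_to_int("10.0.0.1"))    # 167772161, the same value INET_ATON returns
print(int_to_ip(167772161))     # '10.0.0.1'
```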