Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,559,372 | 2009-10-13T10:25:00.000 | 17 | 1 | 1 | 0 | python,setuptools,distutils,pip | 1,559,521 | 2 | false | 0 | 0 | There are two completely opposing camps: one in favor of system-provided packages, and one in favor of separate installation. I'm personally in the "system packages" camp. I'll provide arguments from each side below.
Pro system packages: the system packager already cares about dependencies and compliance with overall system policies (such as file layout). System packages provide security updates while still taking care not to break compatibility - so they sometimes backport security fixes that the upstream authors did not backport. System packages are "safe" wrt. system upgrades: after a system upgrade, you probably also have a new Python version, but all your Python modules are still there if they come from a system packager. That's all personal experience with Debian.
Con system packages: not all software may be provided as a system package, or not in the latest version; installing stuff yourself into the system may break system packages. Upgrades may break your application.
Pro separate installation: Some people (in particular web application developers) argue that you absolutely need a repeatable setup, with just the packages you want, and completely decoupled from system Python. This goes beyond self-installed vs. system packages, since even for self-installed, you might still modify the system python; with the separate installation, you won't. As Lennart discusses, there are now dedicated tool chains to support this setup. People argue that only this approach can guarantee repeatable results.
Con separate installation: you need to deal with bug fixes yourself, and you need to make sure all your users use the separate installation. In the case of web applications, the latter is typically easy to achieve. | 1 | 11 | 0 | Usually I tend to install things via the package manager, for unixy stuff. However, when I programmed a lot of perl, I would use CPAN, newer versions and all that.
In general, I used to install system stuff via the package manager, and language stuff via its own package manager (gem/easy_install|pip/cpan).
Now using python primarily, I am wondering what best practice is? | Which is the most pythonic: installing python modules via a package manager ( macports, apt) or via pip/easy_install/setuptools | 1 | 0 | 0 | 1,252 |
1,560,245 | 2009-10-13T13:26:00.000 | 1 | 0 | 1 | 0 | python,api,collections,equality | 1,569,134 | 2 | true | 0 | 0 | Take a look at "collections.py". The latest version (from version control) implements an OrderedDict with an __eq__. There's also an __eq__ in sets.py | 1 | 6 | 0 | I'm working on a collection class that I want to create an __eq__ method for. It's turning out to be more nuanced than I thought it would be and I've noticed several intricacies as far as how the built-in collection classes work.
What would really help me the most is a good example. Are there any pure Python implementations of an __eq__ method either in the standard library or in any third-party libraries? | What is a good example of an __eq__ method for a collection class? | 1.2 | 0 | 0 | 8,480 |
1,561,104 | 2009-10-13T15:44:00.000 | 1 | 0 | 0 | 0 | python,pygame,pitch | 1,561,314 | 1 | true | 0 | 1 | Well, it depends on how you're doing your sounds: I'm not sure if this is possible with pygame, but SDL (which pygame is based off of) lets you have a callback to retrieve data for the sound buffer, and it's possible to change the frequency of the sine wave (or whatever) to get different tones in the callback, given that you generate the sound there.
If you're using a pre-rendered tone, or sound file, then you'll probably have to resample it to get it to play at different frequencies, although it'd be difficult to keep the same length. If you're talking about changing the timbre of the sound, then that's a whole different ballpark...
Also, it depends on how fast the sound needs to change: if you can accept a little lag in response, you could probably generate a few short sounds, and play/loop them as necessary. I'm not sure how constant replaying of sounds would impact performance/the overall audio quality, though: you'd have to make sure the end of each waveform transitions smoothly into the beginning of the next one (maybe). | 1 | 1 | 0 | Is there a way to do this? Also, I need this to work with pygame, since I want audio in my game. I'm asking this because I didn't see any tone change function in pygame. Does anyone know?
Update:
I need to do something like the noise of a car accelerating. I don't really know if it is timbre or tone. | Playing sounds with python and changing their tone during playback? | 1.2 | 0 | 0 | 1,240 |
1,562,483 | 2009-10-13T19:39:00.000 | 7 | 0 | 0 | 0 | macos,wxpython,osx-snow-leopard,py2app | 1,583,793 | 1 | false | 0 | 1 | I solved this by doing a clean Snow Leopard install, and installing python from Python.org, then the corresponding wxPython. py2app I built from source. | 1 | 4 | 0 | After upgrading to Snow Leopard, I'm having trouble building my application. It looks like py2app is building and copying over wxPython, but when I run from the buld app, it can't find wx. | py2app dropping wxpython (Snow Leopard) | 1 | 0 | 0 | 997 |
1,563,088 | 2009-10-13T21:43:00.000 | 1 | 0 | 0 | 0 | python,database,django,url,content-management-system | 1,563,359 | 2 | false | 1 | 0 | Your question is a little bit twisted, but I think what you're asking for is something similar to how django.contrib.flatpages handles this. Basically it uses middleware to catch the 404 error and then looks to see if any of the flatpages have a URL field that matches.
We did this on one site where all of the URLs were made "search engine friendly". We overrode the save() method, munged the title into this_is_the_title.html (or whatever) and then stored that in a separate table that had a URL => object class/id mapping (this means it is listed before flatpages in the middleware list). | 1 | 1 | 0 | I've produced a few Django sites but up until now I have been mapping individual views and URLs in urls.py.
Now I've tried to create a small custom CMS but I'm having trouble with the URLs. I have a database table (SQLite3) which contains code for the pages like a column for header, one for right menu, one for content.... so on, so on. I also have a column for the URL. How do I get Django to call the information in the database table from the URL stored in the column rather than having to code a view and the URL for every page (which obviously defeats the purpose of a CMS)?
If someone can just point me at the right part of the docs or a site which explains this it would help a lot.
Thanks all. | URLs stored in database for Django site | 0.099668 | 1 | 0 | 3,215 |
1,563,165 | 2009-10-13T21:58:00.000 | 11 | 0 | 0 | 0 | python,google-app-engine,xpath,beautifulsoup,mechanize | 1,563,177 | 5 | false | 1 | 0 | Beautiful Soup. | 2 | 2 | 0 | I currently have some Ruby code used to scrape some websites. I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense.
Now I'm trying to port this over to Google App Engine, and keep getting stuck.
I've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPATH.
I've tried the built-in ElementTree, but it choked on the first HTML blob I gave it when it ran into '&mdash'.
Do I keep trying to hack ElementTree in there, or do I try to use something else?
thanks,
Mark | What pure Python library should I use to scrape a website? | 1 | 0 | 1 | 1,959 |
1,563,165 | 2009-10-13T21:58:00.000 | 6 | 0 | 0 | 0 | python,google-app-engine,xpath,beautifulsoup,mechanize | 1,563,301 | 5 | false | 1 | 0 | lxml -- 100x better than elementtree | 2 | 2 | 0 | I currently have some Ruby code used to scrape some websites. I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense.
Now I'm trying to port this over to Google App Engine, and keep getting stuck.
I've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPATH.
I've tried the built-in ElementTree, but it choked on the first HTML blob I gave it when it ran into '&mdash'.
Do I keep trying to hack ElementTree in there, or do I try to use something else?
thanks,
Mark | What pure Python library should I use to scrape a website? | 1 | 0 | 1 | 1,959 |
1,563,967 | 2009-10-14T02:31:00.000 | 1 | 0 | 0 | 0 | python,sql,postgresql,psycopg2 | 1,564,226 | 5 | false | 0 | 0 | Quoting parameters manually is in general a bad idea. What if there is a mistake in the escaping rules? What if the escaping doesn't match the DB version in use? What if you just forget to escape some parameter, or erroneously assume it can't contain data requiring escaping? All of that may cause an SQL injection vulnerability. Also, the DB can have restrictions on SQL statement length while you need to pass a large data chunk for a LOB column. That's why the Python DB API and most databases allow passing parameters separately from the statement (the DB API module will transparently escape parameters if the database doesn't support this, as early MySQLdb did):
.execute(operation[,parameters]) | 3 | 17 | 0 | I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names/values? in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?
Edit: I need to generate a file with sql statements for execution later | Generate SQL statements with python | 0.039979 | 1 | 0 | 50,856 |
1,563,967 | 2009-10-14T02:31:00.000 | 2 | 0 | 0 | 0 | python,sql,postgresql,psycopg2 | 1,563,981 | 5 | false | 0 | 0 | For robustness, I recommend using prepared statements to send user-entered values, no matter what language you use. :-) | 3 | 17 | 0 | I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names/values? in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?
Edit: I need to generate a file with sql statements for execution later | Generate SQL statements with python | 0.07983 | 1 | 0 | 50,856 |
1,563,967 | 2009-10-14T02:31:00.000 | 13 | 0 | 0 | 0 | python,sql,postgresql,psycopg2 | 1,564,224 | 5 | false | 0 | 0 | SQLAlchemy provides a robust expression language for generating SQL from Python.
Like every other well-designed abstraction layer, however, the queries it generates insert data through bind variables rather than through attempting to mix the query language and the data being inserted into a single string. This approach avoids massive security vulnerabilities and is otherwise The Right Thing. | 3 | 17 | 0 | I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names/values? in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?
Edit: I need to generate a file with sql statements for execution later | Generate SQL statements with python | 1 | 1 | 0 | 50,856 |
1,564,237 | 2009-10-14T04:27:00.000 | 1 | 1 | 0 | 0 | python,gmail,imap,pop3,imaplib | 1,564,294 | 6 | false | 0 | 0 | Nobody uses POP because typically they want the extra functionality of IMAP, such as tracking message state. When that functionality is only getting in your way and needs workarounds, I think using POP's your best bet!-) | 1 | 4 | 0 | Right now its a gmail box but sooner or later I want it to scale.
I want to sync a copy of a live personal mailbox (inbox and outbox) somewhere else, but I don't want to affect the unread state of any unread messages.
What type of access will make this easiest? I can't find any information on whether IMAP will affect the read state, but it appears I can manually reset a message to unread. POP by definition doesn't affect unread state, but nobody seems to use POP to access their Gmail - why? | get email unread content, without affecting unread state | 0.033321 | 0 | 0 | 5,515 |
1,564,414 | 2009-10-14T05:30:00.000 | 3 | 0 | 1 | 0 | python | 1,564,429 | 6 | false | 0 | 0 | Look into Python's concept called sequence slicing! | 1 | 0 | 0 | How to extract substrings from a string at specified positions
For example: 'ABCDEFGHIJKLM'. I have to extract the substrings from positions 3 to 6 and 8 to 10.
Required output: DEFG, IJK
Thanks in advance. | Extracting substrings at specified positions | 0.099668 | 0 | 0 | 226 |
1,566,266 | 2009-10-14T13:34:00.000 | 3 | 0 | 1 | 0 | python | 1,566,316 | 12 | false | 0 | 0 | Ok, personal opinion here, but Append and Prepend imply precise positions in a set.
Push and Pop are really concepts that can be applied to either end of a set... Just as long as you're consistent... For some reason, to me, Push() seems like it should apply to the front of a set... | 5 | 295 | 0 | Does anyone know why Python's list.append function is not called list.push given that there's already a list.pop that removes and returns the last element (that indexed at -1) and list.append semantic is consistent with that use? | Why do python lists have pop() but not push() | 0.049958 | 0 | 0 | 259,387 |
1,566,266 | 2009-10-14T13:34:00.000 | 10 | 0 | 1 | 0 | python | 1,566,294 | 12 | false | 0 | 0 | Because "append" intuitively means "add at the end of the list". If it was called "push", then it would be unclear whether we're adding stuff at the tail or at head of the list. | 5 | 295 | 0 | Does anyone know why Python's list.append function is not called list.push given that there's already a list.pop that removes and returns the last element (that indexed at -1) and list.append semantic is consistent with that use? | Why do python lists have pop() but not push() | 1 | 0 | 0 | 259,387 |
1,566,266 | 2009-10-14T13:34:00.000 | 10 | 0 | 1 | 0 | python | 1,566,287 | 12 | false | 0 | 0 | Because it appends an element to a list? Push is usually used when referring to stacks. | 5 | 295 | 0 | Does anyone know why Python's list.append function is not called list.push given that there's already a list.pop that removes and returns the last element (that indexed at -1) and list.append semantic is consistent with that use? | Why do python lists have pop() but not push() | 1 | 0 | 0 | 259,387 |
1,566,266 | 2009-10-14T13:34:00.000 | 0 | 0 | 1 | 0 | python | 1,566,293 | 12 | false | 0 | 0 | Probably because the original version of Python (CPython) was written in C, not C++.
The idea that a list is formed by pushing things onto the back of something is probably not as well-known as the thought of appending them. | 5 | 295 | 0 | Does anyone know why Python's list.append function is not called list.push given that there's already a list.pop that removes and returns the last element (that indexed at -1) and list.append semantic is consistent with that use? | Why do python lists have pop() but not push() | 0 | 0 | 0 | 259,387 |
1,566,266 | 2009-10-14T13:34:00.000 | -2 | 0 | 1 | 0 | python | 4,752,493 | 12 | false | 0 | 0 | Push and Pop make sense in terms of the metaphor of a stack of plates or trays in a cafeteria or buffet, specifically the ones in type of holder that has a spring underneath so the top plate is (more or less... in theory) in the same place no matter how many plates are under it.
If you remove a tray, the weight on the spring is a little less and the stack "pops" up a little, if you put the plate back, it "push"es the stack down. So if you think about the list as a stack and the last element as being on top, then you shouldn't have much confusion. | 5 | 295 | 0 | Does anyone know why Python's list.append function is not called list.push given that there's already a list.pop that removes and returns the last element (that indexed at -1) and list.append semantic is consistent with that use? | Why do python lists have pop() but not push() | -0.033321 | 0 | 0 | 259,387 |
1,566,411 | 2009-10-14T14:00:00.000 | 0 | 0 | 1 | 0 | python,jython | 1,566,484 | 3 | false | 0 | 0 | I would expect that the developers will be working towards compatibility with 3.0 at this point. Since they released 2.5 in June, I'd hope for a 3.0 version no earlier than Jan.-Mar. 2010, but given their slow release cycle, it could be a while. | 3 | 4 | 0 | I have a large infrastructure that is written in Python 2.6, and I recently took a stab at porting to 3.1 (was much smoother than I expected) despite the lack of backwards compatibility.
I eventually want to integrate some of this Python code with a lot of Java based code that we have, and was thinking about giving Jython a try. However, from looking at the Jython tutorials, all the examples are in 2.6 syntax (e.g., print is not yet a function).
Does/will Jython support Python 3.x syntax at present or in the near future? Or should I roll back to 2.6 if I want to eventually use Jython? | Should I keep my Python code at 2.x or migrate to 3.x if I plan to eventually use Jython? | 0 | 0 | 0 | 237 |
1,566,411 | 2009-10-14T14:00:00.000 | 5 | 0 | 1 | 0 | python,jython | 1,566,453 | 3 | true | 0 | 0 | Jython will not support Python 3.x in the near future. For your code, I recommend to keep it in 2.x form, such that 3.x support becomes available by merely running 2to3 (i.e. with no further source changes). IOW, port to 3.x in a way so that the code remains compatible with 2.x. | 3 | 4 | 0 | I have a large infrastructure that is written in Python 2.6, and I recently took a stab at porting to 3.1 (was much smoother than I expected) despite the lack of backwards compatibility.
I eventually want to integrate some of this Python code with a lot of Java based code that we have, and was thinking about giving Jython a try. However, from looking at the Jython tutorials, all the examples are in 2.6 syntax (e.g., print is not yet a function).
Does/will Jython support Python 3.x syntax at present or in the near future? Or should I roll back to 2.6 if I want to eventually use Jython? | Should I keep my Python code at 2.x or migrate to 3.x if I plan to eventually use Jython? | 1.2 | 0 | 0 | 237 |
1,566,411 | 2009-10-14T14:00:00.000 | 0 | 0 | 1 | 0 | python,jython | 1,567,742 | 3 | false | 0 | 0 | With time, 2.x will be surpassed by the new features of 3.x. If you wish to keep programming in Python in the future, then "the sooner = the better". | 3 | 4 | 0 | I have a large infrastructure that is written in Python 2.6, and I recently took a stab at porting to 3.1 (was much smoother than I expected) despite the lack of backwards compatibility.
I eventually want to integrate some of this Python code with a lot of Java based code that we have, and was thinking about giving Jython a try. However, from looking at the Jython tutorials, all the examples are in 2.6 syntax (e.g., print is not yet a function).
Does/will Jython support Python 3.x syntax at present or in the near future? Or should I roll back to 2.6 if I want to eventually use Jython? | Should I keep my Python code at 2.x or migrate to 3.x if I plan to eventually use Jython? | 0 | 0 | 0 | 237 |
1,569,049 | 2009-10-14T21:15:00.000 | 3 | 0 | 1 | 0 | python,exception,assert | 1,569,618 | 8 | false | 0 | 0 | To see if try has any overhead I tried this experiment
here is myassert.py
def myassert(e):
    raise e

def f1(): #this is the control for the experiment
    cond=True

def f2():
    cond=True
    try:
        assert cond, "Message"
    except AssertionError, e:
        raise Exception(e.args)

def f3():
    cond=True
    assert cond or myassert(RuntimeError)

def f4():
    cond=True
    if __debug__:
        raise(RuntimeError)
$ python -O -mtimeit -n100 -r1000 -s'import myassert' 'myassert.f1()'
100 loops, best of 1000: 0.42 usec per loop
$ python -O -mtimeit -n100 -r1000 -s'import myassert' 'myassert.f2()'
100 loops, best of 1000: 0.479 usec per loop
$ python -O -mtimeit -n100 -r1000 -s'import myassert' 'myassert.f3()'
100 loops, best of 1000: 0.42 usec per loop
$ python -O -mtimeit -n100 -r1000 -s'import myassert' 'myassert.f4()'
100 loops, best of 1000: 0.42 usec per loop | 2 | 51 | 0 | Can I make assert throw an exception that I choose instead of AssertionError?
UPDATE:
I'll explain my motivation: Up to now, I've had assertion-style tests that raised my own exceptions; For example, when you created a Node object with certain arguments, it would check if the arguments were good for creating a node, and if not it would raise NodeError.
But I know that Python has a -o mode in which asserts are skipped, which I would like to have available because it would make my program faster. But I would still like to have my own exceptions. That's why I want to use assert with my own exceptions. | Making Python's `assert` throw an exception that I choose | 0.07486 | 0 | 0 | 49,170 |
1,569,049 | 2009-10-14T21:15:00.000 | 12 | 0 | 1 | 0 | python,exception,assert | 1,569,579 | 8 | false | 0 | 0 | Never use an assertion for logic! Only for optional testing checks. Remember, if Python is running with optimizations turned on, asserts aren't even compiled into the bytecode. If you're doing this, you obviously care about the exception being raised and if you care, then you're using asserts wrong in the first place. | 2 | 51 | 0 | Can I make assert throw an exception that I choose instead of AssertionError?
UPDATE:
I'll explain my motivation: Up to now, I've had assertion-style tests that raised my own exceptions; For example, when you created a Node object with certain arguments, it would check if the arguments were good for creating a node, and if not it would raise NodeError.
But I know that Python has a -o mode in which asserts are skipped, which I would like to have available because it would make my program faster. But I would still like to have my own exceptions. That's why I want to use assert with my own exceptions. | Making Python's `assert` throw an exception that I choose | 1 | 0 | 0 | 49,170 |
1,570,401 | 2009-10-15T05:13:00.000 | 1 | 0 | 0 | 1 | python,linux,usergroups | 1,571,882 | 4 | false | 0 | 0 | There are no library calls for creating a group. This is because there's really no such thing as creating a group. A GID is simply a number assigned to a process or a file. All these numbers exist already - there is nothing you need to do to start using a GID. With the appropriate privileges, you can call chown(2) to set the GID of a file to any number, or setgid(2) to set the GID of the current process (there's a little more to it than that, with effective IDs, supplementary IDs, etc).
Giving a name to a GID is done by an entry in /etc/group on basic Unix/Linux/POSIX systems, but that's really just a convention adhered to by the Unix/Linux/POSIX userland tools. Other network-based directories also exist, as mentioned by Jack Lloyd.
The man page group(5) describes the format of the /etc/group file, but it is not recommended that you write to it directly. Your distribution will have policies on how unnamed GIDs are allocated, such as reserving certain spaces for different purposes (fixed system groups, dynamic system groups, user groups, etc). The range of these number spaces differs on different distributions. These policies are usually encoded in the command-line tools that a sysadmin uses to assign unnamed GIDs.
This means the best way to add a group locally is to use the command-line tools. | 3 | 6 | 0 | I want to create a user group using python on CentOS system. When I say 'using python' I mean I don't want to do something like os.system and give the unix command to create a new group. I would like to know if there is any python module that deals with this.
Searching on the net did not reveal much about what I want, except for python user groups.. so I had to ask this.
I learned about the grp module by searching here on SO, but couldn't find anything about creating a group.
EDIT: I don't know if I have to start a new question for this, but I would also like to know how to add (existing) users to the newly created group.
Any help appreciated.
Thank you. | Create a user-group in linux using python | 0.049958 | 0 | 0 | 6,393 |
1,570,401 | 2009-10-15T05:13:00.000 | 5 | 0 | 0 | 1 | python,linux,usergroups | 1,570,429 | 4 | false | 0 | 0 | I think you should use the commandline programs from your program, a lot of care has gone into making sure that they don't break the groups file if something goes wrong.
However, the file format is quite straightforward, so it would be easy to write something yourself if you choose to go that way. | 3 | 6 | 0 | I want to create a user group using python on CentOS system. When I say 'using python' I mean I don't want to do something like os.system and give the unix command to create a new group. I would like to know if there is any python module that deals with this.
Searching on the net did not reveal much about what I want, except for python user groups.. so I had to ask this.
I learned about the grp module by searching here on SO, but couldn't find anything about creating a group.
EDIT: I don't know if I have to start a new question for this, but I would also like to know how to add (existing) users to the newly created group.
Any help appreciated.
Thank you. | Create a user-group in linux using python | 0.244919 | 0 | 0 | 6,393 |
1,570,401 | 2009-10-15T05:13:00.000 | 11 | 0 | 0 | 1 | python,linux,usergroups | 1,570,448 | 4 | true | 0 | 0 | I don't know of a python module to do it, but the /etc/group and /etc/gshadow format is pretty standard, so if you wanted you could just open the files, parse their current contents and then add the new group if necessary.
Before you go doing this, consider:
What happens if you try to add a group that already exists on the system
What happens when multiple instances of your program try to add a group at the same time
What happens to your code when an incompatible change is made to the group format a couple releases down the line
NIS, LDAP, Kerberos, ...
If you're not willing to deal with these kinds of problems, just use the subprocess module and run groupadd. It will be way less likely to break your customers machines.
Another thing you could do that would be less fragile than writing your own would be to wrap the code in groupadd.c (in the shadow package) in Python and do it that way. I don't see this buying you much versus just exec'ing it, though, and it would add more complexity and fragility to your build. | 3 | 6 | 0 | I want to create a user group using python on CentOS system. When I say 'using python' I mean I don't want to do something like os.system and give the unix command to create a new group. I would like to know if there is any python module that deals with this.
Searching on the net did not reveal much about what I want, except for python user groups.. so I had to ask this.
I learned about the grp module by searching here on SO, but couldn't find anything about creating a group.
EDIT: I don't know if I have to start a new question for this, but I would also like to know how to add (existing) users to the newly created group.
Any help appreciated.
Thank you. | Create a user-group in linux using python | 1.2 | 0 | 0 | 6,393 |
1,571,598 | 2009-10-15T10:50:00.000 | 1 | 1 | 0 | 0 | python,xml-rpc | 1,608,160 | 2 | false | 0 | 0 | I don't think you have a library specific problem. When using any library or framework you typically want to trap all errors, log them somewhere, and throw up "Oops, we're having problems. You may want to contact us at [email protected] with error number 100 and tell us what you did." So wrap your failable entry points in try/catches, create a generic logger and off you go... | 1 | 1 | 0 | Standard libraries (xmlrpclib+SimpleXMLRPCServer in Python 2 and xmlrpc.server in Python 3) report all errors (including usage errors) as python exceptions which is not suitable for public services: exception strings are often not easy understandable without python knowledge and might expose some sensitive information. It's not hard to fix this, but I prefer to avoid reinventing the wheel. Is there a third party library with better error reporting? I'm interested in good fault messages for all usage errors and hiding internals when reporting internal errors (this is better done with logging).
xmlrpclib already has the constants for such errors: NOT_WELLFORMED_ERROR, UNSUPPORTED_ENCODING, INVALID_ENCODING_CHAR, INVALID_XMLRPC, METHOD_NOT_FOUND, INVALID_METHOD_PARAMS, INTERNAL_ERROR. | XML-RPC server with better error reporting | 0.099668 | 0 | 1 | 1,435 |
1,572,661 | 2009-10-15T14:16:00.000 | 1 | 0 | 0 | 0 | python,zope,zenoss | 1,578,608 | 1 | true | 1 | 0 | The problem turned out to be that none of the template changes I made actually had any impact on the final page output. The changes were picked up, they just didn't matter. | 1 | 0 | 0 | I'm writing a ZenPack for Zenoss which includes a new DataSource. The DataSource has a ToOne relationship with another persistent object and I'm trying to construct the user interface to allow a user to specify the value of this relationship. I've given the DataSource a factory_type_information attribute with an "immediate_view" key mapped to the name of a skin/template - "viewAgentScriptDataSource". In my ZenPack's skins directory, I created viewAgentScriptDataSource.pt. Zenoss seems to have liked this and now when I view an instance of the DataSource, I see a page based on viewAgentScriptDataSource.pt.
However, after this first success, any edits I make to the skin/template file are ignored. I tried replacing the dummy content of the file with something more realistic and reloading the data source view. The dummy content still appears. I tried restarting Zenoss and reloading the view. The dummy content still appears. I tried deleting my ZenPack and re-installing it. The dummy content still appears.
How do I get Zenoss to load the new contents of the skin file? | How can I make Zenoss recognize skin changes? | 1.2 | 0 | 0 | 433 |
1,572,691 | 2009-10-15T14:23:00.000 | 6 | 0 | 0 | 0 | python,python-imaging-library | 1,573,679 | 4 | false | 0 | 1 | You might consider a rather different approach to your image... build it out of tiles of a fixed size. That way, as you need to expand, you just add new image tiles. When you have completed all of your computation, you can determine the final size of the image, create a blank image of that size, and paste the tiles into it. That should reduce the amount of copying you're looking at for completing the task.
(You'd likely want to encapsulate such a tiled image into an object that hid the tiling aspects from the other layers of code, of course.) | 1 | 26 | 0 | I am probably looking for the wrong thing in the handbook, but I am looking to take an image object and expand it without resizing (stretching/squishing) the original image.
Toy example: imagine a blue rectangle, 200 x 100, then I perform some operation and I have a new image object, 400 x 300, consisting of a white background upon which a 200 x 100 blue rectangle rests. Bonus if I can control in which direction this expands, or the new background color, etc.
Essentially, I have an image to which I will be adding iteratively, and I do not know what size it will be at the outset.
I suppose it would be possible for me to grab the original object, make a new, slightly larger object, paste the original on there, draw a little more, then repeat. It seems like it might be computationally expensive. However, I thought there would be a function for this, as I assume it is a common operation. Perhaps I assumed wrong. | In Python, Python Image Library 1.1.6, how can I expand the canvas without resizing? | 1 | 0 | 0 | 13,934 |
1,573,166 | 2009-10-15T15:35:00.000 | 2 | 0 | 0 | 0 | python,database,zenoss | 7,033,005 | 2 | false | 1 | 0 | I am working on this very problem this week with Zenoss 3.1.
Caveat-
If you make a bad zenpack - no wait - when you make a bad one, it can get stuck in Zope's db, and there is no way to get it out AFAIK. So-
First use the GUI to make a complete backup of a clean Zenoss site.
Later you will need to restore using zenrestore to clean up the mess.
There are two answers, I think:
1) if its a portlet-
Portlets can only be installed using an egg. Normally Zenoss docs recommend you create eggs using the GUI interface, but that makes for a ridiculous development iteration. However there are explanations in the docs of other ways. If your code, possibly starting with a well-know community portlet like Show Graph or Google Maps, is correct for portlets as opposed to regular zenpacks, then
you name the top directory of your code in the standard zenpack form,
with versions.
cd into that directory and run
python setup.py bdist_egg
which will create dist and build directories.
The egg will be in the dist directory.
Install the egg using the GUI.
Notice its not fully installed... grrrrrr.
Restart the daemons - zopectl restart ; zenhub restart
Test.
Delete the portlet using the GUI. Repeat.
Gotchas:
- You must have setup.py and maybe one or more of- INSTALL.txt MANIFEST.in README.txt in the top directory.
Setup.py must match your directory names.
If you are using old or copied init.py files with their init.pyc versions, then you may need to delete these pyc files to force the python script to re-create them.
I like to run the script as follows just to be certain:
rm -rf ./dist ./build ; python setup.py bdist_egg
2) If it's a regular zenpack
The docs tell you how to do this.
Get your zenpack installed from whatever source; often you will just start with the empty one created by the GUI.
Copy the files from /usr/local/zenoss/zenoss/Zenpacks/yourzenpack into your code development area.
Un-install the zenpack using the GUI.
On the command line as the zenoss user, run the zenpack install command with --link (look up the exact syntax) to install the zenpack that's actually in your code area.
Test
Update your code.
On the command line as zenoss, run zopectl restart ; zenhub restart
Test.
Repeat. Be Happy. | 1 | 2 | 0 | ZenPack development seems to involve the creation of a variety of persistent state. There are model classes which represent explicitly persistent state. There are skins which are associated with model objects. There are organizers and instances of persistent classes (data sources, graphs, etc).
Considering that during development, many things are done wrong before they're done right, and considering that loading up a ZenPack that does things wrong has persistent consequences on the Zenoss instance it is loaded into and that these consequences are hard to undo, what is the usual approach for development of a ZenPack? | What is the typical workflow for development of a Zenoss ZenPack? | 0.197375 | 0 | 0 | 1,306 |
1,575,966 | 2009-10-16T01:03:00.000 | 4 | 1 | 0 | 1 | python,unit-testing,twisted | 1,580,776 | 4 | false | 0 | 0 | As others mentioned, you should be using Trial for unit tests in Twisted.
You also should be unit testing from the bottom up - that's what the "unit" in unit testing implies. Test your data and logic before you test your interface. For a HTTP interface, you should be calling processGET, processPOST, etc with a mock request, but you should only be doing this after you've tested what these methods are calling. Each test should assume that the units tested elsewhere are working as designed.
If you're speaking HTTP, or you need a running server or other state, you're probably making higher level tests such as functional or integration tests. This isn't a bad thing, but you might want to rephrase your question. | 1 | 19 | 0 | I'm writing unit tests for a portion of an application that runs as an HTTP server. The approach I have been trying to take is to import the module that contains the HTTP server, start it. Then, the unit tests will use urllib2 to connect, send data, and check the response.
Our HTTP server is using Twisted. One problem here is that I'm just not that familiar with Twisted :)
Now, I instantiate our HTTP server and start it in the setUp() method and then I stop it in the tearDown() method.
Problem is, Twisted doesn't appear to like this, and it will only run one unit test. After the first one, the reactor won't start anymore.
I've searched and searched and searched, and I just can't seem to find an answer that makes sense.
Am I taking the wrong approach entirely, or just missing something obvious? | Python - Twisted and Unit Tests | 0.197375 | 0 | 0 | 5,134 |
1,575,985 | 2009-10-16T01:10:00.000 | 3 | 0 | 1 | 0 | python,multiprocessing | 1,576,115 | 2 | true | 0 | 0 | I don't really see a "style" argument to be made here, either way -- both multiprocessing in CPython 2.6, and threading in (e.g.) the current versions of Jython and IronPython, let you code in extremely similar ways (and styles;-). So, I'd choose on the basis of very "hard-nosed" considerations -- what is performance like with each choice (if I'm so CPU-bound as to benefit from multiple cores, then performance is obviously of paramount importance), could I use with serious benefit any library that's CPython-only (like numpy) or maybe something else that's JVM- or .NET- only, and so forth. | 1 | 1 | 0 | This is more a style question. For CPU bound processes that really benefit for having multiple cores, do you typically use the multiprocessing module or use threads with an interpreter that doesn't have the GIL? I've used the multiprocessing library only lightly, but also have no experience with anything besides CPython. I'm curious what the preferred approach is and if it is to use a different interpreter, which one. | Python on multiprocessor machines: multiprocessing or a non-GIL interpreter | 1.2 | 0 | 0 | 792 |
1,576,459 | 2009-10-16T06:39:00.000 | 0 | 0 | 1 | 0 | python,html,diff,prettify | 1,576,663 | 7 | false | 0 | 0 | First of all, try cleaning up both HTML documents with lxml.html, and then check the difference with difflib | 1 | 33 | 0 | I have two chunks of text that I would like to compare and see which words/lines have been added/removed/modified in Python (similar to a Wiki's Diff Output).
I have tried difflib.HtmlDiff but its output is less than pretty.
Is there a way in Python (or external library) that would generate clean looking HTML of the diff of two sets of text chunks? (not just line level, but also word/character modifications within a line) | Generate pretty diff html in Python | 0 | 0 | 0 | 36,654 |
1,576,737 | 2009-10-16T08:07:00.000 | 0 | 1 | 1 | 0 | c++,python,swig,gil | 1,576,959 | 3 | false | 0 | 1 | You can use the same API call as for C. No difference. Include "Python.h" and call the appropriate function.
Also, see if SWIG doesn't have a typemap or something to indicate that the GIL should not be held for a specific function. | 1 | 7 | 0 | I've got a library written in C++ which I wrap using SWIG and use in python. Generally there is one class with a few methods. The problem is that calling these methods may be time consuming - they may hang my application (GIL is not released when calling these methods). So my question is:
What is the simplest way to release GIL for these method calls?
(I understand that if I used a C library I could wrap this with some additional C code, but here I use C++ and classes) | Releasing Python GIL while in C++ code | 0 | 0 | 0 | 3,393 |
1,576,784 | 2009-10-16T08:21:00.000 | 1 | 1 | 0 | 0 | python,svn | 1,593,977 | 2 | true | 0 | 0 | Got it! I missed the export in my post-commit hook script!
It should have been:
export PYTHONPATH=/usr/local/lib/svn-python
Problem solved :) | 1 | 1 | 0 | I am experiencing issues with my SVN post-commit hook and the fact that it is executed with an empty environment. Everything was working fine till about two weeks ago when my systems administrator upgraded a few things on the server.
My post-commit hook executes a Python script that uses a SVN module to email information about the commit to me. After the recent upgrades, however, Python cannot find the SVN module when executed via the hook. When executed by hand (ie with all environment variables intact) everything works fine.
I have tried setting the PYTHONPATH variable in my post-commit hook directly (PYTHONPATH=/usr/local/lib/svn-python), but that makes no difference.
How can I tell Python where the module is located? | SVN hook environment issues with Python script | 1.2 | 0 | 0 | 419 |
1,577,175 | 2009-10-16T09:53:00.000 | 4 | 0 | 0 | 0 | python,user-interface,cross-platform | 1,577,197 | 4 | false | 0 | 1 | You might want to check out wxPython. It's a mature project and should work on Windows
and Linux (Gnome). | 1 | 4 | 0 | I need to create a desktop app that will work with Windows and Gnome(Ubuntu). I would like to use Python to do this. The GUI part of the app will be a single form with a message area and a couple of buttons.
The list of GUI's for Python seems overwhelming. I am looking for something simple if possible, the main requirements is it must work with Gnome(2.26 and up) and Windows XP/Vista/7. | Python GUI Library for Windows/Gnome | 0.197375 | 0 | 0 | 895 |
1,578,010 | 2009-10-16T13:21:00.000 | 1 | 0 | 1 | 0 | compilation,ironpython | 4,176,979 | 5 | false | 0 | 0 | Yes I have found it too difficult to compile an exe so I have switched back to using standard Python. They should give a good tutorial on it on the IronPython site | 1 | 18 | 0 | I already attempted using py2exe (not compatible with ipy) and PYC (out of date). Can anyone point me in the direction of a good compiler? | Ironpython 2.6 .py -> .exe | 0.039979 | 0 | 0 | 22,377 |
1,579,744 | 2009-10-16T18:48:00.000 | 6 | 0 | 1 | 0 | python,arrays,types,tuples | 1,579,821 | 5 | false | 0 | 0 | How do you decide which data type to use? Easy:
You look at which are available and choose the one that does what you want. And if there isn't one, you make one.
In this case a dict is a pretty obvious solution. | 3 | 3 | 0 | I'm working through some tutorials on Python and am at a position where I am trying to decide what data type/structure to use in a certain situation.
I'm not clear on the differences between arrays, lists, dictionaries and tuples.
How do you decide which one is appropriate - my current understanding doesn't let me distinguish between them at all - they seem to be the same thing.
What are the benefits/typical use cases for each one? | How do I know what data type to use in Python? | 1 | 0 | 0 | 345 |
1,579,744 | 2009-10-16T18:48:00.000 | 0 | 0 | 1 | 0 | python,arrays,types,tuples | 1,579,758 | 5 | false | 0 | 0 | Do you really require speed/efficiency? Then go with a pure and simple dict. | 3 | 3 | 0 | I'm working through some tutorials on Python and am at a position where I am trying to decide what data type/structure to use in a certain situation.
I'm not clear on the differences between arrays, lists, dictionaries and tuples.
How do you decide which one is appropriate - my current understanding doesn't let me distinguish between them at all - they seem to be the same thing.
What are the benefits/typical use cases for each one? | How do I know what data type to use in Python? | 0 | 0 | 0 | 345 |
1,579,744 | 2009-10-16T18:48:00.000 | 0 | 0 | 1 | 0 | python,arrays,types,tuples | 1,580,736 | 5 | false | 0 | 0 | Personal:
I mostly work with lists and dictionaries.
It seems that this satisfies most cases.
Sometimes:
Tuples can be helpful--if you want to pair/match elements. Besides that, I don't really use it.
However:
I write high-level scripts that don't need to drill down into the core "efficiency" where every byte and every memory/nanosecond matters. I don't believe most people need to drill this deep. | 3 | 3 | 0 | I'm working through some tutorials on Python and am at a position where I am trying to decide what data type/structure to use in a certain situation.
I'm not clear on the differences between arrays, lists, dictionaries and tuples.
How do you decide which one is appropriate - my current understanding doesn't let me distinguish between them at all - they seem to be the same thing.
What are the benefits/typical use cases for each one? | How do I know what data type to use in Python? | 0 | 0 | 0 | 345 |
1,579,919 | 2009-10-16T19:21:00.000 | 3 | 0 | 1 | 0 | python,arrays,list | 1,579,927 | 6 | false | 0 | 0 | Have you considered using a lightweight database like SQLite? | 1 | 6 | 0 | I want to create an object in python that is a collection of around 200,000,000 true/false values. So that I can most effectively change or recall any given true/false value, so that I can quickly determine if any given number, like 123,456,000 is true or false or change its value.
Is the best way to do this a list? or an array? or a class? or just a long int using bit operations? or something else?
I'm a bit of a noob, so you may have to spell things out for me more than if I were asking the question in one of the other languages I know better. Please give me examples of how operating on this object would look.
Thanks | Extremely large Boolean list in Python | 0.099668 | 0 | 0 | 6,140 |
1,581,087 | 2009-10-17T00:50:00.000 | 0 | 0 | 0 | 0 | python,tcp,network-programming,network-protocols,raw-sockets | 1,581,097 | 5 | false | 0 | 0 | I know this isn't directly Python related but if you are looking to do heavy network processing, you should consider Erlang instead of Python. Just a suggestion really... you can always take a shot a doing this with Twisted... if you feel adventurous (and have lots of time on your side) ;-) | 1 | 8 | 0 | Is there a python library which implements a standalone TCP stack?
I can't use the usual python socket library because I'm receiving a stream of packets over a socket (they are being tunneled to me over this socket). When I receive a TCP SYN packet addressed to a particular port, I'd like to accept the connection (send a syn-ack) and then get the data sent by the other end (ack'ing appropriately).
I was hoping there was some sort of TCP stack already written which I could utilize. Any ideas? I've used lwip in the past for a C project -- something along those lines in python would be perfect. | Python TCP stack implementation | 0 | 0 | 1 | 8,496 |
1,581,782 | 2009-10-17T09:04:00.000 | 0 | 0 | 0 | 0 | c#,python,windows,cross-platform,vlc | 1,582,934 | 2 | false | 0 | 1 | This is a bit OOT, but in Windows 7, shaking the active window will hide others to reveal the desktop (and so will clicking/hovering the rightmost taskbar button). Instead of hiding/moving vlc, you could just temporarily reveal the whole desktop. Shaking the active window again brings everything back. | 1 | 2 | 0 | I'm sure others have run into this problem too...
I often watch videos in a small VLC window while working on other tasks, but no matter where the window is placed, I eventually need to access something in the GUI behind it, and have to manually reposition the video window first.
This could be solved by having the VLC window snap to another corner whenever the mouse pointer is moved over it. I haven't found an app that does this, so would like to write one. What technologies could I use to do this? Cross platform might be harder... so what if just on Windows?
I'd prefer something in C# (or Python), but am willing to learn something new if need be. | Reposition a VLC window programmatically | 0 | 0 | 0 | 565 |
1,581,895 | 2009-10-17T10:21:00.000 | 1 | 0 | 1 | 0 | python,multithreading,queue | 1,581,902 | 13 | false | 0 | 0 | The way I solved this (actually I did this in Scala, not Python) was to use both a Set and a Queue, only adding links to the queue (and set) if they did not already exist in the set.
Both the set and queue were encapsulated in a single thread, exposing only a queue-like interface to the consumer threads.
Edit: someone else suggested SQLite and that is also something I am considering, if the set of visited URLs needs to grow large. (Currently each crawl is only a few hundred pages so it easily fits in memory.) But the database is something that can also be encapsulated within the set itself, so the consumer threads need not be aware of it. | 5 | 17 | 0 | I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again? | How check if a task is already in python Queue? | 0.015383 | 0 | 1 | 12,544 |
1,581,895 | 2009-10-17T10:21:00.000 | 1 | 0 | 1 | 0 | python,multithreading,queue | 1,581,903 | 13 | false | 0 | 0 | SQLite is so simple to use and would fit perfectly... just a suggestion. | 5 | 17 | 0 | I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again? | How check if a task is already in python Queue? | 0.015383 | 0 | 1 | 12,544 |
1,581,895 | 2009-10-17T10:21:00.000 | -3 | 0 | 1 | 0 | python,multithreading,queue | 1,581,908 | 13 | false | 0 | 0 | Also, instead of a set you might try using a dictionary. Operations on sets tend to get rather slow when they're big, whereas a dictionary lookup is nice and quick.
My 2c. | 5 | 17 | 0 | I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again? | How check if a task is already in python Queue? | -0.046121 | 0 | 1 | 12,544 |
1,581,895 | 2009-10-17T10:21:00.000 | 1 | 0 | 1 | 0 | python,multithreading,queue | 1,581,920 | 13 | false | 0 | 0 | Why only use the array (ideally, a dictionary would be even better) to filter things you've already visited? Add things to your array/dictionary as soon as you queue them up, and only add them to the queue if they're not already in the array/dict. Then you have 3 simple separate things:
Links not yet seen (neither in queue nor array/dict)
Links scheduled to be visited (in both queue and array/dict)
Links already visited (in array/dict, not in queue) | 5 | 17 | 0 | I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again? | How check if a task is already in python Queue? | 0.015383 | 0 | 1 | 12,544 |
1,581,895 | 2009-10-17T10:21:00.000 | 0 | 0 | 1 | 0 | python,multithreading,queue | 1,582,421 | 13 | false | 0 | 0 | instead of "array of pages already visited" make an "array of pages already added to the queue" | 5 | 17 | 0 | I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again? | How check if a task is already in python Queue? | 0 | 0 | 1 | 12,544 |
1,582,105 | 2009-10-17T12:33:00.000 | 0 | 1 | 0 | 0 | python,c,cython | 4,445,452 | 7 | false | 0 | 1 | Cython does not support threads well at all. It holds the GIL (Global Intrepreter Lock) the entire time! This makes your code thread-safe by (virtually) disabling concurrent execution. So I wouldn't use it for general purpose development. | 4 | 20 | 0 | I know a bunch of scripting languages, (python, ruby, lua, php) but I don't know any compiled languages like C/C++ , I wanted to try and speed up some python code using cython, which is essentially a python -> C compiler, aimed at creating C extensions for python. Basically you code in a stricter version of python which compiles into C -> native code.
here's the problem, I don't know C, yet the cython documentation is aimed at people who obviously already know C (nothing is explained, only presented), and is of no help to me, I need to know if there are any good cython tutorials aimed at python programmers, or if I'm gonna have to learn C before I learn Cython.
bear in mind I'm a competent python programmer, I would much rather learn cython from the perspective of the language I'm already good at, rather than learn a whole new language in order to learn cython.
1) PLEASE don't recommend psyco
edit: ANY information that will help understand the official cython docs is useful information | Noob-Ready Cython Tutorials | 0 | 0 | 0 | 8,650 |
1,582,105 | 2009-10-17T12:33:00.000 | 0 | 1 | 0 | 0 | python,c,cython | 2,582,450 | 7 | false | 0 | 1 | About all the C that you really need to know is:
C types are much faster than Python types (adding to C ints or doubles can be done in a single clock cycle) but less safe (they are not arbitrarily sized and may silently overflow).
C function (cdef) calls are much faster than Python (def) function calls (but are less flexible).
This will get you most of the way there. If you want to eke out that last 10-20% speedup for most applications, there's no getting around knowing C, and how modern processes work (pointers, cache, ...). | 4 | 20 | 0 | I know a bunch of scripting languages, (python, ruby, lua, php) but I don't know any compiled languages like C/C++ , I wanted to try and speed up some python code using cython, which is essentially a python -> C compiler, aimed at creating C extensions for python. Basically you code in a stricter version of python which compiles into C -> native code.
here's the problem, I don't know C, yet the cython documentation is aimed at people who obviously already know C (nothing is explained, only presented), and is of no help to me, I need to know if there are any good cython tutorials aimed at python programmers, or if I'm gonna have to learn C before I learn Cython.
bear in mind I'm a competent python programmer, I would much rather learn cython from the perspective of the language I'm already good at, rather than learn a whole new language in order to learn cython.
1) PLEASE don't recommend psyco
edit: ANY information that will help understand the official cython docs is useful information | Noob-Ready Cython Tutorials | 0 | 0 | 0 | 8,650 |
1,582,105 | 2009-10-17T12:33:00.000 | 1 | 1 | 0 | 0 | python,c,cython | 11,103,468 | 7 | false | 0 | 1 | You can do a lot of very useful things with Cython if you can answer the following C quiz...
(1) What is a double? What is an int?
(2) What does the word "compile" mean?
(3) What is a header (.h) file?
To answer these questions you don't need to read a whole C book! ...maybe chapter 1.
Once you can pass that quiz, try again with the tutorial.
What I usually do is start with pure python code, and add Cython elements bit by bit. In that situation, you can learn the Cython features bit by bit. For example I don't understand C strings, because so far I have not tried to cythonize code that involves strings. When I do, I will first look up how strings work in C, and then second look up how strings work in Cython.
Again, once you've gotten started with Cython, you will now and then run into some complication that requires learning slightly more C. And of course the more C you know, the more dextrous you will be with taking full advantage of Cython, not to mention troubleshooting if something goes wrong. But that shouldn't make you reluctant to start! | 4 | 20 | 0 | I know a bunch of scripting languages, (python, ruby, lua, php) but I don't know any compiled languages like C/C++ , I wanted to try and speed up some python code using cython, which is essentially a python -> C compiler, aimed at creating C extensions for python. Basically you code in a stricter version of python which compiles into C -> native code.
Here's the problem: I don't know C, yet the cython documentation is aimed at people who obviously already know C (nothing is explained, only presented), and is of no help to me. I need to know if there are any good cython tutorials aimed at python programmers, or if I'm gonna have to learn C before I learn Cython.
Bear in mind I'm a competent python programmer; I would much rather learn cython from the perspective of the language I'm already good at, rather than learn a whole new language in order to learn cython.
1) PLEASE don't recommend psyco
edit: ANY information that will help understand the official cython docs is useful information | Noob-Ready Cython Tutorials | 0.028564 | 0 | 0 | 8,650
1,582,105 | 2009-10-17T12:33:00.000 | 1 | 1 | 0 | 0 | python,c,cython | 10,643,399 | 7 | false | 0 | 1 | Cython does support concurrency (you can use native POSIX threads in C, compiled into the extension module); you just need to be careful not to modify any python objects when the GIL is released, and keep in mind the interpreter itself is not thread safe. You can also use multiprocessing with python to use more cores for parallelism, which can in turn use your compiled cython extensions to speed up even more. But all in all you definitely have to know the C programming model, static types, etc. | 4 | 20 | 0 | I know a bunch of scripting languages, (python, ruby, lua, php) but I don't know any compiled languages like C/C++ , I wanted to try and speed up some python code using cython, which is essentially a python -> C compiler, aimed at creating C extensions for python. Basically you code in a stricter version of python which compiles into C -> native code.
Here's the problem: I don't know C, yet the cython documentation is aimed at people who obviously already know C (nothing is explained, only presented), and is of no help to me. I need to know if there are any good cython tutorials aimed at python programmers, or if I'm gonna have to learn C before I learn Cython.
Bear in mind I'm a competent python programmer; I would much rather learn cython from the perspective of the language I'm already good at, rather than learn a whole new language in order to learn cython.
1) PLEASE don't recommend psyco
edit: ANY information that will help understand the official cython docs is useful information | Noob-Ready Cython Tutorials | 0.028564 | 0 | 0 | 8,650
1,582,708 | 2009-10-17T17:19:00.000 | 7 | 0 | 0 | 0 | python,ajax,django,timeout | 1,582,971 | 3 | true | 1 | 0 | Ajax doesn't require any particular technology on the server side. All you need is to return a response in some form that some Javascript on the client side can understand. JSON is an excellent choice here, as it's easy to create in Python (there's a json library in 2.6, and Django has django.utils.simplejson for other versions).
So all you need to do is to put your data in JSON form then send it just as you would any other response - ie by wrapping it in an HTTPResponse. | 1 | 5 | 0 | I've been programming Python a while, but DJango and web programming in general is new to me.
I have a very long operation performed in a Python view. Since the local() function in my view takes so long to return, there's an HTTP timeout. Fair enough, I understand that part.
What's the best way to give an HTTPresponse back to my users immediately, then dynamically show the results of some python code within the page? I suspect the answer may lie in AJAX but I'm not sure how AJAX on the client can be fed from Python on the server, or even the modules one would commonly use to do such a thing. | Long, slow operation in Django view causes timeout. Any way for Python to speak AJAX instead? | 1.2 | 0 | 0 | 3,585
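A minimal sketch of the pattern the accepted answer describes: kick off the slow work, then let the page poll a view that returns JSON. The get_task helper and the URL wiring are assumptions, not Django APIs; django.utils.simplejson was the bundled JSON library at the time (the stdlib json module works the same way on Python 2.6+):

from django.http import HttpResponse
from django.utils import simplejson

def task_status(request, task_id):
    # hypothetical lookup of the long-running job's state
    task = get_task(task_id)  # assumed helper, not part of Django
    data = {'done': task.is_done, 'progress': task.progress}
    # return JSON so the client-side AJAX call can update the page
    return HttpResponse(simplejson.dumps(data), mimetype='application/json')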
1,582,718 | 2009-10-17T17:24:00.000 | 3 | 1 | 0 | 0 | python,c,refactoring,profiling | 1,582,864 | 4 | false | 0 | 0 | There are lots of different ways that people approach development.
Sometimes people follow your three steps and discover that the slow bits are due to the external environment, so rewriting Python into C does not address the problem. That type of slowness can sometimes be solved on the system side, and sometimes it can be solved in Python by applying a different algorithm. For instance you can cache network responses so that you don't have to go to the network every time, or in SQL you can offload work into stored procedures which run on the server and reduce the size of the result set. Generally, when you do have something that needs to be rewritten in C, the first thing to do is to look for a pre-existing library and just create a Python wrapper, if one does not already exist. Lots of people have been down these paths before you.
Often step 1 is to thrash out the application architecture, suspect that there may be a performance issue in some area, then choose a C library (perhaps already wrapped for Python) and use that. Then step 2 simply confirms that there are no really big performance issues that need to be addressed.
I would say that it is better for a team with one or more experienced developers to attempt to predict performance bottlenecks and mitigate them with pre-existing modules right from the beginning. If you are a beginner with python, then your 3-step process is perfectly valid, i.e. get into building and testing code, knowing that there is a profiler and the possibility of fast C modules if you need it. And then there is psyco, and the various tools for freezing an application into a binary executable.
An alternative approach to this, if you know that you will need to use some C or C++ modules, is to start from scratch writing the application in C but embedding Python to do most of the work. This works well for experienced C or C++ developers because they have a rough idea of the type of code that is tedious to do in C. | 3 | 4 | 0 | A popular software development pattern seems to be:
Thrash out the logic and algorithms in Python.
Profile to find out where the slow bits are.
Replace those with C.
Ship code that is the best trade-off between high-level and speedy.
I say popular simply because I've seen people talk about it as being a great idea.
But are there any large projects that have actually used this method? Preferably Free software projects so I can have a look and see how they did it - and maybe learn some best practices. | Best practice for the Python-then-profile-then-C design pattern? | 0.148885 | 0 | 0 | 279 |
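As a small illustration of the "wrap an existing C library" route mentioned in the first answer, the standard-library ctypes module can call into an existing shared library without writing any C at all; the example below assumes a Unix-like system where the C math library can be found by name:

import ctypes
import ctypes.util

# load the C math library (libm); the lookup is platform-dependent
libm = ctypes.CDLL(ctypes.util.find_library('m'))

# declare the C signature so arguments and results are converted correctly
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0, computed by the C library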
1,582,718 | 2009-10-17T17:24:00.000 | 0 | 1 | 0 | 0 | python,c,refactoring,profiling | 1,582,784 | 4 | false | 0 | 0 | Step 3 is wrong. In the modern world, more than half the time "the slow bits" are I/O or network bound, or limited by some other resource outside the process. Rewriting them in anything is only going to introduce bugs. | 3 | 4 | 0 | A popular software development pattern seems to be:
Thrash out the logic and algorithms in Python.
Profile to find out where the slow bits are.
Replace those with C.
Ship code that is the best trade-off between high-level and speedy.
I say popular simply because I've seen people talk about it as being a great idea.
But are there any large projects that have actually used this method? Preferably Free software projects so I can have a look and see how they did it - and maybe learn some best practices. | Best practice for the Python-then-profile-then-C design pattern? | 0 | 0 | 0 | 279 |
1,582,718 | 2009-10-17T17:24:00.000 | 2 | 1 | 0 | 0 | python,c,refactoring,profiling | 1,583,268 | 4 | false | 0 | 0 | I also thought that way when I started using Python
I've done step 3 twice (that I can recall) in 12 years. Not often enough to call it a design pattern. Usually it's enough to wrap an existing C library. Usually someone else has already written the wrapper. | 3 | 4 | 0 | A popular software development pattern seems to be:
Thrash out the logic and algorithms in Python.
Profile to find out where the slow bits are.
Replace those with C.
Ship code that is the best trade-off between high-level and speedy.
I say popular simply because I've seen people talk about it as being a great idea.
But are there any large projects that have actually used this method? Preferably Free software projects so I can have a look and see how they did it - and maybe learn some best practices. | Best practice for the Python-then-profile-then-C design pattern? | 0.099668 | 0 | 0 | 279 |
1,583,284 | 2009-10-17T21:32:00.000 | 0 | 0 | 0 | 0 | python,audio,pygame | 1,583,298 | 3 | false | 1 | 1 | I think setting the separate channel volume is the only way. Pygame doesn't seem to have any notion of world space or positioning for sounds. | 1 | 2 | 0 | Is there a way to do panning or 3d sound in Pygame? The only way I've found to control sound playback is to set the volume for both the left and right channels. | positioning sound with pygame? | 0 | 0 | 0 | 2,176 |
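A rough sketch of the channel-volume trick from the answers above: Sound.play() returns a Channel, and Channel.set_volume() accepts separate left and right values, which is enough to fake panning. The file name is just a placeholder:

import pygame

pygame.mixer.init()
sound = pygame.mixer.Sound('effect.wav')  # placeholder file name
channel = sound.play()

def pan(channel, position):
    # position 0.0 = hard left, 1.0 = hard right
    channel.set_volume(1.0 - position, position)

pan(channel, 0.2)  # mostly left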
1,583,350 | 2009-10-17T22:11:00.000 | 5 | 0 | 0 | 0 | python,sqlite,python-db-api | 1,583,379 | 2 | false | 0 | 0 | No, it's not the only way. Alternatively, you can also fetch one row, iterate over it, and inspect the individual column Python objects and types. Unless the value is None (in which case the SQL field is NULL), this should give you a fairly precise indication what the database column type was.
sqlite3 only uses sqlite3_column_decltype and sqlite3_column_type in one place, each, and neither is accessible to the Python application - so there is no "direct" way that you may have been looking for.
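A small sketch of the "inspect the first row" idea from the first answer: column names still come from cursor.description, while the Python type of each fetched value hints at the underlying storage class (None meaning the column was NULL):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER, name TEXT, price REAL)')
conn.execute("INSERT INTO t VALUES (1, 'spam', 2.5)")

cur = conn.execute('SELECT * FROM t')
row = cur.fetchone()
names = [d[0] for d in cur.description]  # only the name slot is filled in
for name, value in zip(names, row):
    # the type of value tells you int/str/float, or None for NULL
    print(name, type(value))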
1,585,181 | 2009-10-18T15:35:00.000 | 3 | 0 | 1 | 0 | python,multithreading,gil | 1,585,641 | 3 | false | 0 | 0 | Perhaps the confusion comes about because most people assume Python has one interpreter per process. I recall reading that the support for multiple interpreters via the C API was largely untested and hardly ever used. (And when I gave it a go, didn't work properly.) | 2 | 12 | 0 | I often see people talking that the GIL is per Python Interpreter (even here on stackoverflow).
But what I see in the source code it seems to be that the GIL is a global variable and therefore there is one GIL for all Interpreters in each python process. I know they did this because there is no interpreter object passed around like lua or TCL does, it was just not designed well in the beginning. And thread local storage seems to be not portable for the python guys to use.
Is this correct? I had a short look at the 2.4 version I'm using in a project here.
Had this changed in later versions, especially in 3.0? | Is the Python GIL really per interpreter? | 0.197375 | 0 | 0 | 1,279 |
1,585,181 | 2009-10-18T15:35:00.000 | 12 | 0 | 1 | 0 | python,multithreading,gil | 1,585,939 | 3 | true | 0 | 0 | The GIL is indeed per-process, not per-interpreter. This is unchanged in 3.x. | 2 | 12 | 0 | I often see people talking that the GIL is per Python Interpreter (even here on stackoverflow).
But what I see in the source code it seems to be that the GIL is a global variable and therefore there is one GIL for all Interpreters in each python process. I know they did this because there is no interpreter object passed around like lua or TCL does, it was just not designed well in the beginning. And thread local storage seems to be not portable for the python guys to use.
Is this correct? I had a short look at the 2.4 version I'm using in a project here.
Had this changed in later versions, especially in 3.0? | Is the Python GIL really per interpreter? | 1.2 | 0 | 0 | 1,279 |
1,585,756 | 2009-10-18T19:12:00.000 | 3 | 0 | 1 | 0 | python,import,python-3.x | 1,586,005 | 3 | true | 0 | 0 | You can use -m flag of the python interpreter to run modules in sub-packages (or even packages in 3.1.). | 2 | 7 | 0 | I've recently ported my Python project to run on Python 3.1. For that I had to adopt the policy of relative imports within the submodules and subpackages of my project. I've don’t that and now the project itself works, but I noticed I can't execute any of the subpackages or submodules in it. If I try, I get "builtins.ValueError: Attempted relative import in non-package". I can only import the whole project.
Is this normal? | Python: Do relative imports mean you can't execute a subpackage by itself? | 1.2 | 0 | 0 | 1,966 |
1,585,756 | 2009-10-18T19:12:00.000 | 4 | 0 | 1 | 0 | python,import,python-3.x | 1,585,801 | 3 | false | 0 | 0 | Yes, it's normal. If you want to execute a module that is also a part of a package (in itself a strange thing to do) you need to have absolute imports. When you execute the module it is not, from the interpreters point of view, a part of a package, but the __main__ module. So it wouldn't know where the relative packages are.
The standard way to do it is to have functions in the packages, and separate executable scripts that call the functions, as this enables you to put the executable scripts outside the module, for example in /usr/bin | 2 | 7 | 0 | I've recently ported my Python project to run on Python 3.1. For that I had to adopt the policy of relative imports within the submodules and subpackages of my project. I've don’t that and now the project itself works, but I noticed I can't execute any of the subpackages or submodules in it. If I try, I get "builtins.ValueError: Attempted relative import in non-package". I can only import the whole project.
Is this normal? | Python: Do relative imports mean you can't execute a subpackage by itself? | 0.26052 | 0 | 0 | 1,966 |
1,586,008 | 2009-10-18T20:56:00.000 | 0 | 0 | 0 | 0 | php,python,ruby-on-rails,database | 1,586,035 | 4 | false | 1 | 0 | It would be great if code written for one platform would work on every other without any modification whatsoever, but this is usually not the case and probably never will be. What the current frameworks do is about all anyone can. | 3 | 2 | 0 | I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.
Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures/Packages where it makes sense. And those are different for every RDBMS.
By using only a limited set of features, commonly available to many RDBMS, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.
So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?
I am especially interested in hearing how you separate/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?
Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not? | PHP, Python, Ruby application with multiple RDBMS | 0 | 1 | 0 | 369
1,586,008 | 2009-10-18T20:56:00.000 | 2 | 0 | 0 | 0 | php,python,ruby-on-rails,database | 1,586,105 | 4 | false | 1 | 0 | If you want to leverage the bells and whistles of various RDBMSes, you can certainly do it. Just apply standard OO Principles. Figure out what kind of API your persistence layer will need to provide.
You'll end up writing a set of isomorphic persistence adapter classes. From the perspective of your model code (which will be calling adapter methods to load and store data), these classes are identical. Writing good test coverage should be easy, and good tests will make life a lot easier. Deciding how much abstraction is provided by the persistence adapters is the trickiest part, and is largely application-specific.
As for whether this is worth the trouble: it depends. It's a good exercise if you've never done it before. It may be premature if you don't actually know for sure what your target databases are.
A good strategy might be to implement two persistence adapters to start. Let's say you expect the most common back end will be MySQL. Implement one adapter tuned for MySQL. Implement a second that uses your database abstraction library of choice, and uses only standard and widely available SQL features. Now you've got support for a ton of back ends (everything supported by your abstraction library of choice), plus tuned support for mySQL. If you decide you then want to provide an optimized adapter from Oracle, you can implement it at your leisure, and you'll know that your application can support swappable database back-ends. | 3 | 2 | 0 | I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.
Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures/Packages where it makes sense. And those are different for every RDBMS.
By using only a limited set of features, commonly available to many RDBMS, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.
So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?
I am especially interested in hearing how you separate/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?
Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not? | PHP, Python, Ruby application with multiple RDBMS | 0.099668 | 1 | 0 | 369
1,586,008 | 2009-10-18T20:56:00.000 | 2 | 0 | 0 | 0 | php,python,ruby-on-rails,database | 1,587,887 | 4 | true | 1 | 0 | You cannot eat a cake and have it, choose on of the following options.
Use your database abstraction layer whenever you can and in the rare cases when you have a need for a hand-made query (eg. performance reasons) stick to the lowest common denominator and don't use stored procedures or any proprietary extensions that you database has to offer. In this case deploying the application on a different RDBMS should be trivial.
Use the full power of your expensive RDBMS, but take into account that your application won't be easily portable. When the need arises you will have to spend considerable effort on porting and maintenance. Of course a decent layered design encapsulating all the differences in a single module or class will help in this endeavor.
In other words you should consider how probable is it that your application will be deployed to multiple RDBMSes and make an informed choice. | 3 | 2 | 0 | I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.
Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples I could mention putting in the database lots of Stored Procedures/Packages where it makes sense. And those are different for every RDBMS.
By using only a limited set of features, commonly available to many RDBMS, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.
So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?
I am especially interested in hearing how you separate/use the queries that are especially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMS: Oracle, DB2 and Sql Server and for each of these you write a separate SQL statement in order to make use of all features this RDBMS has to offer. How do you do it?
Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not? | PHP, Python, Ruby application with multiple RDBMS | 1.2 | 1 | 0 | 369
1,587,776 | 2009-10-19T09:42:00.000 | 1 | 1 | 1 | 0 | python,testing,dependencies,circular-dependency | 1,588,192 | 3 | false | 0 | 0 | "We want to test LibA using LibT, but because LibT depends on LibA, we'd rather it to use a stable version of LibA, while testing LibT "
It doesn't make sense to use T + A to test A. What does make sense is the following.
LibA is really two things mashed together: A1 and A2.
T depends on A1.
What's really happening is that you're upgrading and testing A2, using T and A1.
If you decompose LibA into the parts that T requires and the other parts, you may be able to break this circular dependency. | 1 | 5 | 0 | We've got a python library that we're developing. During development, I'd like to use some parts of that library in testing the newer versions of it. That is, use the stable code in order to test the development code. Is there any way of doing this in python?
Edit: To be more specific, we've got a library (LibA) that has many useful things. Also, we've got a testing library that uses LibA in order to provide some testing facilities (LibT). We want to test LibA using LibT, but because LibT depends on LibA, we'd rather it use a stable version of LibA while testing LibT (because we will change LibT to work with newer LibA only once tests pass etc.). So, when running unit-tests, LibA-dev tests will use LibT code that depends on LibA-stable.
One idea we've come up with is calling the stable code using RPyC on a different process, but it's tricky to implement in an air-tight way (making sure it dies properly etc, and allowing multiple instances to execute at the same time on the same computer etc.).
Thanks | Using different versions of a python library in the same process | 0.066568 | 0 | 0 | 680 |
1,587,902 | 2009-10-19T10:13:00.000 | -1 | 0 | 0 | 0 | python,diff,webpage,snapshot | 1,588,461 | 4 | false | 1 | 0 | just take snapshots of the files with MD5 or SHA1...if the values differ the next time you check, then they are modified. | 1 | 6 | 0 | I have snapshots of multiple webpages taken at 2 times. What is a reliable method to determine which webpages have been modified?
I can't rely on something like an RSS feed, and I need to ignore minor noise like date text.
Ideally I am looking for a Python solution, but an intuitive algorithm would also be great.
Thanks! | how to determine if webpage has been modified | -0.049958 | 0 | 1 | 2,946 |
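A minimal sketch of the hashing idea from the last answer, with a deliberately crude normalisation step so that cosmetic noise such as whitespace does not count as a change; filtering out date text would need a smarter cleanup than this:

import hashlib
import re

def fingerprint(html):
    # strip tags and collapse whitespace before hashing (crude normalisation)
    text = re.sub(r'<[^>]+>', ' ', html)
    text = re.sub(r'\s+', ' ', text).strip()
    return hashlib.md5(text.encode('utf-8')).hexdigest()

def has_changed(old_html, new_html):
    return fingerprint(old_html) != fingerprint(new_html)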
1,587,991 | 2009-10-19T10:42:00.000 | 2 | 1 | 1 | 0 | python | 1,588,007 | 2 | true | 0 | 0 | This is what Apache is for.
Create a directory that will have the reports.
Configure Apache to serve files from that directory.
If the report exists, redirect to a URL that Apache will serve.
Otherwise, the report doesn't exist, so create it. Then redirect to a URL that Apache will serve.
There's no "hashing". You have a key ("a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words") and a value, which is a file. Don't waste time on a hash. You just have a long key.
You can compress this key somewhat by making a "slug" out of it: remove punctuation, replace spaces with _, that kind of thing.
You should create an internal surrogate key which is a simple integer.
You're simply translating a long key to a "report" which either exists as a file or will be created as a file. | 1 | 1 | 0 | I have a web server that is dynamically creating various reports in several formats (pdf and doc files). The files require a fair amount of CPU to generate, and it is fairly common to have situations where two people are creating the same report with the same input.
Inputs:
raw data input as a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words
the version of the report creation tool
When a user attempts to generate a report, I would like to check to see if a file already exists with the given input, and if so return a link to the file. If the file doesn't already exist, then I would like to generate it as needed.
What solutions are already out there? I've cached simple http requests before, but the keys were extremely simple (usually database id's)
If I have to do this myself, what is the best way. The input can be several hundred words, and I was wondering how I should go about transforming the strings into keys sent to the cache.
# entire input, uses too much memory, one-to-one mapping
cache['one two three four five six seven eight nine ten eleven...']
# short keys
cache['one two']  # => 5 results, then I must narrow these down even more
Is this something that should be done in a database, or is it better done within the web app code (python in my case)
Thank you everyone. | Caching system for dynamically created files? | 1.2 | 0 | 0 | 141
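A sketch of the file-based scheme the accepted answer describes: turn the raw input plus tool version into a slug, and only generate the report when a file for that slug does not already exist. REPORT_DIR and generate_report are assumed names, and a very long slug would normally be replaced by the integer surrogate key the answer mentions:

import os
import re

REPORT_DIR = '/var/reports'  # directory that Apache serves (assumed path)

def slugify(raw_input, tool_version):
    # remove punctuation, replace runs of non-alphanumerics with _
    key = '%s %s' % (tool_version, raw_input)
    return re.sub(r'[^A-Za-z0-9]+', '_', key).strip('_')[:200]

def report_path(raw_input, tool_version):
    path = os.path.join(REPORT_DIR, slugify(raw_input, tool_version) + '.pdf')
    if not os.path.exists(path):
        generate_report(raw_input, tool_version, path)  # assumed helper
    return path  # redirect the user to the URL Apache serves for this file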
1,588,708 | 2009-10-19T13:36:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine,couchdb | 1,588,748 | 3 | false | 1 | 0 | Consider the situation where you have many entity types but few instances of each entity. In this case you will have many tables each with a few records so a relational approach is not suitable. | 2 | 3 | 0 | I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM.
Anyway, what use cases suit non relational datastores best? | What are the use cases for non relational datastores? | 0.132549 | 1 | 0 | 587 |
1,588,708 | 2009-10-19T13:36:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,couchdb | 1,589,186 | 3 | false | 1 | 0 | In some cases that are simply nice. ZODB is a Python-only object database, that is so well-integrated with Python that you can simply forget that it's there. You don't have to bother about it, most of the time. | 2 | 3 | 0 | I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM.
Anyway, what use cases suit non relational datastores best? | What are the use cases for non relational datastores? | 0 | 1 | 0 | 587 |
1,589,150 | 2009-10-19T14:54:00.000 | 0 | 0 | 0 | 0 | python,xml-rpc | 1,590,010 | 3 | false | 0 | 0 | There are several choices:
Use a single-process, single-thread server like SimpleXMLRPCServer to process requests sequentially.
Use threading.Lock() in a threaded server.
Use some external locking mechanism (like the lockfile module or the GET_LOCK() function in MySQL) in a multiprocess server.
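For the second option above (threading.Lock in a threaded server), a rough sketch follows; run_script is a stand-in for whatever the listener really executes. Note that a stock SimpleXMLRPCServer already handles one request at a time, so the lock only starts to matter once you mix in SocketServer.ThreadingMixIn:

import threading
from SimpleXMLRPCServer import SimpleXMLRPCServer  # xmlrpc.server on Python 3

lock = threading.Lock()

def run_script(name):
    # only one caller at a time gets past this point; the rest block
    with lock:
        return do_the_real_work(name)  # assumed helper

server = SimpleXMLRPCServer(('0.0.0.0', 8000))
server.register_function(run_script)
server.serve_forever()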
1,589,150 | 2009-10-19T14:54:00.000 | 0 | 0 | 0 | 0 | python,xml-rpc | 1,589,181 | 3 | false | 0 | 0 | Can you have another communication channel? If yes, then have a "call me back when it is my turn" protocol running between the server and the clients.
In other words, each client would register its intention to issue requests to the server and the said server would "callback" the next-up client when it is ready. | 2 | 7 | 0 | I'm looking for a way to prevent multiple hosts from issuing simultaneous commands to a Python XMLRPC listener. The listener is responsible for running scripts to perform tasks on that system that would fail if multiple users tried to issue these commands at the same time. Is there a way I can block all incoming requests until the single instance has completed? | Python XMLRPC with concurrent requests | 0 | 0 | 1 | 5,976 |
1,589,743 | 2009-10-19T16:33:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine | 4,219,064 | 5 | false | 1 | 0 | If you develop with web2py your code will run GAE other architectures wihtout changes using any of the 10 supported relational databases. The compatibility layer covers database api (including blobs and listproperty), email, and fetching). | 1 | 6 | 0 | I'm planning an application running on Google App Engine. The only worry I would have is portability. Or just the option to have the app run on a local, private cluster.
I expected an option for Google App Engine applications to run on other systems, a compatibility layer, to spring up. I could imagine a GAE compatible framework utilizing Amazon SimpleDB or CouchDB to offer near 100% compatibility, if needs be through an abstraction layer. I prefer Python though Java would be acceptable.
However, as far as I know, no such facility exists today. Am I mistaken, and if so, where could I find this Google App Engine compatibility layer? If I'm not, the question is "why"? Are there unforetold technical issues or is there just no demand from the market (which would potentially hint at low rates of GAE adoption)?
Regards,
Iwan | Google App Engine compatibility layer | 0 | 0 | 0 | 702 |
1,590,474 | 2009-10-19T19:03:00.000 | 1 | 0 | 0 | 1 | python,windows-xp,scheduled-tasks | 1,591,169 | 7 | false | 0 | 0 | At the risk of not answering your question, can I suggest that if what you have to run is important or even critical then Windows task-Scheduler is not the way to run it.
There are so many awful flows when using the task-scheduler. Lets just start with the obvious ones:
There is no logging. There is no way to investigate what happens when things go wrong. There's no way to distribute work across PCs. There's no fault-tolerance. It's Windows only and the interface is crappy.
If any of the above is a problem for you you need something a bit more sophisticated. My suggestion is that you try Hudson, a.k.a. Sun's continuous integration server.
In addition to all of the above it can do cron-style scheduling, with automatic expiry of logs. It can be set to jabber or email on failure and you can even make it auto diagnose what went wrong with your process if you can make it produce some XML output.
Please please, do not use Windows Scheduled tasks. There are many better things to use, and I speak from experience when I say that I never regretted dumping the built-in scheduler. | 4 | 1 | 0 | I have a Scheduled Task on a WinXP SP2 machine that is set up to run a python script:
Daily
Start time: 12:03 AM
Schedule task daily: every 1 day
Start date: some time in the past
Repeat task: every 5 minutes
Until: Duration 24 hours
Basically, I want the script to run every five minutes, forever.
My problem is the task runs sometime after 23:47 every night (presumably after 23:55) and does not run after that. What am I doing wrong? Alternatively, is there a different method you can suggest other than using Windows scheduled tasks? | Scheduled tasks in Win32 | 0.028564 | 0 | 0 | 3,296 |
1,590,474 | 2009-10-19T19:03:00.000 | 1 | 0 | 0 | 1 | python,windows-xp,scheduled-tasks | 1,590,518 | 7 | false | 0 | 0 | On the first pane (labeled "Task") do you have "Run only if logged on" unchecked and "Enabled (scheduled task runs at specified time" checked?
I've run python jobs via Windows scheduled task with settings very similar to what you show. | 4 | 1 | 0 | I have a Scheduled Task on a WinXP SP2 machine that is set up to run a python script:
Daily
Start time: 12:03 AM
Schedule task daily: every 1 day
Start date: some time in the past
Repeat task: every 5 minutes
Until: Duration 24 hours
Basically, I want the script to run every five minutes, forever.
My problem is the task runs sometime after 23:47 every night (presumably after 23:55) and does not run after that. What am I doing wrong? Alternatively, is there a different method you can suggest other than using Windows scheduled tasks? | Scheduled tasks in Win32 | 0.028564 | 0 | 0 | 3,296 |
1,590,474 | 2009-10-19T19:03:00.000 | 1 | 0 | 0 | 1 | python,windows-xp,scheduled-tasks | 1,590,558 | 7 | false | 0 | 0 | Also, for the past year or so I've seen a common bug where Scheduled Tasks on Server 2003 or XP do not run if either of the following checkboxes are on:
"Don't start the task if the computer is running on batteries"
"Stop the task if battery mode begins"
It seems that Windows gets a little confused if you have a battery (on a laptop) or a UPS (on a server, for example), whether or not your utility power is working.
Also, as a rule I would trim down the time or uncheck the option to "Stop the task if it runs for X minutes" when you're running it so often. | 4 | 1 | 0 | I have a Scheduled Task on a WinXP SP2 machine that is set up to run a python script:
Daily
Start time: 12:03 AM
Schedule task daily: every 1 day
Start date: some time in the past
Repeat task: every 5 minutes
Until: Duration 24 hours
Basically, I want the script to run every five minutes, forever.
My problem is the task runs sometime after 23:47 every night (presumably after 23:55) and does not run after that. What am I doing wrong? Alternatively, is there a different method you can suggest other than using Windows scheduled tasks? | Scheduled tasks in Win32 | 0.028564 | 0 | 0 | 3,296 |
1,590,474 | 2009-10-19T19:03:00.000 | 1 | 0 | 0 | 1 | python,windows-xp,scheduled-tasks | 1,590,873 | 7 | false | 0 | 0 | Until: Duration 24 hours
That shuts it off at the end of the first day.
Remove that, see if it keeps going. It should, and you shouldn't need to install Python in the process. :) | 4 | 1 | 0 | I have a Scheduled Task on a WinXP SP2 machine that is set up to run a python script:
Daily
Start time: 12:03 AM
Schedule task daily: every 1 day
Start date: some time in the past
Repeat task: every 5 minutes
Until: Duration 24 hours
Basically, I want the script to run every five minutes, forever.
My problem is the task runs sometime after 23:47 every night (presumably after 23:55) and does not run after that. What am I doing wrong? Alternatively, is there a different method you can suggest other than using Windows scheduled tasks? | Scheduled tasks in Win32 | 0.028564 | 0 | 0 | 3,296 |
1,591,555 | 2009-10-19T22:47:00.000 | 1 | 1 | 1 | 0 | php,python,multithreading | 1,591,593 | 3 | false | 0 | 0 | If you are on a sane operating system then shared libraries should only be loaded once and shared among all processes using them. Memory for data structures and connection handles will obviously be duplicated, but the overhead of stopping and starting the systems may be greater than keeping things up while idle. If you are using something like gearman it might make sense to let several workers stay up even if idle and then have a persistent monitoring process that will start new workers if all the current workers are busy up until a threshold such as the number of available CPUs. That process could then kill workers in a LIFO manner after they have been idle for some period of time. | 2 | 3 | 0 | Right now I'm running 50 PHP (in CLI mode) individual workers (processes) per machine that are waiting to receive their workload (job). For example, the job of resizing an image. In workload they receive the image (binary data) and the desired size. The worker does it's work and returns the resized image back. Then it waits for more jobs (it loops in a smart way). I'm presuming that I have the same executable, libraries and classes loaded and instantiated 50 times. Am I correct? Because this does not sound very effective.
What I'd like to have now is one process that handles all this work and being able to use all available CPU cores while having everything loaded only once (to be more efficient). I presume a new thread would be started for each job and after it finishes, the thread would stop. More jobs would be accepted if there are less than 50 threads doing the work. If all 50 threads are busy, no additional jobs are accepted.
I am using a lot of libraries (for Memcached, Redis, MogileFS, ...) to have access to all the various components that the system uses and Python is pretty much the only language apart from PHP that has support for all of them.
Can Python do what I want and will it be faster and more efficient than the current PHP solution? | From PHP workers to Python threads | 0.066568 | 0 | 0 | 958
1,591,555 | 2009-10-19T22:47:00.000 | 4 | 1 | 1 | 0 | php,python,multithreading | 1,591,616 | 3 | true | 0 | 0 | Most probably - yes. But don't assume you have to do multithreading. Have a look at the multiprocessing module. It already has an implementation of a Pool included, which is what you could use. And it basically solves the GIL problem (multithreading can run only 1 "standard python code" at any time - that's a very simplified explanation).
It will still fork a process per job, but in a different way than starting it all over again. All the initialisations done and libraries loaded before entering the worker process will be inherited in a copy-on-write way. You won't do more initialisations than necessary and you will not waste memory for the same library/class if you didn't actually make it different from the pre-pool state.
So yes - looking only at this part, python will be wasting less resources and will use a "nicer" worker-pool model. Whether it will really be faster / less CPU-abusing, is hard to tell without testing, or at least looking at the code. Try it yourself.
Added: If you're worried about memory usage, python may also help you a bit, since it has a "proper" garbage collector, while in php GC is not a priority and not that good (and for a good reason too). | 2 | 3 | 1 | Right now I'm running 50 PHP (in CLI mode) individual workers (processes) per machine that are waiting to receive their workload (job). For example, the job of resizing an image. In the workload they receive the image (binary data) and the desired size. The worker does its work and returns the resized image back. Then it waits for more jobs (it loops in a smart way). I'm presuming that I have the same executable, libraries and classes loaded and instantiated 50 times. Am I correct? Because this does not sound very effective.
What I'd like to have now is one process that handles all this work and being able to use all available CPU cores while having everything loaded only once (to be more efficient). I presume a new thread would be started for each job and after it finishes, the thread would stop. More jobs would be accepted if there are less than 50 threads doing the work. If all 50 threads are busy, no additional jobs are accepted.
I am using a lot of libraries (for Memcached, Redis, MogileFS, ...) to have access to all the various components that the system uses and Python is pretty much the only language apart from PHP that has support for all of them.
Can Python do what I want and will it be faster and more efficient than the current PHP solution? | From PHP workers to Python threads | 1.2 | 0 | 958
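A minimal sketch of the Pool model the accepted answer recommends; resize_image is a placeholder for the real job, and do_resize/get_pending_jobs are assumed helpers. The heavy libraries would be imported once at module level so the forked workers inherit them copy-on-write:

from multiprocessing import Pool

def resize_image(job):
    data, size = job
    return do_resize(data, size)  # assumed helper doing the actual work

if __name__ == '__main__':
    pool = Pool(processes=8)       # roughly one worker per core
    jobs = get_pending_jobs()      # assumed: pull pending jobs from the queue
    results = pool.map(resize_image, jobs)
    pool.close()
    pool.join()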
1,591,579 | 2009-10-19T22:52:00.000 | 2 | 0 | 0 | 0 | python,xml,io | 1,591,732 | 9 | false | 0 | 0 | To make this process more robust, you could consider using the SAX parser (that way you don't have to hold the whole file in memory), read & write till the end of tree and then start appending. | 1 | 49 | 0 | I have an XML document that I would like to update after it already contains data.
I thought about opening the XML file in "a" (append) mode. The problem is that the new data will be written after the root closing tag.
How can I delete the last line of a file, then start writing data from that point, and then close the root tag?
Of course I could read the whole file and do some string manipulations, but I don't think that's the best idea.. | How to update/modify an XML file in python? | 0.044415 | 0 | 1 | 153,237 |
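If the file is small enough to hold in memory, a simpler alternative to the string-manipulation and SAX approaches above is to parse, modify and rewrite the document with xml.etree.ElementTree; the file and tag names here are made up:

import xml.etree.ElementTree as ET

tree = ET.parse('data.xml')          # placeholder file name
root = tree.getroot()

item = ET.SubElement(root, 'item')   # new child ends up before the closing root tag
item.text = 'new data'

tree.write('data.xml')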
1,591,762 | 2009-10-19T23:51:00.000 | 1 | 0 | 1 | 0 | python | 1,592,512 | 3 | false | 0 | 0 | You haven't made it completely clear what you need. It sounds like itertools should have what you need. Perhaps what you wish is an itertools.combinations of the itertools.product of the lists in your big list.
@fortran: you can't have a set of sets. You can have a set of frozensets, but depending on what it really means to have duplicates here, that might not be what is needed. | 2 | 2 | 1 | I have a list. It contains x lists, each with y elements.
I want to pair each element with all the other elements, just once, (a,b = b,a)
EDIT: this has been criticized as being too vague.So I'll describe the history.
My function produces random equations and using genetic techniques, mutates and crossbreeds them, selecting for fitness.
After a number of iterations, it returns a list of 12 objects, sorted by fitness of their 'equation' attribute.
Using the 'parallel python' module to run this function 8 times, a list containing 8 lists of 12 objects (each with an equation attribute) each is returned.
Now, within each list, the 12 objects have already been cross-bred with each other.
I want to cross-breed each object in a list with all the other objects in all the other lists, but not with the objects within its own list with which it has already been cross-bred. (whew!) | How to make all combinations of the elements in an array? | 0.066568 | 0 | 0 | 598
1,591,762 | 2009-10-19T23:51:00.000 | 0 | 0 | 1 | 0 | python | 1,591,802 | 3 | false | 0 | 0 | First of all, please don't refer to this as an "array". You are using a list of lists. In Python, an array is a different type of data structure, provided by the array module.
Also, your application sounds suspiciously like a matrix. If you are really doing matrix manipulations, you should investigate the Numpy package.
At first glance your problem sounded like something that the zip() function would solve or itertools.izip(). You should definitely read through the docs for the itertools module because it has various list manipulations and they will run faster than anything you could write yourself in Python. | 2 | 2 | 1 | I have a list. It contains x lists, each with y elements.
I want to pair each element with all the other elements, just once, (a,b = b,a)
EDIT: this has been criticized as being too vague.So I'll describe the history.
My function produces random equations and using genetic techniques, mutates and crossbreeds them, selecting for fitness.
After a number of iterations, it returns a list of 12 objects, sorted by fitness of their 'equation' attribute.
Using the 'parallel python' module to run this function 8 times, a list containing 8 lists of 12 objects (each with an equation attribute) each is returned.
Now, within each list, the 12 objects have already been cross-bred with each other.
I want to cross-breed each object in a list with all the other objects in all the other lists, but not with the objects within its own list with which it has already been cross-bred. (whew!) | How to make all combinations of the elements in an array? | 0 | 0 | 0 | 598
1,593,019 | 2009-10-20T07:40:00.000 | 12 | 1 | 1 | 1 | python,unix,shell,benchmarking | 5,544,739 | 12 | false | 0 | 0 | I usually do a quick time ./script.py to see how long it takes. That does not show you the memory though, at least not as a default. You can use /usr/bin/time -v ./script.py to get a lot of information, including memory usage. | 1 | 105 | 0 | Usually I use shell command time. My purpose is to test if data is small, medium, large or very large set, how much time and memory usage will be.
Any tools for Linux or just Python to do this? | Is there any simple way to benchmark Python script? | 1 | 0 | 0 | 101,454 |
1,593,483 | 2009-10-20T09:33:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine | 1,593,985 | 2 | true | 1 | 0 | You should generally be doing everything within some sort of RequestHandler or the equivalent in your non-WebApp framework. However, if you really insist on being stuck in the early 1990s and writing plain CGI scripts, the environment variables SERVER_NAME and PATH_INFO may be what you want; see a CGI reference for more info. | 1 | 1 | 0 | So, within a webapp.RequestHandler subclass I would use self.request.uri to get the request URI. But, I can't access this outside of a RequestHandler and so no go. Any ideas?
I'm running Python and I'm new at it as well as GAE. | Get the request uri outside of a RequestHandler in Google App Engine (Python) | 1.2 | 0 | 1 | 786 |
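Outside a RequestHandler, in a plain CGI script, those pieces of the request live in the standard CGI environment variables, roughly like this:

import os

host = os.environ.get('SERVER_NAME', '')
path = os.environ.get('PATH_INFO', '/')
query = os.environ.get('QUERY_STRING', '')

# rebuild the request path plus query string, if any
uri = path + ('?' + query if query else '')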
1,594,604 | 2009-10-20T13:27:00.000 | 3 | 0 | 0 | 0 | python,performance,optimization,file-io | 1,594,704 | 7 | false | 0 | 0 | If you are I/O bound, the best way I have found to optimize is to read or write the entire file into/out of memory at once, then operate out of RAM from there on.
With extensive testing I found that my runtime ended up bound not by the amount of data I read from/wrote to disk, but by the number of I/O operations I used to do it. That is what you need to optimize.
I don't know Python, but if there is a way to tell it to write the whole file out of RAM in one go, rather than issuing a separate I/O for each byte, that's what you need to do.
Of course the drawback to this is that files can be considerably larger than available RAM. There are lots of ways to deal with that, but that is another question for another time. | 5 | 3 | 1 | I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | How should I optimize this filesystem I/O bound program? | 0.085505 | 1 | 0 | 2,504 |
1,594,604 | 2009-10-20T13:27:00.000 | 1 | 0 | 0 | 0 | python,performance,optimization,file-io | 1,595,358 | 7 | false | 0 | 0 | Use buffered writes for step 4.
Write a simple function that simply appends the output onto a string, checks the string length, and only writes when you have enough which should be some multiple of 4k bytes. I would say start with 32k buffers and time it.
You would have one buffer per file, so that most "writes" won't actually hit the disk. | 5 | 3 | 1 | I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | How should I optimize this filesystem I/O bound program? | 0.028564 | 1 | 0 | 2,504 |
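A sketch of the buffered-write helper suggested in the answer above (buffered writes for step 4), one instance per output file; the 32 KB threshold is only the starting point the answer recommends timing:

class BufferedWriter(object):
    def __init__(self, path, threshold=32 * 1024):
        self.f = open(path, 'w')
        self.threshold = threshold
        self.pieces = []
        self.size = 0

    def write(self, text):
        self.pieces.append(text)
        self.size += len(text)
        if self.size >= self.threshold:
            self.flush()

    def flush(self):
        # one large write instead of many small ones
        self.f.write(''.join(self.pieces))
        self.pieces = []
        self.size = 0

    def close(self):
        self.flush()
        self.f.close()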
1,594,604 | 2009-10-20T13:27:00.000 | 3 | 0 | 0 | 0 | python,performance,optimization,file-io | 1,595,626 | 7 | true | 0 | 0 | Python already does IO buffering and the OS should handle both prefetching the input file and delaying writes until it needs the RAM for something else or just gets uneasy about having dirty data in RAM for too long. Unless you force the OS to write them immediately, like closing the file after each write or opening the file in O_SYNC mode.
If the OS isn't doing the right thing, you can try raising the buffer size (third parameter to open()). For some guidance on appropriate values given a 100MB/s 10ms latency IO system a 1MB IO size will result in approximately 50% latency overhead, while a 10MB IO size will result in 9% overhead. If its still IO bound, you probably just need more bandwidth. Use your OS specific tools to check what kind of bandwidth you are getting to/from the disks.
Also useful is to check if step 4 is taking a lot of time executing or waiting on IO. If it's executing you'll need to spend more time checking which part is the culprit and optimize that, or split out the work to different processes. | 5 | 3 | 1 | I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | How should I optimize this filesystem I/O bound program? | 1.2 | 1 | 0 | 2,504 |
1,594,604 | 2009-10-20T13:27:00.000 | 2 | 0 | 0 | 0 | python,performance,optimization,file-io | 1,597,062 | 7 | false | 0 | 0 | Can you use a ramdisk for step 4? Low millions sounds doable if the rows are less than a couple of kB or so. | 5 | 3 | 1 | I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | How should I optimize this filesystem I/O bound program? | 0.057081 | 1 | 0 | 2,504 |
1,594,604 | 2009-10-20T13:27:00.000 | 1 | 0 | 0 | 0 | python,performance,optimization,file-io | 1,597,281 | 7 | false | 0 | 0 | Isn't it possible to collect a few thousand rows in ram, then go directly to the database server and execute them?
This would remove the save to and load from the disk that step 4 entails.
If the database server is transactional, this is also a safe way to do it - just have the database begin before your first row and commit after the last. | 5 | 3 | 1 | I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | How should I optimize this filesystem I/O bound program? | 0.028564 | 1 | 0 | 2,504 |
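A minimal sketch of the "batch rows in RAM and execute them directly" idea using the DB-API, shown here with the standard-library sqlite3 driver; the same executemany/commit pattern applies to an Oracle driver. The table name, columns, and batch size are placeholders.

    import sqlite3

    BATCH_SIZE = 5000

    def load_rows(rows, db_path='example.db'):
        # rows: an iterable of already-transformed tuples
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.cursor()
            batch = []
            for row in rows:
                batch.append(row)
                if len(batch) >= BATCH_SIZE:
                    cur.executemany('INSERT INTO target (a, b, c) VALUES (?, ?, ?)', batch)
                    batch = []
            if batch:
                cur.executemany('INSERT INTO target (a, b, c) VALUES (?, ?, ?)', batch)
            conn.commit()   # one transaction: all rows or none
        except Exception:
            conn.rollback()
            raise
        finally:
            conn.close()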
1,594,827 | 2009-10-20T14:01:00.000 | 15 | 0 | 1 | 0 | python,build,distutils | 1,594,902 | 4 | true | 0 | 0 | For pre-deletion, just delete it with distutils.dir_util.remove_tree before calling setup.
For post-delete, I assume you only want to post-delete after selected commands. Subclass the respective command, override its run method (to invoke remove_tree after calling the base run), and pass the new command into the cmdclass dictionary of setup. | 1 | 63 | 0 | How could I make my setup.py pre-delete and post-delete the build directory? | Cleaning build directory in setup.py | 1.2 | 0 | 0 | 61,211 |
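A sketch of both suggestions in one setup.py, using the distutils APIs the answer names; the project metadata is made up, and the same idea carries over to setuptools.

    import os
    from distutils import dir_util
    from distutils.core import setup
    from distutils.command.build import build

    BUILD_DIR = 'build'

    # pre-delete: wipe the build directory before setup() does anything
    if os.path.exists(BUILD_DIR):
        dir_util.remove_tree(BUILD_DIR)

    class cleaning_build(build):
        """build command that also removes the build directory when it finishes"""
        def run(self):
            build.run(self)
            if os.path.exists(BUILD_DIR):
                dir_util.remove_tree(BUILD_DIR)

    setup(
        name='example',          # placeholder metadata
        version='0.1',
        py_modules=['example'],
        cmdclass={'build': cleaning_build},
    )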
1,596,911 | 2009-10-20T19:52:00.000 | 2 | 0 | 0 | 0 | python,html,django,ms-word | 1,597,120 | 6 | false | 1 | 0 | It depends on how much formatting and how many images you're dealing with. I do one of a couple of things:
Google Docs: Probably the closest you'll get to the original formatting and usable HTML.
Markdown: Abandon formatting. Paste it into a plain text editor, run it through Markdown and fix the rest by hand. | 2 | 13 | 0 | Every now and then I receive a Word Document that I have to display as a web page. I'm currently using Django's flatpages to achieve this by grabbing the html content generated by MS Word. The generated html is quite messy. Is there a better way that can generate very simple html to solve this issue using Python? | How do you convert a Word Document into very simple html in Python? | 0.066568 | 0 | 0 | 26,350 |
1,596,911 | 2009-10-20T19:52:00.000 | 2 | 0 | 0 | 0 | python,html,django,ms-word | 8,174,432 | 6 | false | 1 | 0 | Word 2010 has the ability to "save as filtered web page". This will eliminate the overwhelming majority of the HTML that Word inserts. | 2 | 13 | 0 | Every now and then I receive a Word Document that I have to display as a web page. I'm currently using Django's flatpages to achieve this by grabbing the html content generated by MS Word. The generated html is quite messy. Is there a better way that can generate very simple html to solve this issue using Python? | How do you convert a Word Document into very simple html in Python? | 0.066568 | 0 | 0 | 26,350 |
1,597,093 | 2009-10-20T20:27:00.000 | 0 | 0 | 1 | 0 | python,proxy,download,multithreading,harvest | 1,597,142 | 3 | false | 0 | 1 | Is this something you can't just do by passing a URL to newly spawned threads and calling urllib2.urlopen in each one, or is there a more specific requirement? | 1 | 0 | 0 | What would be the best library for multithreaded harvesting/downloading with multiple proxy support? I've looked at Tkinter, it looks good but there are so many, does anyone have a specific recommendation? Many thanks! | Multithreaded Downloading Through Proxies In Python | 0 | 0 | 0 | 940 |
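One possible shape for that, as a Python 2 sketch built on urllib2 (urllib.request in Python 3); the URLs and proxy addresses are placeholders and there is no error handling.

    import threading
    import urllib2   # Python 2; use urllib.request on Python 3

    def fetch(url, proxy):
        opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
        data = opener.open(url, timeout=30).read()
        print('%s: %d bytes via %s' % (url, len(data), proxy))

    urls = ['http://example.com/a', 'http://example.com/b']        # placeholders
    proxies = ['http://127.0.0.1:8080', 'http://127.0.0.1:8081']   # placeholders

    threads = [threading.Thread(target=fetch, args=pair)
               for pair in zip(urls, proxies)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()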
1,597,833 | 2009-10-20T23:18:00.000 | 6 | 0 | 0 | 0 | python,bots | 1,597,878 | 4 | false | 0 | 0 | It doesn't have to be Python, I've seen it done in PHP and Perl, and you can probably do it in many other languages.
The general approach is:
1) You give your app a URL and it makes an HTTP request to that URL. I think I have seen this done with php/wget. Probably many other ways to do it.
2) Scan the HTTP response for other URLs that you want to "click" (really, sending HTTP requests to them), and then send requests to those. Parsing the links usually requires some understanding of regular expressions (if you are not familiar with regular expressions, brush up on it - it's important stuff ;)). | 1 | 17 | 0 | I simply want to create an automatic script that can run (preferably) on a web-server, and simply 'clicks' on an object of a web page. I am new to Python or whatever language this would be used for so I thought I would go here to ask where to start! This may seem like I want the script to scam advertisements or do something illegal, but it's simply to interact with another website. | Where do I start with a web bot? | 1 | 0 | 1 | 23,157 |
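A bare-bones illustration of steps 1 and 2 in Python 2; the start URL is a placeholder and the regular expression is deliberately crude — a real bot would be better served by an HTML parser.

    import re
    import urllib2   # urllib.request in Python 3

    def get_links(url):
        html = urllib2.urlopen(url).read()
        # very rough href extraction; fine for a first experiment only
        return re.findall(r'href="([^"]+)"', html)

    start_url = 'http://example.com/'        # placeholder
    for link in get_links(start_url):
        print(link)                          # or urllib2.urlopen(link) to "click" it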
1,598,445 | 2009-10-21T02:32:00.000 | 0 | 0 | 1 | 0 | python,visual-studio,visual-studio-2008,ironpython | 39,940,667 | 3 | false | 0 | 0 | Only the print command is able to read such functions properly in Python, so a tab command can be used along with the print command to get a space when one uses a tab function. | 3 | 3 | 0 | I have some IronPython scripts that are embedded in a C# project, and it would be convenient to be able to edit them in the VS editor. VS evidently knows about Python because it provides syntax coloring for it. Unfortunately however the editor uses tab characters for indentation, whereas I want spaces. Is there a setting to change this? I don't see a heading for Python under Tools/Options/TextEditor. | How to use tabs-as-spaces in Python in Visual Studio 2008? | 0 | 0 | 0 | 1,336
1,598,445 | 2009-10-21T02:32:00.000 | 5 | 0 | 1 | 0 | python,visual-studio,visual-studio-2008,ironpython | 1,598,477 | 3 | true | 0 | 0 | Here is one way to do it, probably not the best. On the Tools -> Text Editor -> File extension part of the Options menu add a .py extension, and set a type of editor. You don't get a Python editor type, but you can pick one of the ones you use less often (for me this would be VB.net), and then make sure that the tab settings for that language fit your needs. Syntax highlighting didn't seem to be affected for me. | 3 | 3 | 0 | I have some IronPython scripts that are embedded in a C# project, and it would be convenient to be able to edit them in the VS editor. VS evidently knows about Python because it provides syntax coloring for it. Unfortunately however the editor uses tab characters for indentation, whereas I want spaces. Is there a setting to change this? I don't see a heading for Python under Tools/Options/TextEditor. | How to use tabs-as-spaces in Python in Visual Studio 2008? | 1.2 | 0 | 0 | 1,336
1,598,445 | 2009-10-21T02:32:00.000 | 0 | 0 | 1 | 0 | python,visual-studio,visual-studio-2008,ironpython | 1,598,460 | 3 | false | 0 | 0 | Tools -> Options -> Text Editor -> Choose Language ( you might need to choose All Languages )
Click on Tabs and set it how you want it.
Edit: Looks like you can add your own file types in the same area and set Tab setting specifically for them. | 3 | 3 | 0 | I have some IronPython scripts that are embedded in a C# project, and it would be convenient to be able to edit them in the VS editor. VS evidently knows about Python because it provides syntax coloring for it. Unfortunately however the editor uses tab characters for indentation, whereas I want spaces. Is there a setting to change this? I don't see a heading for Python under Tools/Options/TextEditor. | How to use tabs-as-spaces in Python in Visual Studio 2008? | 0 | 0 | 0 | 1,336 |
1,599,060 | 2009-10-21T06:34:00.000 | 7 | 0 | 1 | 0 | python,datetime,timestamp,utc | 1,715,510 | 2 | false | 0 | 0 | Actually, ntplib computes this offset accounting for round-trip delay.
It's available through the "offset" attribute of the NTP response. Therefore the result should not vary wildly. | 1 | 13 | 0 | I wrote a desktop application and was using datetime.datetime.utcnow() for timestamping, however I've recently noticed that some people using the application get wildly different results than I do when we run the program at the same time. Is there any way to get the UTC time locally without using urllib to fetch it from a website? | How can I get an accurate UTC time with Python? | 1 | 0 | 0 | 12,794
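A small example of reading that offset with the third-party ntplib package (pip install ntplib); the server name is just the public NTP pool, not something the answer specifies.

    import ntplib
    from datetime import datetime, timedelta

    response = ntplib.NTPClient().request('pool.ntp.org', version=3)

    # response.offset is the estimated local-clock error in seconds,
    # already corrected for the request's round-trip delay
    corrected_utc = datetime.utcnow() + timedelta(seconds=response.offset)
    print('uncorrected:', datetime.utcnow())
    print('corrected:  ', corrected_utc)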
1,599,962 | 2009-10-21T10:29:00.000 | 26 | 0 | 1 | 0 | python | 1,599,973 | 1 | true | 0 | 0 | You are looking for sys.excepthook:
sys.excepthook(type, value, traceback)
This function prints out a given traceback and exception to sys.stderr.
When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook. | 1 | 19 | 0 | For an uncaught exception, Python by default prints a stack trace, the exception itself, and terminates. Is anybody aware of a way to tailor this behaviour on the program level (other than establishing my own global, catch-all exception handler), so that the stack trace is omitted? I would like to toggle in my app whether the stack trace is printed or not. | Configuring Python's default exception handling | 1.2 | 0 | 0 | 3,327 |
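A sketch of toggling the behaviour by assigning sys.excepthook; the SHOW_TRACEBACK flag stands in for whatever application-level setting would drive the toggle.

    import sys
    import traceback

    SHOW_TRACEBACK = False   # hypothetical application setting

    def app_excepthook(exc_type, exc_value, exc_tb):
        if SHOW_TRACEBACK:
            sys.__excepthook__(exc_type, exc_value, exc_tb)   # default behaviour
        else:
            # print only "ExceptionType: message", no stack trace
            sys.stderr.write(''.join(
                traceback.format_exception_only(exc_type, exc_value)))

    sys.excepthook = app_excepthook
    raise ValueError('demo')   # prints just "ValueError: demo"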
1,601,308 | 2009-10-21T14:43:00.000 | 1 | 1 | 0 | 0 | python,unit-testing,sqlite | 1,601,338 | 5 | false | 0 | 0 | Use some sort of database configuration setting to choose which database to use, and select the in-memory database during unit tests. | 2 | 1 | 0 | I'm using a config file to get the info for my database. It always gets the hostname and then figures out what database options to use from this config file. I want to be able to tell if I'm inside a unittest here and use the in memory sqlite database instead. Is there a way to tell at that point whether I'm inside a unittest, or will I have to find a different way? | Is there a way to tell whether a function is getting executed in a unittest? | 0.039979 | 0 | 0 | 108
1,601,308 | 2009-10-21T14:43:00.000 | 0 | 1 | 0 | 0 | python,unit-testing,sqlite | 1,601,336 | 5 | false | 0 | 0 | This is kind of brute force but it works. Have an environment variable UNIT_TEST that your code checks, and set it inside your unit test driver. | 2 | 1 | 0 | I'm using a config file to get the info for my database. It always gets the hostname and then figures out what database options to use from this config file. I want to be able to tell if I'm inside a unittest here and use the in memory sqlite database instead. Is there a way to tell at that point whether I'm inside a unittest, or will I have to find a different way? | Is there a way to tell whether a function is getting executed in a unittest? | 0 | 0 | 0 | 108
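A minimal sketch of the environment-variable approach; get_db_settings() and the returned dictionaries are invented for illustration — the real code would read the hostname-based config file instead.

    import os

    def get_db_settings():
        if os.environ.get('UNIT_TEST'):
            return {'engine': 'sqlite3', 'database': ':memory:'}
        # otherwise fall through to the normal hostname-based config lookup
        return {'engine': 'mysql', 'database': 'production'}   # placeholder

    # in the unit test driver, before any database code runs:
    os.environ['UNIT_TEST'] = '1'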