Q:
py2exe - generated executable freezes when connecting to socket
Pardon my ignorance as I'm still a beginner in coding.
I'm trying to convert a python script I wrote to a Windows executable program using py2exe. However, though I am able to successfully convert the script, the executable doesn't seem to be fully functional.
After much debugging, I have isolated the cause; the following code seems to be the problem:
import socket

host = str(raw_input('Enter Host IP Address: '))
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect((host, 5000))
The problem does not occur when the script is executed from PyDev itself; the script runs without problems. The Windows executable, which is a console application, just hangs when trying to connect to another host.
Is this a known issue or am I doing something wrong? Any help is much appreciated.
A:
Are you able to input the IP address? Reading that thread, it seems that py2exe requires a special argument to launch a console. Otherwise, raw_input tries to read from standard input and hangs/crashes because it does not find anything.
Given the age of the thread, I checked the py2exe docs: you might want to list your script in the console attribute.
I really think that the behavior is related to raw_input, and that it is not caused by the socket operation.
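For reference, here is a minimal py2exe setup.py sketch along the lines the answer suggests (the script name client.py is a placeholder, not from the question):
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

# console=[...] builds a console application, so raw_input can read stdin
setup(console=['client.py'])

Running python setup.py py2exe then produces the executable under dist/.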
Tags: py2exe, python
Source: stackoverflow_0000931851_py2exe_python.txt

Q:
AppEngine: Maintaining DataStore Consistency When Creating Records
I've hit a small dilemma! I have a handler called vote; when it is invoked, it sets a user's vote to whatever they have picked. To remember what options they previously picked, I store a VoteRecord object which details what their current vote is set to.
Of course, the first time they vote, I have to create the object and store it. But successive votes should just change the value of the existing VoteRecord. And here comes the problem: under some circumstances two VoteRecords can be created. It's rare (it has only happened once in all 500 votes we've seen so far) but still bad when it does.
The issue happens because two separate handlers both do essentially this:
query = VoteRecord.all().filter('user =', session.user).filter('poll =', poll)

if query.count(1) > 0:
    vote = query[0]

    poll.votes[vote.option] -= 1
    poll.votes[option] += 1
    poll.put()

    vote.option = option
    vote.updated = datetime.now()
    vote.put()
else:
    vote = VoteRecord()
    vote.user = session.user
    vote.poll = poll
    vote.option = option
    vote.put()

    poll.votes[option] += 1
    poll.put()

    session.user.votes += 1
    session.user.xp += 3
    session.user.put()

incr('votes')
My question is: what is the most effective and fastest way to handle these requests while ensuring that no request is lost and no request creates two VoteRecord objects?
A:
The easiest way to do this is to use key names for your vote objects, and use Model.get_or_insert. First, come up with a naming scheme for your key names - naming it after the poll is a good idea - and then do a get_or_insert to fetch or create the relevant entity:
vote = VoteRecord.get_or_insert(pollname, parent=session.user, user=session.user, poll=poll, option=option)
if vote.option != option:
    # Record already existed; we need to update it
    vote.option = option
    vote.put()
A:
The issue is this part:
if vote.count(1) == 0:
    obj = VoteRecord()
    obj.user = user
    obj.option = option
    obj.put()
Without a transaction, your code could run in this order in two interpreter instances:
if vote.count(1) == 0:      # instance 1
    obj = VoteRecord()      # instance 1
    obj.user = user         # instance 1

if vote.count(1) == 0:      # instance 2 -- the count still sees no record
    obj = VoteRecord()      # instance 2
    obj.user = user         # instance 2
    obj.option = option     # instance 2
    obj.put()               # instance 2

    obj.option = option     # instance 1, resuming
    obj.put()               # instance 1
Or any weird combination thereof. The problem is that the count test runs again before the put has occurred, so the second thread goes through the first part of the conditional instead of the second.
You can fix this by putting the code in a function and then using
db.run_in_transaction()
to run the function.
The problem is that you seem to be relying on the count of objects returned by a query for the decision logic that needs to go in the transaction. If you read the Google I/O talks or look at the group, you'll see that this is not recommended, because you can't transactionalize a query. Instead, you should store the count as an entity value somewhere, query for it outside of the transaction function, and then pass the key for that entity to your transaction function.
Here's an example of a transaction function that checks an entity property. It's passed the key as a parameter:
def checkAndLockPage(pageKey):
    page = db.get(pageKey)
    if page.locked:
        return False
    else:
        page.locked = True
        page.put()
        return True
Only one user at a time can lock this entity, and there will never be any duplicate locks.
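Putting the pieces together, here is a minimal hedged sketch (the helper name cast_vote and passing the vote's key in are assumptions, not from the answers) of running the update through db.run_in_transaction:
from google.appengine.ext import db

def cast_vote(vote_key, option):
    def txn():
        vote = db.get(vote_key)   # re-read inside the transaction
        if vote.option != option:
            vote.option = option
            vote.put()            # the read and write commit atomically
    db.run_in_transaction(txn)

If a concurrent request modifies the same entity group first, the transaction is retried, so no update is silently lost.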
Tags: consistency, google_app_engine, google_cloud_datastore, python, transactions
Source: stackoverflow_0000522586_consistency_google_app_engine_google_cloud_datastore_python_transactions.txt

Q:
Search functionality for Django
I'm developing a web app using Django, and I'll need to add search functionality soon. Search will be implemented for two models, one being an extension of the auth user class and another with the fields name, tags, and description. So I guess nothing too scary here in the context of searching text.
For development I am using SQLite and as no database specific work has been done, I am at liberty to use any database in production. I'm thinking of choosing between PostgreSQL or MySQL.
I have gone through several posts on the Internet about search solutions; nevertheless, I'd like to get opinions for my simple case. Here are my questions:
is full-text search overkill in my case?
is it better to rely on the database's full-text search support? If so, which database should I use?
should I use an external search library, such as Whoosh, Sphinx, or Xapian? If so, which one?
EDIT:
tags is a TagField (from the django-tagging app) that sits on an m2m relationship. description is a field that holds HTML and has a max_length of 1024 bytes.
A:
If that field tags means what I think it means, i.e. you plan to store a string which concatenates multiple tags for an item, then you might need full-text search on it... but it's a bad design; rather, you should have a many-many relationship between items and a tags table (in another table, ItemTag or something, with 2 foreign keys that are the primary keys of the items table and tags table).
I can't tell whether you need full-text search on description as I have no indication of what it is -- nor whether you need the reasonable but somewhat rudimentary full-text search that MySQL 5.1 and PostgreSQL 8.3 provide, or the more powerful one in e.g. sphinx... maybe talk a bit more about the context of your app and why you're considering full-text search?
Edit: so it seems the only possible need for full-text search might be on description, and that looks like it's probably limited enough that either MySQL 5.1 or PostgreSQL 8.3 will serve it well. Me, I have a sweet spot for PostgreSQL (even though I'm reasonably expert at MySQL too), but that's a general preference, not specifically connected to full-text search issues. This blog does provide one reason to prefer PostgreSQL: you can have full-text search and still be transactional, while in MySQL full-text indexing only works on MyISAM tables, not InnoDB [[except if you add Sphinx, of course]] (also see this follow-on for a bit more on full-text search in PostgreSQL and Lucene). Still, there are of course other considerations involved in picking a DB, and I don't think you'll be doing terribly with either (unless having to add Sphinx for full-text plus transactions is a big problem).
A:
Django has full text searching support in its QuerySet filters. Right now, if you only have two models that need searching, just make a view that searches the fields on both:
search_string = "+Django -jazz Python"
first_models = FirstModel.objects.filter(headline__search=search_string)
second_models = SecondModel.objects.filter(headline__search=search_string)
You could further filter them to make sure the results are unique, if necessary.
Additionally, there is a regex filter that may be even better for dealing with your html fields and tags since the regex can instruct the filter on exactly how to process any delimiters or markup.
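As a backend-agnostic alternative for a case this small, plain icontains lookups combined with Q objects also work; a sketch (the Item model and its fields are assumptions, not from the question):
from django.db.models import Q

def search_items(term):
    # case-insensitive substring match on both fields, on any database backend
    return Item.objects.filter(
        Q(name__icontains=term) | Q(description__icontains=term)
    )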
A:
Whether you need an external library depends on your needs. How much traffic are we talking about? The external libraries are generally better when it comes to performance, but as always there are advantages and disadvantages. I am using Sphinx with django-sphinx plugin, and I would recommend it if you will be doing a lot of searching.
A:
Haystack looks promising. And it supports Whoosh on the back end.
Tags: database, django, full_text_search, python, search
Source: stackoverflow_0000932255_database_django_full_text_search_python_search.txt

Q:
How to parse angular values using regular expressions
I have very little experience using regular expressions and I need to parse an angle value expressed as bearings, using regular expressions, example:
"N45°20'15.3"E"
Which represents:
45 degrees, 20 minutes with 15.3 seconds, located at the NE quadrant.
The restrictions are:
The first character can be "N" or "S"
The last character can be "E" or "W"
0 <= degrees <= 59
0 <= minutes <= 59
0 <= seconds < 60 (this part can be omitted)
Python preferably, or any other language.
Thanks
A:
Try this regular expression:
^([NS])([0-5]?\d)°([0-5]?\d)'(?:([0-5]?\d)(?:\.\d)?")?([EW])$
It matches any string that …
^([NS]) begins with N or S
([0-5]?\d)° followed by a degree value, either a single digit between 0 and 9 (\d) or two digits with the first between 0 and 5 ([0-5]) and the second between 0 and 9, thus between 0 and 59, followed by °
([0-5]?\d)' followed by a minutes value (again between 0 and 59) and '
(?:([0-5]?\d)(?:\.\d)?")? optionally followed by a seconds value and " sign, seconds value between 0 and 59 with an optional additional decimal point, and
([EW])$ ends with either E or W.
If you don’t want to allow the values under ten to have preceding zeros, change the [0-5] to [1-5].
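A quick usage sketch of the pattern above (Python 2, to match the era of the thread):
# -*- coding: utf-8 -*-
import re

bearing = re.compile(ur"^([NS])([0-5]?\d)°([0-5]?\d)'(?:([0-5]?\d)(?:\.\d)?\")?([EW])$")

m = bearing.match(u"N45°20'15.3\"E")
if m:
    print m.groups()  # (u'N', u'45', u'20', u'15', u'E')
    # note: the ".3" falls into a non-capturing group, so group 4 is u'15'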
A:
A pattern you could use:
pat = r"^([NS])(\d+)°(\d+)'([\d.]*)\"?([EW])$"
one way to use it:
import re

r = re.compile(pat)
m = r.match(thestring)
if m is None:
    print "%r does not match!" % thestring
else:
    print "%r matches: %s" % (thestring, m.groups())
as you'll notice, upon a match, m.groups() gives you the various parts of thestring matching each parentheses-enclosed "group" in pat -- a letter that's N or S, then one or more digits for the degrees, etc. I imagine that's what you mean by "parsing" here.
Tags: angle, python, regex
Source: stackoverflow_0000932796_angle_python_regex.txt

Q:
Expression up to comment or end of line
Although this question is similar to this thread, I think I might be doing something wrong when constructing the code with the regular expression.
I want to match anything in a line up to a comment ("#") or the end of the line (if it doesn't have a comment).
The regex I am using is: (.*)(#|$)
(.*) = Everything
(#|$) = comment or end of line
The code:
import re

OPTION = re.compile(r'(?P<value>.*)(#|$)')

file = open('file.txt')
lines = file.read()

for line in lines.split('\n'):
    get_match = OPTION.match(line)
    if get_match:
        line_value = get_match.group('value')
        print "Match= %s" % line_value
The above works but does not strip out the comment.
If the file has a line like:
this is a line # and this is a comment
I still get the whole line when running the code.
Am I missing additional values/information in the regular expression, or do I need to change the code?
A:
The * is greedy (consumes as much of the string as it can) and is thus consuming the entire line (past the # and to the end-of-line). Change ".*" to ".*?" and it will work.
See the Regular Expression HOWTO for more information.
A:
Here's the correct regex to do something like this:
([^#]*)(#.*)?
Also, why don't you just use
file = open('file.txt')
for line in file:
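Putting those two suggestions together, a sketch (the file name is taken from the question):
import re

OPTION = re.compile(r'^([^#]*)')

for line in open('file.txt'):
    # group(1) is everything before any '#'; rstrip() drops trailing whitespace
    print "Match= %s" % OPTION.match(line).group(1).rstrip()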
A:
@Can, @Benji and @ ΤΖΩΤΖΙΟΥ give three excellent solutions, and it's fun to time them to see how fast they match (that's what timeit is for -- fun meaningless micro-benchmarks;-). E.g.:
$ python -mtimeit -s'import re; r=re.compile(r"([^#]*)(#.*)?"); s="this is a line # and this is a comment"' 'm=r.match(s); g=m.group(1)'
100000 loops, best of 3: 2.02 usec per loop
vs
$ python -mtimeit -s'import re; r=re.compile(r"^(.*?)(?:#|$)"); s="this is a line # and this is a comment"' 'm=r.match(s); g=m.group(1)'
100000 loops, best of 3: 4.19 usec per loop
vs
$ python -mtimeit -s'import re; r=re.compile(r"(.*?)(#|$)"); s="this is a line # and this is a comment"' 'm=r.match(s); g=m.group(1)'
100000 loops, best of 3: 4.37 usec per loop
and the winner is... a mix of the patterns!-)
$ python -mtimeit -s'import re; r=re.compile(r"(.*?)(#.*)?"); s="this is a line # and this is a comment"' 'm=r.match(s); g=m.group(1)'
1000000 loops, best of 3: 1.73 usec per loop
Disclaimer: of course if this were a real benchmarking exercise and speed did truly matter, one would try on many different and relevant values for s, on tests beyond such a microbenchmark, etc, etc. But, I still find timeit an inexhaustible source of fun!-)
A:
Use this regular expression:
^(.*?)(?:#|$)
With the non-greedy modifier (?), the .* expression will match as soon as either a hash sign or end-of-line is reached. The default is to match as much as possible, and that is why you always got the whole line.
Tags: python, regex
Source: stackoverflow_0000932783_python_regex.txt

Q:
skip over HTML tags in Regular Expression patterns
I'm trying to write a regular expression pattern (in python) for reformatting these template engine files.
Basically the scheme looks like this:
[$$price$$]
{
<h3 class="price">
$12.99
</h3>
}
I'm trying to make it remove any extra tabs/spaces/newlines so it should look like this:
[$$price$$]{<h3 class="price">$12.99</h3>}
I wrote this: (\t|\s)+? which works, except it matches within the HTML tags, so h3 becomes h3class, and I am unable to figure out how to make it ignore anything inside the tags.
A:
Using regular expressions to deal with HTML is extremely error-prone; they're simply not the right tool.
Instead, use an HTML/XML-aware library (such as lxml) to build a DOM-style object tree; modify the text segments within the tree in place, and generate your output again using said library.
A:
Try this:
\r?\n[ \t]*
EDIT: The idea is to remove all newlines (either Unix: "\n", or Windows: "\r\n") plus any horizontal whitespace (TABs or spaces) that immediately follow them.
A:
Alan,
I have to agree with Charles that the safest way is to parse the HTML, then work on the Text nodes only. Sounds overkill but that's the safest.
On the other hand, there is a way in regex to do that as long as you trust that the HTML code is correct (i.e. does not include invalid < and > in the tags as in: <a title="<this is a test>" href="look here">...)
Then, you know that any text has to be between > and < except at the very beginning and end (if you just get a snapshot of the page, otherwise there is the HTML tag minimum.)
So... You still need two regex's: find the text '>[^<]+<', then apply the other regex as you mentioned.
The other way is to use an alternation with something like this (not tested!):
'(<[^>]*>)|([\r\n\f ]+)'
This will either find a tag or spaces. When you find a tag, do not replace, if you don't find a tag, replace with an empty string.
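A short sketch of that alternation idea (the answer marked it untested; \t is added to the whitespace class here so tabs are stripped too, per the question):
import re

def collapse(template):
    # a tag match (group 1) is kept verbatim; a whitespace run is dropped
    return re.sub(r'(<[^>]*>)|([\r\n\f\t ]+)',
                  lambda m: m.group(1) or '', template)

print collapse('[$$price$$]\n{\n\t<h3 class="price">\n\t\t$12.99\n\t</h3>\n}')
# prints: [$$price$$]{<h3 class="price">$12.99</h3>}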
Tags: python, regex
Source: stackoverflow_0000732465_python_regex.txt

Q:
Python Regex combined with string substitution?
I'm wondering if it's possible to use string substitution along with the Python re module?
For example I'm using optparse and have a variable named options.hostname which will change each time the user executes the script.
I have the following regex matching 3 strings in each line of the log file.
match = re.search (r'^\[(\d+)\] (SERVICE NOTIFICATION:).*(\bCRITICAL)', line)
I want to be able to perform string substitution by matching options.hostname as the last match group; however, I can't get any variations to work. Is this possible?
match = re.search (r'^\[(\d+)\] (SERVICE NOTIFICATION:).*(\bCRITICAL).*(s%), line) % options.hostname
A:
match = re.search(r'^\[(\d+)\] (SERVICE NOTIFICATION:).*(\bCRITICAL).*(%s)'
                  % options.hostname, line)
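One hedged addition, not in the original answer: if the hostname can contain regex metacharacters (the dots in an IP address, for instance), escape it first with re.escape. Here options and line are the variables from the question:
import re

pattern = (r'^\[(\d+)\] (SERVICE NOTIFICATION:).*(\bCRITICAL).*(%s)'
           % re.escape(options.hostname))  # escape dots etc. in the hostname
match = re.search(pattern, line)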
Tags: python, regex
Source: stackoverflow_0000933046_python_regex.txt

Q:
Python module globals versus __init__ globals
Apologies for a somewhat confused Python newbie question. Let's say I have a module called animals.py:
globvar = 1

class dog:
    def bark(self):
        print globvar

class cat:
    def miaow(self):
        print globvar
What is the difference between this and
class dog:
    def __init__(self):
        global globvar

    def bark(self):
        print globvar

class cat:
    def miaow(self):
        print globvar
Assuming I always instantiate a dog first?
I guess my question is: is there any difference? In the second example, does instantiating the dog create a module-level globvar just like in the first example, one that will behave the same and have the same scope?
A:
global doesn't create a new variable, it just states that this name should refer to a global variable instead of a local one. Usually assignments to variables in a function/class/... refer to local variables. For example take a function like this:
def increment(n):
    # this creates a new local m
    m = n+1
    return m
Here a new local variable m is created, even if there might be a global variable m already existing. This is what you usually want since some function call shouldn't unexpectedly modify variables in the surrounding scopes. If you indeed want to modify a global variable and not create a new local one, you can use the global keyword:
def increment(n):
    global increment_calls
    increment_calls += 1
    return n+1
In your case, global in the constructor doesn't create any variables; further attempts to access globvar fail:
>>> import animals
>>> d = animals.dog()
>>> d.bark()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "animals.py", line 7, in bark
    print globvar
NameError: global name 'globvar' is not defined
But if you would actually assign a value to globvar in the constructor, a module-global variable would be created when you create a dog:
class dog:
    def __init__(self):
        global globvar
        globvar = 1
...
Execution:
>>> import animals
>>> d = animals.dog()
>>> d.bark()
1
>>> print animals.globvar
1
A:
No, the global statement only matters when you're assigning to a global variable within a method or function. So that __init__ is irrelevant -- it does not create the global, because it's not assigning anything to it.
Tags: global, python
Source: stackoverflow_0000933042_global_python.txt

Q:
XML characters in python xml.dom
I am working on producing an XML document from Python. We are using the xml.dom package to create the XML document. We are having a problem where we want to produce the character φ, whose entity is &#x03c6;. However, when we put that entity string in a text node and call toxml() on it, the ampersand gets escaped and we get &amp;#x03c6;. Our current solution is to use saxutils.unescape() on the result of toxml(), but this is not ideal because we will have to parse the XML twice.
Is there some way to get the dom package to recognize "&#x03c6;" as an XML character reference?
A:
I think you need to use a Unicode string with \u03c6 in it, because the .data field of a text node is supposed (as far as I understand) to be "parsed" data, not including XML entities (whence the escaped ampersand when made back into XML). If you want to ensure that, on output, non-ASCII characters are expressed as entities, you could do:
import codecs

def ent_replace(exc):
    if isinstance(exc, (UnicodeEncodeError, UnicodeTranslateError)):
        s = []
        for c in exc.object[exc.start:exc.end]:
            s.append(u'&#x%4.4x;' % ord(c))
        return (''.join(s), exc.end)
    else:
        raise TypeError("can't handle %s" % exc.__name__)

codecs.register_error('ent_replace', ent_replace)
and use x.toxml().encode('ascii', 'ent_replace').
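An end-to-end sketch of that approach (Python 2 minidom; the element names are placeholders), assuming the ent_replace handler above has been registered:
from xml.dom.minidom import Document

doc = Document()
root = doc.appendChild(doc.createElement('root'))
root.appendChild(doc.createTextNode(u'phi: \u03c6'))  # a real Unicode character

print doc.toxml().encode('ascii', 'ent_replace')
# <?xml version="1.0" ?><root>phi: &#x03c6;</root>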
Tags: dom, python, xml
Source: stackoverflow_0000933004_dom_python_xml.txt

Q:
Scope, using functions in current module
I know this must be a trivial question, but I've tried many different ways and searched quite a bit for a solution: how do I create and reference subfunctions in the current module?
For example, I am writing a program to parse through a text file, and for each of the 300 different names in it, I want to assign a category.
There are 300 of these, and I have a list of them structured to create a dict, so of the form lookup[key]=value (bonus question: any more efficient or sensible way to do this than a massive dict?).
I would like to keep all of this in the same module, but with the functions (dict initialisation, etc.) at the end of the file, so I don't have to scroll down 300 lines to see the code, i.e. laid out as in the example below.
When I run it as below, I get the error 'initlookups is not defined'. When I structure it so that it is initialisation, then function definition, then function use, there's no problem.
I'm sure there must be an obvious way to initialise the functions and associated dict without keeping the code inline, but I have tried quite a few so far without success. I can put it in an external module and import this, but would prefer not to for simplicity.
What should I be doing in terms of module structure? Is there any better way than using a dict to store this lookup table (it is 300 unique text keys mapping onto approx 10 categories)?
Thanks,
Brendan
import ..... (initialisation code,etc )

initLookups() # **Should create the dict - How should this be referenced?**
print getlookup(KEY) # **How should this be referenced?**

def initLookups():
    global lookup
    lookup={}
    lookup["A"]="AA"
    lookup["B"]="BB"
    (etc etc etc....)

def getlookup(value)
    if name in lookup.keys():
        getlookup=lookup[name]
    else:
        getlookup=""
    return getlookup
A:
A function needs to be defined before it can be called. If you want to have the code that needs to be executed at the top of the file, just define a main function and call it from the bottom:
import sys

def main(args):
    pass

# All your other function definitions here

if __name__ == '__main__':
    exit(main(sys.argv[1:]))
This way, whatever you reference in main will have been parsed and is hence known already. The reason for testing __name__ is that in this way the main method will only be run when the script is executed directly, not when it is imported by another file.
Side note: a dict with 300 keys is by no means massive, but you may want to either move the code that fills the dict to a separate module, or (perhaps more fancy) store the key/value pairs in a format like JSON and load it when the program starts.
A:
Here's a more pythonic ways to do this. There aren't a lot of choices, BTW.
A function must be defined before it can be used. Period.
However, you don't have to strictly order all functions for the compiler's benefit. You merely have to put your execution of the functions last.
import # (initialisation code,etc )

def initLookups(): # Definitions must come before actual use
    lookup={}
    lookup["A"]="AA"
    lookup["B"]="BB"
    (etc etc etc....)
    return lookup

# Any functions initLookups uses can be defined here,
# as long as they're findable in the same module.

if __name__ == "__main__": # Use comes last
    lookup = initLookups()
    print lookup.get("Key","")
Note that you don't need the getlookup function, it's a built-in feature of a dict, named get.
Also, "initialisation code" is suspicious. An import should not "do" anything. It should define functions and classes, but not actually provide any executable code. In the long run, executable code that is processed by an import can become a maintenance nightmare.
The most notable exception is a module-level Singleton object that gets created by default. Even then, be sure that the mystery object which makes a module work is clearly identified in the documentation.
A:
If your lookup dict is unchanging, the simplest way is to just make it a module scope variable. ie:
lookup = {
    'A' : 'AA',
    'B' : 'BB',
    ...
}
If you may need to make changes, and later re-initialise it, you can do this in an initialisation function:
def initLookups():
    global lookup
    lookup = {
        'A' : 'AA',
        'B' : 'BB',
        ...
    }
(Alternatively, lookup.update({'A':'AA', ...}) to change the dict in-place, affecting all callers with access to the old binding.)
However, if you've got these lookups in some standard format, it may be simpler to load it from a file and create the dictionary from that.
You can arrange your functions as you wish. The only rule about ordering is that the accessed variables must exist at the time the function is called - it's fine if the function has references to variables in the body that don't exist yet, so long as nothing actually tries to use that function. ie:
def foo():
    print greeting, "World" # Note that greeting is not yet defined when foo() is created

greeting = "Hello"

foo() # Prints "Hello World"
But:
def foo():
    print greeting, "World"

foo() # Gives an error - greeting not yet defined.
greeting = "Hello"
One further thing to note: your getlookup function is very inefficient. Using "if name in lookup.keys()" is actually getting a list of the keys from the dict, and then iterating over this list to find the item. This loses all the performance benefit the dict gives. Instead, "if name in lookup" would avoid this, or even better, use the fact that .get can be given a default to return if the key is not in the dictionary:
def getlookup(name):
    return lookup.get(name, "")
A:
I think that keeping the names in a flat text file, and loading them at runtime would be a good alternative. I try to stick to the lowest level of complexity possible with my data, starting with plain text and working up to a RDMS (I lifted this idea from The Pragmatic Programmer).
Dictionaries are very efficient in python. It's essentially what the whole language is built on. 300 items is well within the bounds of sane dict usage.
names.txt:
A = AAA
B = BBB
C = CCC
getname.py:
import sys

FILENAME = "names.txt"

def main(key):
    pairs = (line.split("=") for line in open(FILENAME))
    names = dict((x.strip(), y.strip()) for x,y in pairs)
    return names.get(key, "Not found")

if __name__ == "__main__":
    print main(sys.argv[-1])
If you really want to keep it all in one module for some reason, you could just stick a string at the top of the module. I think that a big swath of text is less distracting than a huge mess of dict initialization code (and easier to edit later):
import sys

LINES = """
A = AAA
B = BBB
C = CCC
D = DDD
E = EEE""".strip().splitlines()

PAIRS = (line.split("=") for line in LINES)
NAMES = dict((x.strip(), y.strip()) for x,y in PAIRS)

def main(key):
    return NAMES.get(key, "Not found")

if __name__ == "__main__":
    print main(sys.argv[-1])
Tags: function, module, python, scope, structure
Source: stackoverflow_0000925075_function_module_python_scope_structure.txt

Q:
Control an embedded into website flash player with Python?
I am trying to write a few simple Python scripts which will allow me to control one of the Internet radio stations I listen to with keybound Python scripts.
I am now able to connect and log into the website, and I am able to get out the song data (that is, all the data which is passed to the player).
I noticed that the player is controlled with JavaScript; let's assume that its address is http://www.sitesite.com/player.swf
If the player can be controlled with JavaScript, then I think there should be a way to control it with Python. If I am right, can someone please give me an example of how this can be done?
A:
No, you can't control the player with Python; Flash and JavaScript can talk to each other because of how the Flash player works when embedded in a web page. It sounds like you're circumventing the Flash player anyhow, so why do you need to control a player you're not using?
Tags: controls, embedded_resource, flash, javascript, python
Source: stackoverflow_0000933441_controls_embedded_resource_flash_javascript_python.txt

Q:
Is there a way to invoke a Python function with the wrong number of arguments without invoking a TypeError?
When you invoke a function with the wrong number of arguments, or with a keyword argument that isn't in its definition, you get a TypeError. I'd like a piece of code to take a callback and invoke it with variable arguments, based on what the callback supports. One way of doing it would be to, for a callback cb, use cb.__code__.co_argcount and cb.__code__.co_varnames, but I would rather abstract that into something like apply, but one that only applies the arguments which "fit".
For example:
def foo(x,y,z):
    pass

cleanvoke(foo, 1)           # should call foo(1, None, None)
cleanvoke(foo, y=2)         # should call foo(None, 2, None)
cleanvoke(foo, 1,2,3,4,5)   # should call foo(1, 2, 3)
# etc.
Is there anything like this already in Python, or is it something I should write from scratch?
A:
Rather than digging down into the details yourself, you can inspect the function's signature -- you probably want inspect.getargspec(cb).
Exactly how you want to use that info, and the args you have, to call the function "properly", is not completely clear to me. Assuming for simplicity that you only care about simple named args, and the values you'd like to pass are in dict d...
args = inspect.getargspec(cb)[0]
cb( **dict((a,d.get(a)) for a in args) )
Maybe you want something fancier, and can elaborate on exactly what?
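Putting that together, here is a minimal sketch of a cleanvoke along the lines of the question's examples (the name and the pad-with-None behaviour are assumptions taken from those examples):
import inspect

def cleanvoke(fn, *args, **kwargs):
    names = inspect.getargspec(fn)[0]        # positional arg names of fn
    supplied = list(args[:len(names)])       # drop any extra positionals
    for name in names[len(supplied):]:       # fill the rest from kwargs
        supplied.append(kwargs.get(name))    # missing ones become None
    return fn(*supplied)

def foo(x, y, z):
    print x, y, z

cleanvoke(foo, 1)             # foo(1, None, None)
cleanvoke(foo, y=2)           # foo(None, 2, None)
cleanvoke(foo, 1, 2, 3, 4, 5) # foo(1, 2, 3)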
A:
This maybe?
def fnVariableArgLength(*args, **kwargs):
"""
- args is a list of non keywords arguments
- kwargs is a dict of keywords arguments (keyword, arg) pairs
"""
print args, kwargs
fnVariableArgLength() # () {}
fnVariableArgLength(1, 2, 3) # (1, 2, 3) {}
fnVariableArgLength(foo='bar') # () {'foo': 'bar'}
fnVariableArgLength(1, 2, 3, foo='bar') # (1, 2, 3) {'foo': 'bar'}
Edit: Your use cases
def foo(*args, **kw):
x= kw.get('x',None if len(args) < 1 else args[0])
y= kw.get('y',None if len(args) < 2 else args[1])
z= kw.get('z',None if len(args) < 3 else args[2])
# the rest of foo
foo(1) # should call foo(1, None, None)
foo(y=2) # should call foo(None, 2, None)
foo(1,2,3,4,5) # should call foo(1, 2, 3)
|
Is there a way to invoke a Python function with the wrong number of arguments without invoking a TypeError?
|
When you invoke a function with the wrong number of arguments, or with a keyword argument that isn't in its definition, you get a TypeError. I'd like a piece of code to take a callback and invoke it with variable arguments, based on what the callback supports. One way of doing it would be to, for a callback cb, use cb.__code__.co_argcount and cb.__code__.co_varnames, but I would rather abstract that into something like apply, but that only applies the arguments which "fit".
For example:
def foo(x,y,z):
pass
cleanvoke(foo, 1) # should call foo(1, None, None)
cleanvoke(foo, y=2) # should call foo(None, 2, None)
cleanvoke(foo, 1,2,3,4,5) # should call foo(1, 2, 3)
# etc.
Is there anything like this already in Python, or is it something I should write from scratch?
|
[
"Rather than digging down into the details yourself, you can inspect the function's signature -- you probably want inspect.getargspec(cb).\nExactly how you want to use that info, and the args you have, to call the function \"properly\", is not completely clear to me. Assuming for simplicity that you only care about simple named args, and the values you'd like to pass are in dict d...\nargs = inspect.getargspec(cb)[0]\ncb( **dict((a,d.get(a)) for a in args) )\n\nMaybe you want something fancier, and can elaborate on exactly what?\n",
"This maybe?\ndef fnVariableArgLength(*args, **kwargs):\n \"\"\"\n - args is a list of non keywords arguments\n - kwargs is a dict of keywords arguments (keyword, arg) pairs\n \"\"\"\n print args, kwargs\n\n\nfnVariableArgLength() # () {}\nfnVariableArgLength(1, 2, 3) # (1, 2, 3) {}\nfnVariableArgLength(foo='bar') # () {'foo': 'bar'}\nfnVariableArgLength(1, 2, 3, foo='bar') # (1, 2, 3) {'foo': 'bar'}\n\n\nEdit Your use cases\ndef foo(*args,*kw):\n x= kw.get('x',None if len(args) < 1 else args[0])\n y= kw.get('y',None if len(args) < 2 else args[1])\n z= kw.get('z',None if len(args) < 3 else args[2])\n # the rest of foo\n\nfoo(1) # should call foo(1, None, None)\nfoo(y=2) # should call foo(None, 2, None)\nfoo(1,2,3,4,5) # should call foo(1, 2, 3)\n\n"
] |
[
7,
3
] |
[] |
[] |
[
"apply",
"invocation",
"python"
] |
stackoverflow_0000933484_apply_invocation_python.txt
|
Q:
What is the best way to fetch/render one-to-many relationships?
I have 2 models which look like that:
class Entry(models.Model):
user = models.ForeignKey(User)
dataname = models.TextField()
datadesc = models.TextField()
timestamp = models.DateTimeField(auto_now=True)
class EntryFile(models.Model):
entry = models.ForeignKey(Entry)
datafile = models.FileField(upload_to="uploads/%Y/%m/%d/%H-%M-%S")
I want to render all the entries with their related files for a specific user.
Now I am doing it that way in my view to get the values:
entries = Entry.objects.filter(user=request.user).order_by("-timestamp")
files = {}
for entry in entries:
entryfiles = EntryFile.objects.filter(entry=entry)
files[entry] = entryfiles
return render_to_response("index.html", {'user': request.user, 'entries': entries, 'files': files, 'message': message})
But I don't know how to work with these data in my template. This is what I do now, but it isn't working:
{% for entry in entries %}
<td>{{ entry.datadesc }}</td>
<td><table>
{{ files.entry }}
{% for file in files.entry %}
<td>{{ file.datafile.name|split:"/"|last }}</td>
<td>{{ file.datafile.size|filesizeformat }}</td>
<td><a href="{{ object.datafile.url }}">download</a></td>
<td><a href="{% url main.views.delete object.id %}">delete</a></td>
{% endfor %}
</table></td>
{% endfor %}
Can anyone tell me if I am doing it the right way in the view, and how to access these data in the template?
Thank you!
A:
Just cut your view code to this line:
entries = Entry.objects.filter(user=request.user).order_by("-timestamp")
And do this in the template:
{% for entry in entries %}
<td>{{ entry.datadesc }}</td>
<td><table>
{% for file in entry.entryfile_set.all %}
<td>{{ file.datafile.name|split:"/"|last }}</td>
<td>{{ file.datafile.size|filesizeformat }}</td>
<td><a href="{{ object.datafile.url }}">download</a></td>
<td><a href="{% url main.views.delete object.id %}">delete</a></td>
{% endfor %}
</table></td>
{% endfor %}
I am a big fan of using related_name in Models, however, so you could change this line:
entry = models.ForeignKey(Entry)
To this:
entry = models.ForeignKey(Entry, related_name='files')
And then you can access all the files for a particular entry by changing this:
{% for file in entry.entryfile_set.all %}
To the more readable/obvious:
{% for file in entry.files.all %}
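For example, with related_name='files' in place, a quick Django shell check might look like this (some_user is a placeholder):
entry = Entry.objects.filter(user=some_user).latest('timestamp')
for f in entry.files.all():              # reverse FK lookup via related_name
    print f.datafile.name, f.datafile.size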
|
What is the best way to fetch/render one-to-many relationships?
|
I have 2 models which look like that:
class Entry(models.Model):
user = models.ForeignKey(User)
dataname = models.TextField()
datadesc = models.TextField()
timestamp = models.DateTimeField(auto_now=True)
class EntryFile(models.Model):
entry = models.ForeignKey(Entry)
datafile = models.FileField(upload_to="uploads/%Y/%m/%d/%H-%M-%S")
I want to render all the entries with their related files for a specific user.
Now I am doing it that way in my view to get the values:
entries = Entry.objects.filter(user=request.user).order_by("-timestamp")
files = {}
for entry in entries:
entryfiles = EntryFile.objects.filter(entry=entry)
files[entry] = entryfiles
return render_to_response("index.html", {'user': request.user, 'entries': entries, 'files': files, 'message': message})
But I don't know how to work with these data in my template. This is what I do now, but it isn't working:
{% for entry in entries %}
<td>{{ entry.datadesc }}</td>
<td><table>
{{ files.entry }}
{% for file in files.entry %}
<td>{{ file.datafile.name|split:"/"|last }}</td>
<td>{{ file.datafile.size|filesizeformat }}</td>
<td><a href="{{ object.datafile.url }}">download</a></td>
<td><a href="{% url main.views.delete object.id %}">delete</a></td>
{% endfor %}
</table></td>
{% endfor %}
Can anyone tell me if I am doing it the right way in the view, and how to access these data in the template?
Thank you!
|
[
"Just cut your view code to this line:\nentries = Entry.objects.filter(user=request.user).order_by(\"-timestamp\")\n\nAnd do this in the template:\n{% for entry in entries %}\n <td>{{ entry.datadesc }}</td>\n <td><table>\n {% for file in entry.entryfile_set.all %}\n <td>{{ file.datafile.name|split:\"/\"|last }}</td>\n <td>{{ file.datafile.size|filesizeformat }}</td>\n <td><a href=\"{{ object.datafile.url }}\">download</a></td>\n <td><a href=\"{% url main.views.delete object.id %}\">delete</a></td>\n {% endfor %}\n </table></td>\n{% endfor %}\n\nI am a big fan of using related_name in Models, however, so you could change this line:\nentry = models.ForeignKey(Entry)\n\nTo this:\nentry = models.ForeignKey(Entry, related_name='files')\n\nAnd then you can access all the files for a particular entry by changing this:\n{% for file in files.entryfile_set.all %}\n\nTo the more readable/obvious:\n{% for file in entry.files.all %}\n\n"
] |
[
5
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000933612_django_python.txt
|
Q:
Python Regex Search And Replace
I'm not new to Python but a complete newbie with regular expressions (on my to do list)
I am trying to use python re to convert a string such as
[Hollywood Holt](http://www.hollywoodholt.com)
to
<a href="http://www.hollywoodholt.com">Hollywood Holt</a>
and a string like
*Hello world*
to
<strong>Hello world</strong>
A:
Why are you bothering to use a regex? Your content is Markdown, why not simply take the string and run it through the markdown module?
First, make sure Markdown is installed. It has a dependency on ElementTree, so easy_install the two of them as follows. If you're running Windows, you can use the Windows installer instead.
easy_install ElementTree
easy_install Markdown
To use the Markdown module and convert your string to html, simply do the following (triple quotes are used for literal strings):
import markdown
markdown_text = """[Hollywood Holt](http://www.hollywoodholt.com)"""
html = markdown.markdown(markdown_text)
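If you do still want the regex route, here is a minimal re.sub sketch covering just the two patterns in the question (it makes no attempt at the rest of Markdown):
import re

text = '[Hollywood Holt](http://www.hollywoodholt.com) and *Hello world*'

# [text](url) -> <a href="url">text</a>
text = re.sub(r'\[([^\]]+)\]\(([^)]+)\)', r'<a href="\2">\1</a>', text)

# *text* -> <strong>text</strong>
text = re.sub(r'\*([^*]+)\*', r'<strong>\1</strong>', text)

print text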
|
Python Regex Search And Replace
|
I'm not new to Python but a complete newbie with regular expressions (on my to do list)
I am trying to use python re to convert a string such as
[Hollywood Holt](http://www.hollywoodholt.com)
to
<a href="http://www.hollywoodholt.com">Hollywood Holt</a>
and a string like
*Hello world*
to
<strong>Hello world</strong>
|
[
"Why are you bothering to use a regex? Your content is Markdown, why not simply take the string and run it through the markdown module?\nFirst, make sure Markdown is installed. It has a dependancy on ElementTree so easy_install the two of them as follows. If you're running Windows, you can use the Windows installer instead.\neasy_install ElementTree\neasy_install Markdown\n\nTo use the Markdown module and convert your string to html simply do the following (tripple quotes are used for literal strings):\nimport markdown\nmarkdown_text = \"\"\"[Hollywood Holt](http://www.hollywoodholt.com)\"\"\"\nhtml = markdown.markdown(markdown_text)\n\n"
] |
[
12
] |
[] |
[] |
[
"markdown",
"python",
"regex",
"string"
] |
stackoverflow_0000933824_markdown_python_regex_string.txt
|
Q:
Django-like abstract database API for non-Django projects
I love the abstract database API that comes with Django, I was wondering if I could use this (or something similar) to model, access, and manage my (postgres) database for my non-Django Python projects.
A:
What you're looking for is an object-relational mapper (ORM). Django has its own, built-in.
To use Django's ORM by itself:
Using the Django ORM as a standalone component
Use Django ORM as standalone
Using settings without setting DJANGO_SETTINGS_MODULE
If you want to use something else:
What are some good Python ORM solutions?
A:
Popular stand-alone ORMs for Python:
SQLAlchemy
SQLObject
Storm
They all support MySQL and PostgreSQL (among others).
A:
I especially like SQLAlchemy with following tools:
Elixir (declarative syntax)
Migrate (schema migration)
They really remind me of ActiveRecord.
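For a flavour of the stand-alone route, here is a minimal SQLAlchemy sketch (the connection URL and table are made up, and the URL scheme differs slightly between SQLAlchemy versions):
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Message(Base):
    __tablename__ = 'messages'
    id = Column(Integer, primary_key=True)
    body = Column(String(50))

engine = create_engine('postgres://user:secret@localhost/mydb')
Base.metadata.create_all(engine)          # create the table if missing

Session = sessionmaker(bind=engine)
session = Session()
session.add(Message(body='hello'))
session.commit()
print session.query(Message).count()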
|
Django-like abstract database API for non-Django projects
|
I love the abstract database API that comes with Django, I was wondering if I could use this (or something similar) to model, access, and manage my (postgres) database for my non-Django Python projects.
|
[
"What you're looking for is an object-relational mapper (ORM). Django has its own, built-in.\nTo use Django's ORM by itself:\n\nUsing the Django ORM as a standalone component\nUse Django ORM as standalone\nUsing settings without setting DJANGO_SETTINGS_MODULE\n\nIf you want to use something else:\n\nWhat are some good Python ORM solutions?\n\n",
"Popular stand-alone ORMs for Python:\n\nSQLAlchemy\nSQLObject\nStorm\n\nThey all support MySQL and PostgreSQL (among others).\n",
"I especially like SQLAlchemy with following tools:\n\nElixir (declarative syntax)\nMigrate (schema migration)\n\nThey really remind me of ActiveRecord.\n"
] |
[
17,
6,
2
] |
[] |
[] |
[
"database",
"django",
"django_models",
"orm",
"python"
] |
stackoverflow_0000933232_database_django_django_models_orm_python.txt
|
Q:
Getting response from bluetooth device
I'm trying to write a simple module that will enable sending SMS. I am using Bluetooth to connect to the mobile, as in the example below:
file: bt-sendsms.py
import bluetooth
target = '00:32:AC:32:36:E8' # Mobile address
print "Trying to send SMS on %s" % target
BTSocket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
BTSocket.connect((target, 2)) # BT Address
BTSocket.send('ATZ\r')
BTSocket.send('AT+CMGF=1\r')
#BTSocket.send('AT+CSCA="+972547716507"\r') # This line changes the SMSC address - do not modify unless required
BTSocket.send('AT+CMGS="+972547877763"\r') # TO Phone Number
BTSocket.send('This is a test message - port 2.\n')
BTSocket.send(chr(26)) # CTRL+Z
print "SMS sent"
BTSocket.close()
print "Closed"
My problem is that I'm unable to verify or get an error code for the SMS sending or any of the socket operations.
Any pointer in the right direction will be appreciated.
A:
From the Python you look like you are opening any old RFCOMM channel and hoping it will magically take the AT commands and do the messaging.
I think (and I could be wrong) that you need to connect to a specific profile/service channel, and I think for SMS it is the Messaging Access Profile (MAP), which is not yet standardised, so finding a phone with it on, well, I won't say impossible but very, very unlikely. Otherwise, some phones will support AT commands for messaging, but this is outside the specs, e.g. I have it on authority that Sony-Ericsson phones will support it through the Dial-Up Networking profile (DUN).
So, first of all, does your mobile device support some out of spec AT commands for SMS and if so, on a certain profile or on an ad-hoc proprietary one? Next, you need to connect to that profile.
You can browse the supported services etc... using the following Python (checks all surrounding BT devices)...
import bluetooth
def whats_nearby():
name_by_addr = {}
nearby = bluetooth.discover_devices(flush_cache=True)
for bd_addr in nearby:
name = bluetooth.lookup_name( bd_addr, 5)
print bd_addr, name
name_by_addr[bd_addr] = name
return name_by_addr
def what_services( addr, name ):
print " %s - %s" % ( addr, name )
for services in bluetooth.find_service(address = addr):
print "\t Name: %s" % (services["name"])
print "\t Description: %s" % (services["description"])
print "\t Protocol: %s" % (services["protocol"])
print "\t Provider: %s" % (services["provider"])
print "\t Port: %s" % (services["port"])
print "\t service-classes %s" % (services["service-classes"])
print "\t profiles %s" % (services["profiles"])
print "\t Service id: %s" % (services["service-id"])
print ""
if __name__ == "__main__":
name_by_addr = whats_nearby()
for addr in name_by_addr.keys():
what_services(addr, name_by_addr[addr])
Once you find the correct service/profile, your next problem will be negotiating security (pin code for pairing), which I haven't worked out how to do yet!
See the www.bluetooth.org for all your Bluetooth needs!
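To actually check each step, read the phone's reply after every command and look for OK/ERROR. A rough sketch, reusing BTSocket from the question (the timeout and buffer size are guesses, and recv blocks until the phone answers):
import time

def send_at(sock, command, wait=1.0):
    sock.send(command + '\r')
    time.sleep(wait)                  # crude; a real version would poll
    reply = sock.recv(1024)           # read whatever the phone sent back
    if 'ERROR' in reply:
        raise RuntimeError('command failed: %r -> %r' % (command, reply))
    return reply

print send_at(BTSocket, 'AT+CMGF=1')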
|
Getting response from bluetooth device
|
I'm trying to write a simple module that will enable sending SMS. I am using Bluetooth to connect to the mobile, as in the example below:
file: bt-sendsms.py
import bluetooth
target = '00:32:AC:32:36:E8' # Mobile address
print "Trying to send SMS on %s" % target
BTSocket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
BTSocket.connect((target, 2)) # BT Address
BTSocket.send('ATZ\r')
BTSocket.send('AT+CMGF=1\r')
#BTSocket.send('AT+CSCA="+972547716507"\r') # This line changes the SMSC address - do not modify unless required
BTSocket.send('AT+CMGS="+972547877763"\r') # TO Phone Number
BTSocket.send('This is a test message - port 2.\n')
BTSocket.send(chr(26)) # CTRL+Z
print "SMS sent"
BTSocket.close()
print "Closed"
My problem is that I'm unable to verify or get an error code for the SMS sending or any of the socket operations.
Any pointer in the right direction will be appreciated.
|
[
"From the Python you look like you are opening any old RFCOMM channel and hoping it will magically take the AT commands and do the messaging.\nI think (and I could be wrong) that you need to connect to a specific profile/sevice channel and I think for SMS it is the the Messaging Access Profile (MAP), which is not yet standardised so finding a phone with it on, well I won't say impossible but very, very unlikely. Otherwise, some phones will support AT commands for messaging but this is outside the specs e.g. I have it on authority that Sony-Ericson phones will support it though the Dial-Up Networking profile (DUN). \nSo, first of all, does your mobile device support some out of spec AT commands for SMS and if so, on a certain profile or on an ad-hoc proprietary one? Next, you need to connect to that profile.\nYou can browse the supported services etc... using the following Python (checks all surrounding BT devices)...\nimport bluetooth\n\ndef whats_nearby():\n name_by_addr = {}\n nearby = bluetooth.discover_devices(flush_cache=True)\n for bd_addr in nearby:\n name = bluetooth.lookup_name( bd_addr, 5)\n print bd_addr, name\n name_by_addr[bd_addr] = name\n return name_by_addr\n\ndef what_services( addr, name ):\n print \" %s - %s\" % ( addr, name )\n for services in bluetooth.find_service(address = addr): \n print \"\\t Name: %s\" % (services[\"name\"]) \n print \"\\t Description: %s\" % (services[\"description\"]) \n print \"\\t Protocol: %s\" % (services[\"protocol\"]) \n print \"\\t Provider: %s\" % (services[\"provider\"]) \n print \"\\t Port: %s\" % (services[\"port\"]) \n print \"\\t service-classes %s\" % (services[\"service-classes\"])\n print \"\\t profiles %s\" % (services[\"profiles\"])\n print \"\\t Service id: %s\" % (services[\"service-id\"]) \n print \"\" \n\nif __name__ == \"__main__\":\n name_by_addr = whats_nearby()\n for addr in name_by_addr.keys():\n what_services(addr, name_by_addr[addr])\n\nOnce you find the correct service/profile, your next problem will be negotiating security (pin code for pairing), which I haven't worked out how to do yet!\nSee the www.bluetooth.org for all your Bluetooth needs!\n"
] |
[
3
] |
[] |
[] |
[
"bluetooth",
"mobile_phones",
"python",
"sms"
] |
stackoverflow_0000934460_bluetooth_mobile_phones_python_sms.txt
|
Q:
Packaging script source files in IronPython and IronRuby
Does anyone know how to add Python and Ruby libs as a resource in a DLL for deployment? I want to host a script engine in my app, but don't want to have to deploy the entire standard libraries of the respective languages as source files. Is there a simple way to do this so that a require or import statement will find the embedded resources?
A:
You could add custom import hook that looks for embedded resources when an import is executed. This is slightly complex and probably not worth the trouble.
A better technique would be to fetch all of the embedded modules at startup time, execute them with the ScriptEngine and put the modules you have created into the sys.modules dictionary associated with the engine. This automatically makes them available for import by Python code executed by the engine.
A:
You can create StreamContentProviders for example
In the ironrubymvc project under IronRubyMVC/Core/ you will find what you need.
AssemblyStreamContentProvider
Usage of the ContentProvider
A:
IronPython 2.0 has a sample compiler called PYC on Codeplex.com/ironpython which can create DLL's (and applications if you need them too).
IronPython 2.6 has a newer version of PYC under Tools\script.
Cheers,
Davy
|
Packaging script source files in IronPython and IronRuby
|
Does anyone know how to add Python and Ruby libs as a resource in a DLL for deployment? I want to host a script engine in my app, but don't want to have to deploy the entire standard libraries of the respective languages as source files. Is there a simple way to do this so that a require or import statement will find the embedded resources?
|
[
"You could add custom import hook that looks for embedded resources when an import is executed. This is slightly complex and probably not worth the trouble.\nA better technique would be to fetch all of the embedded modules at startup time, execute them with the ScriptEngine and put the modules you have created into the sys.modules dictionary associated with the engine. This automatically makes them available for import by Python code executed by the engine.\n",
"You can create StreamContentProviders for example\nIn the ironrubymvc project under IronRubyMVC/Core/ you will find what you need.\nAssemblyStreamContentProvider\nUsage of the ContentProvider\n",
"IronPython 2.0 has a sample compiler called PYC on Codeplex.com/ironpython which can create DLL's (and applications if you need them too).\nIronPython 2.6 has a newer version of PYC under Tools\\script.\nCheers,\nDavy\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"c#",
"ironpython",
"ironruby",
"python",
"ruby"
] |
stackoverflow_0000933822_c#_ironpython_ironruby_python_ruby.txt
|
Q:
Python3.0 - tokenize and untokenize
I am using something similar to the following simplified script to parse snippets of python from a larger file:
import io
import tokenize
src = 'foo="bar"'
src = bytes(src.encode())
src = io.BytesIO(src)
src = list(tokenize.tokenize(src.readline))
for tok in src:
print(tok)
src = tokenize.untokenize(src)
Although the code is not the same in python2.x, it uses the same idiom and works just fine. However, running the above snippet using python3.0, I get this output:
(57, 'utf-8', (0, 0), (0, 0), '')
(1, 'foo', (1, 0), (1, 3), 'foo="bar"')
(53, '=', (1, 3), (1, 4), 'foo="bar"')
(3, '"bar"', (1, 4), (1, 9), 'foo="bar"')
(0, '', (2, 0), (2, 0), '')
Traceback (most recent call last):
File "q.py", line 13, in <module>
src = tokenize.untokenize(src)
File "/usr/local/lib/python3.0/tokenize.py", line 236, in untokenize
out = ut.untokenize(iterable)
File "/usr/local/lib/python3.0/tokenize.py", line 165, in untokenize
self.add_whitespace(start)
File "/usr/local/lib/python3.0/tokenize.py", line 151, in add_whitespace
assert row <= self.prev_row
AssertionError
I have searched for references to this error and its causes, but have been unable to find any. What am I doing wrong and how can I correct it?
[edit]
After partisann's observation that appending a newline to the source causes the error to go away, I started messing with the list I was untokenizing. It seems that the EOF token causes an error if not immediately preceded by a newline so removing it gets rid of the error. The following script runs without error:
import io
import tokenize
src = 'foo="bar"'
src = bytes(src.encode())
src = io.BytesIO(src)
src = list(tokenize.tokenize(src.readline))
for tok in src:
print(tok)
src = tokenize.untokenize(src[:-1])
A:
src = 'foo="bar"\n'

You forgot the newline.
A:
If you limit the input to untokenize to the first 2 items of the tokens, it seems to work.
import io
import tokenize
src = 'foo="bar"'
src = bytes(src.encode())
src = io.BytesIO(src)
src = list(tokenize.tokenize(src.readline))
for tok in src:
print(tok)
src = [t[:2] for t in src]
src = tokenize.untokenize(src)
|
Python3.0 - tokenize and untokenize
|
I am using something similar to the following simplified script to parse snippets of python from a larger file:
import io
import tokenize
src = 'foo="bar"'
src = bytes(src.encode())
src = io.BytesIO(src)
src = list(tokenize.tokenize(src.readline))
for tok in src:
print(tok)
src = tokenize.untokenize(src)
Although the code is not the same in python2.x, it uses the same idiom and works just fine. However, running the above snippet using python3.0, I get this output:
(57, 'utf-8', (0, 0), (0, 0), '')
(1, 'foo', (1, 0), (1, 3), 'foo="bar"')
(53, '=', (1, 3), (1, 4), 'foo="bar"')
(3, '"bar"', (1, 4), (1, 9), 'foo="bar"')
(0, '', (2, 0), (2, 0), '')
Traceback (most recent call last):
File "q.py", line 13, in <module>
src = tokenize.untokenize(src)
File "/usr/local/lib/python3.0/tokenize.py", line 236, in untokenize
out = ut.untokenize(iterable)
File "/usr/local/lib/python3.0/tokenize.py", line 165, in untokenize
self.add_whitespace(start)
File "/usr/local/lib/python3.0/tokenize.py", line 151, in add_whitespace
assert row <= self.prev_row
AssertionError
I have searched for references to this error and its causes, but have been unable to find any. What am I doing wrong and how can I correct it?
[edit]
After partisann's observation that appending a newline to the source causes the error to go away, I started messing with the list I was untokenizing. It seems that the EOF token causes an error if not immediately preceded by a newline so removing it gets rid of the error. The following script runs without error:
import io
import tokenize
src = 'foo="bar"'
src = bytes(src.encode())
src = io.BytesIO(src)
src = list(tokenize.tokenize(src.readline))
for tok in src:
print(tok)
src = tokenize.untokenize(src[:-1])
|
[
"src = 'foo=\"bar\"\\n'You forgot newline.\n",
"If you limit the input to untokenize to the first 2 items of the tokens, it seems to work.\nimport io\nimport tokenize\n\nsrc = 'foo=\"bar\"'\nsrc = bytes(src.encode())\nsrc = io.BytesIO(src)\n\nsrc = list(tokenize.tokenize(src.readline))\n\nfor tok in src:\n print(tok)\n\nsrc = [t[:2] for t in src]\nsrc = tokenize.untokenize(src)\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"lexical_analysis",
"python",
"python_3.x",
"tokenize"
] |
stackoverflow_0000934661_lexical_analysis_python_python_3.x_tokenize.txt
|
Q:
python db connection
I have a script which makes a db connection and performs some select operations. According to the fetched data I am calling different functions which also perform db operations. How can I pass the db connection to the functions being called, as I do not want to make a new connection?
A:
Why pass the connection itself? Maybe build a class that handles all the DB operations and just pass this class' instance around, calling its methods to perform selects, inserts and all that DB-specific code?
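A minimal sketch of that idea, assuming PostgreSQL via psycopg2 (the DSN, the table and the handle() helper are placeholders):
import psycopg2

class Database(object):
    def __init__(self, dsn):
        self.conn = psycopg2.connect(dsn)   # one connection, reused everywhere

    def fetch_rows(self, sql, params=()):
        cur = self.conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()

    def execute(self, sql, params=()):
        cur = self.conn.cursor()
        cur.execute(sql, params)
        self.conn.commit()

db = Database('dbname=test user=me')
for row in db.fetch_rows('SELECT id, message FROM messages'):
    handle(row, db)                 # pass the same wrapper to helper functions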
|
python db connection
|
I have a script which makes a db connection and performs some select operations. According to the fetched data I am calling different functions which also perform db operations. How can I pass the db connection to the functions being called, as I do not want to make a new connection?
|
[
"Why to pass connection itself? Maybe build a class that handles all the DB-operation and just pass this class' instance around, calling it's methods to perform selects, inserts and all that DB-specific code? \n"
] |
[
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000934221_python.txt
|
Q:
Clipping FFT Matrix
Audio processing is pretty new for me, and I am currently using Python and NumPy to process wave files. After calculating the FFT matrix I am getting noisy power values for non-existent frequencies. I am interested in visualizing the data and accuracy is not a high priority. Is there a safe way to calculate the clipping value to remove these values, or should I use all FFT matrices for each sample set to come up with an average number?
regards
Edit:
from numpy import *
import wave
import pymedia.audio.sound as sound
import time, struct
from pylab import ion, plot, draw, show
fp = wave.open("500-200f.wav", "rb")
sample_rate = fp.getframerate()
total_num_samps = fp.getnframes()
fft_length = 2048.
num_fft = (total_num_samps / fft_length ) - 2
temp = zeros((num_fft,fft_length), float)
for i in range(num_fft):
tempb = fp.readframes(fft_length);
data = struct.unpack("%dH"%(fft_length), tempb)
temp[i,:] = array(data, short)
pts = fft_length/2+1
data = (abs(fft.rfft(temp, fft_length)) / (pts))[:pts]
x_axis = arange(pts)*sample_rate*.5/pts
spec_range = pts
plot(x_axis, data[0])
show()
Here is the plot in non-logarithmic scale, for a synthetic wave file containing a 500 Hz (fading out) + 200 Hz sine wave created using GoldWave.
A:
Simulated waveforms shouldn't show FFTs like your figure, so something is very wrong, and probably not with the FFT, but with the input waveform. The main problem in your plot is not the ripples, but the harmonics around 1000 Hz, and the subharmonic at 500 Hz. A simulated waveform shouldn't show any of this (for example, see my plot below).
First, you probably want to just try plotting out the raw waveform, and this will likely point to an obvious problem. Also, it seems odd to have a wave unpack to unsigned shorts, i.e. "H", and especially after this to not have a large zero-frequency component.
I was able to get a pretty close duplicate to your FFT by applying clipping to the waveform, as was suggested by both the subharmonic and higher harmonics (and Trevor). You could be introducing clipping either in the simulation or the unpacking. Either way, I bypassed this by creating the waveforms in numpy to start with.
Here's what the proper FFT should look like (i.e. basically perfect, except for the broadening of the peaks due to the windowing)
Here's one from a waveform that's been clipped (and is very similar to your FFT, from the subharmonic to the precise pattern of the three higher harmonics around 1000 Hz)
Here's the code I used to generate these
from numpy import *
from pylab import ion, plot, draw, show, xlabel, ylabel, figure
sample_rate = 20000.
times = arange(0, 10., 1./sample_rate)
wfm0 = sin(2*pi*200.*times)
wfm1 = sin(2*pi*500.*times) *(10.-times)/10.
wfm = wfm0+wfm1
# int test
#wfm *= 2**8
#wfm = wfm.astype(int16)
#wfm = wfm.astype(float)
# abs test
#wfm = abs(wfm)
# clip test
#wfm = clip(wfm, -1.2, 1.2)
fft_length = 5*2048.
total_num_samps = len(times)
num_fft = (total_num_samps / fft_length ) - 2
temp = zeros((num_fft,fft_length), float)
for i in range(num_fft):
temp[i,:] = wfm[i*fft_length:(i+1)*fft_length]
pts = fft_length/2+1
data = (abs(fft.rfft(temp, fft_length)) / (pts))[:pts]
x_axis = arange(pts)*sample_rate*.5/pts
spec_range = pts
plot(x_axis, data[2], linewidth=3)
xlabel("freq (Hz)")
ylabel('abs(FFT)')
show()
A:
FFTs, because they are windowed and sampled, cause aliasing and sampling in the frequency domain as well. Filtering in the time domain is just multiplication in the frequency domain, so you may want to just apply a filter, which is simply multiplying each frequency by a value from the filter function you are using: for example, multiply by 1 in the passband and by zero everywhere else. The unexpected values are probably caused by aliasing, where higher frequencies are being folded down to the ones you are seeing. The original signal needs to be band limited to half your sampling rate or you will get aliasing. Of more concern is aliasing that distorts the area of interest, because for this band of frequencies you want to know that the frequency is the expected one.
The other thing to keep in mind is that when you grab a piece of data from a wave file you are mathematically multiplying it by a square wave. This causes a sin(x)/x to be convolved with the frequency response; to minimize this you can multiply the original windowed signal with something like a Hanning window.
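For instance, a self-contained sketch of windowing one frame before the FFT (the tone and sample rate are made up):
import numpy as np

fft_length = 2048
t = np.arange(fft_length) / 8000.0       # 8 kHz sample rate
frame = np.sin(2 * np.pi * 440.0 * t)    # one frame of a 440 Hz tone

window = np.hanning(fft_length)          # taper to reduce spectral leakage
spectrum = np.abs(np.fft.rfft(frame * window))
print spectrum.argmax()                  # bin of the strongest component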
A:
It's worth mentioning for a 1D FFT that the first element (index [0]) contains the DC (zero-frequency) term, the elements [1:N/2] contain the positive frequencies and the elements [N/2+1:N-1] contain the negative frequencies. Since you didn't provide a code sample or additional information about the output of your FFT, I can't rule out the possibility that the "noisy power values at non-existent frequencies" aren't just the negative frequencies of your spectrum.
EDIT: Here is an example of a radix-2 FFT implemented in pure Python with a simple test routine that finds the FFT of a rectangular pulse, [1.,1.,1.,1.,0.,0.,0.,0.]. You can run the example on codepad and see that the FFT of that sequence is
[0j, Negative frequencies
(1+0.414213562373j), ^
0j, |
(1+2.41421356237j), |
(4+0j), <= DC term
(1-2.41421356237j), |
0j, v
(1-0.414213562373j)] Positive frequencies
Note that the code prints out the Fourier coefficients in order of ascending frequency, i.e. from the highest negative frequency up to DC, and then up to the highest positive frequency.
A:
I don't know enough from your question to actually answer anything specific.
But here are a couple of things to try from my own experience writing FFTs:
Make sure you are following Nyquist rule
If you are viewing the linear output of the FFT... you will have trouble seeing your own signal and think everything is broken. Make sure you are looking at the dB of your FFT magnitude. (i.e. "plot(10*log10(abs(fft(x))))" )
Create a unit test for your FFT() function by feeding it generated data like a pure tone. Then feed the same generated data to Matlab's FFT(). Do an absolute value diff between the two output data series and make sure the max absolute value difference is something like 10^-6 (i.e. the only difference is caused by small floating point errors)
Make sure you are windowing your data
If all of those three things work, then your fft is fine. And your input data is probably the issue.
Check the input data to see if there is clipping http://www.users.globalnet.co.uk/~bunce/clip.gif
Time domain clipping shows up as mirror images of the signal in the frequency domain at specific regular intervals with less amplitude.
|
Clipping FFT Matrix
|
Audio processing is pretty new for me, and I am currently using Python and NumPy to process wave files. After calculating the FFT matrix I am getting noisy power values for non-existent frequencies. I am interested in visualizing the data and accuracy is not a high priority. Is there a safe way to calculate the clipping value to remove these values, or should I use all FFT matrices for each sample set to come up with an average number?
regards
Edit:
from numpy import *
import wave
import pymedia.audio.sound as sound
import time, struct
from pylab import ion, plot, draw, show
fp = wave.open("500-200f.wav", "rb")
sample_rate = fp.getframerate()
total_num_samps = fp.getnframes()
fft_length = 2048.
num_fft = (total_num_samps / fft_length ) - 2
temp = zeros((num_fft,fft_length), float)
for i in range(num_fft):
tempb = fp.readframes(fft_length);
data = struct.unpack("%dH"%(fft_length), tempb)
temp[i,:] = array(data, short)
pts = fft_length/2+1
data = (abs(fft.rfft(temp, fft_length)) / (pts))[:pts]
x_axis = arange(pts)*sample_rate*.5/pts
spec_range = pts
plot(x_axis, data[0])
show()
Here is the plot in non-logarithmic scale, for a synthetic wave file containing a 500 Hz (fading out) + 200 Hz sine wave created using GoldWave.
|
[
"Simulated waveforms shouldn't show FFTs like your figure, so something is very wrong, and probably not with the FFT, but with the input waveform. The main problem in your plot is not the ripples, but the harmonics around 1000 Hz, and the subharmonic at 500 Hz. A simulated waveform shouldn't show any of this (for example, see my plot below).\nFirst, you probably want to just try plotting out the raw waveform, and this will likely point to an obvious problem. Also, it seems odd to have a wave unpack to unsigned shorts, i.e. \"H\", and especially after this to not have a large zero-frequency component.\nI was able to get a pretty close duplicate to your FFT by applying clipping to the waveform, as was suggested by both the subharmonic and higher harmonics (and Trevor). You could be introducing clipping either in the simulation or the unpacking. Either way, I bypassed this by creating the waveforms in numpy to start with.\nHere's what the proper FFT should look like (i.e. basically perfect, except for the broadening of the peaks due to the windowing)\n\n\nHere's one from a waveform that's been clipped (and is very similar to your FFT, from the subharmonic to the precise pattern of the three higher harmonics around 1000 Hz)\n\n\nHere's the code I used to generate these\nfrom numpy import *\nfrom pylab import ion, plot, draw, show, xlabel, ylabel, figure\n\nsample_rate = 20000.\ntimes = arange(0, 10., 1./sample_rate)\nwfm0 = sin(2*pi*200.*times)\nwfm1 = sin(2*pi*500.*times) *(10.-times)/10.\nwfm = wfm0+wfm1\n# int test\n#wfm *= 2**8\n#wfm = wfm.astype(int16)\n#wfm = wfm.astype(float)\n# abs test\n#wfm = abs(wfm)\n# clip test\n#wfm = clip(wfm, -1.2, 1.2)\n\nfft_length = 5*2048.\ntotal_num_samps = len(times)\nnum_fft = (total_num_samps / fft_length ) - 2\ntemp = zeros((num_fft,fft_length), float)\n\nfor i in range(num_fft):\n temp[i,:] = wfm[i*fft_length:(i+1)*fft_length] \npts = fft_length/2+1\ndata = (abs(fft.rfft(temp, fft_length)) / (pts))[:pts]\n\nx_axis = arange(pts)*sample_rate*.5/pts\nspec_range = pts\nplot(x_axis, data[2], linewidth=3)\nxlabel(\"freq (Hz)\")\nylabel('abs(FFT)')\nshow()\n\n",
"FFT's because they are windowed and sampled cause aliasing and sampling in the frequency domain as well. Filtering in the time domain is just multiplication in the frequency domain so you may want to just apply a filter which is just multiplying each frequency by a value for the function for the filter you are using. For example multiply by 1 in the passband and by zero every were else. The unexpected values are probably caused by aliasing where higher frequencies are being folded down to the ones you are seeing. The original signal needs to be band limited to half your sampling rate or you will get aliasing. Of more concern is aliasing that is distorting the area of interest because for this band of frequencies you want to know that the frequency is from the expected one. \nThe other thing to keep in mind is that when you grab a piece of data from a wave file you are mathmatically multiplying it by a square wave. This causes a sinx/x to be convolved with the frequency response to minimize this you can multiply the original windowed signal with something like a Hanning window. \n",
"It's worth mentioning for a 1D FFT that the first element (index [0]) contains the DC (zero-frequency) term, the elements [1:N/2] contain the positive frequencies and the elements [N/2+1:N-1] contain the negative frequencies. Since you didn't provide a code sample or additional information about the output of your FFT, I can't rule out the possibility that the \"noisy power values at non-existent frequencies\" aren't just the negative frequencies of your spectrum.\n\nEDIT: Here is an example of a radix-2 FFT implemented in pure Python with a simple test routine that finds the FFT of a rectangular pulse, [1.,1.,1.,1.,0.,0.,0.,0.]. You can run the example on codepad and see that the FFT of that sequence is\n[0j, Negative frequencies\n(1+0.414213562373j), ^\n0j, |\n(1+2.41421356237j), |\n(4+0j), <= DC term\n(1-2.41421356237j), |\n0j, v\n(1-0.414213562373j)] Positive frequencies\n\nNote that the code prints out the Fourier coefficients in order of ascending frequency, i.e. from the highest negative frequency up to DC, and then up to the highest positive frequency.\n",
"I don't know enough from your question to actually answer anything specific.\nBut here are a couple of things to try from my own experience writing FFTs:\n\nMake sure you are following Nyquist rule\nIf you are viewing the linear output of the FFT... you will have trouble seeing your own signal and think everything is broken. Make sure you are looking at the dB of your FFT magnitude. (i.e. \"plot(10*log10(abs(fft(x))))\" )\nCreate a unitTest for your FFT() function by feeding generated data like a pure tone. Then feed the same generated data to Matlab's FFT(). Do a absolute value diff between the two output data series and make sure the max absolute value difference is something like 10^-6 (i.e. the only difference is caused by small floating point errors)\nMake sure you are windowing your data \n\nIf all of those three things work, then your fft is fine. And your input data is probably the issue.\n\nCheck the input data to see if there is clipping http://www.users.globalnet.co.uk/~bunce/clip.gif\n\nTime doamin clipping shows up as mirror images of the signal in the frequency domain at specific regular intervals with less amplitude.\n"
] |
[
3,
2,
1,
1
] |
[] |
[] |
[
"audio",
"fft",
"python",
"signal_processing"
] |
stackoverflow_0000933088_audio_fft_python_signal_processing.txt
|
Q:
WxPython differences between Windows and Linux
The tutorials I've found on WxPython all use examples from Linux, but there seem to be differences in some details.
For example, in Windows a Panel behind the widgets is mandatory to show the background properly. Additionally, some examples that look fine in the tutorials don't work in my computer.
So, do you know what important differences are there, or maybe a good tutorial that is focused on Windows?
EDIT: I just remembered this: Does anybody know why when subclassing wx.App an OnInit() method is required, rather than the more logical __init__()?
A:
I've noticed odd peculiarities in a small GUI I wrote a while back, but it's been a long time since I tried it, so the specifics are a rather distant memory. Do you have some specific examples which fail? Maybe we can improve them and fix the bugs?
Have you tried the official wxPython tutorials? ...or were you after something more specific?
r.e. your edit - You should use OnInit() because you're subclassing wx.App (i.e. it's a requirement for wxWidgets rather than Python) and the wxPython implementation is wherever possible, just a wrapper for wxWidgets.
[Edit] Zetcode has a fairly lengthy tutorial on wxPython. I've not looked through it all myself, but it might be of some help?
The wxWidgets::wxApp::OnInit() documentation is fairly clear:
This must be provided by the application, and will usually create the application's main window, optionally calling wxApp::SetTopWindow. You may use OnExit to clean up anything initialized here, provided that the function returns true.
If wxWidgets didn't provide a common interface then you'd have to do different things in C++ (using a constructor) compared to Python's __init__(self,...). Using a language-independent on-initialisation hook allows wxWidgets ports to other languages to look more alike, which should be a good thing, right? :-)
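As an illustration, a minimal wx.App subclass looks something like this (the frame title is arbitrary):
import wx

class MyApp(wx.App):
    def OnInit(self):
        # called by wxWidgets itself; create the main window here
        frame = wx.Frame(None, -1, "Hello")
        frame.Show(True)
        self.SetTopWindow(frame)
        return True                  # returning False aborts startup

app = MyApp(0)
app.MainLoop()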
A:
EDIT: I just remembered this: Does anybody know why when subclassing wx.App an OnInit() method is required, rather than the more logical __init__()?
I use OnInit() for symmetry: there's also an OnExit() method.
Edit: I may be wrong, but I don't think using OnInit() is required.
A:
I find a number of small differences, but don't remember all of them. Here are two:
1) The layout can be slightly different, for example, causing things to not completely fit in the window in one OS when the do in the other. I haven't investigated the reasons for this, but it happens most often when I use positions rather than sizers to arrange things.
2) I have to explicitly call Refresh more in Windows. For example, if you place one panel over another, you won't see the top panel in Windows until you call Refresh.
In general, I write apps in Linux and run them in Windows, and things work similarly enough that this is a reasonable approach, but it's rare for me that something runs perfectly straight out of the gate after an OS switch.
|
WxPython differences between Windows and Linux
|
The tutorials I've found on WxPython all use examples from Linux, but there seem to be differences in some details.
For example, in Windows a Panel behind the widgets is mandatory to show the background properly. Additionally, some examples that look fine in the tutorials don't work in my computer.
So, do you know what important differences are there, or maybe a good tutorial that is focused on Windows?
EDIT: I just remembered this: Does anybody know why when subclassing wx.App an OnInit() method is required, rather than the more logical __init__()?
|
[
"I've noticed odd peculiarities in a small GUI I wrote a while back, but it's been a long time since I tried to the specifics are a rather distant memory. Do you have some specific examples which fail? Maybe we can improve them and fix the bugs?\nHave you tried the official wxPython tutorials? ...or were you after something more specific?\nr.e. your edit - You should use OnInit() because you're subclassing wx.App (i.e. it's a requirement for wxWidgets rather than Python) and the wxPython implementation is wherever possible, just a wrapper for wxWidgets.\n[Edit] Zetcode has a fairly lengthy tutorial on wxPython. I've not looked through it all myself, but it might be of some help?\nThe wxWidgets::wxApp::OnInit() documentation is fairly clear:\n\nThis must be provided by the application, and will usually create the application's main window, optionally calling wxApp::SetTopWindow. You may use OnExit to clean up anything initialized here, provided that the function returns true.\n\nIf wxWidgets didn't provide a common interface then you'd have to do different things in C++ (using a constructor) compared to Python's __init__(self,...). Using a language-independent on-initialisation allows wxWidgets ports to other languages look more alike which should be a good thing right? :-)\n",
"\nEDIT: I just remembered this: Does anybody know why when subclassing wx.App an OnInit() method is required, rather than the more logical __init__()?\n\nI use OnInit() for symmetry: there's also an OnExit() method.\nEdit: I may be wrong, but I don't think using OnInit() is required.\n",
"I find a number of small differences, but don't remember all of them. Here are two:\n1) The layout can be slightly different, for example, causing things to not completely fit in the window in one OS when the do in the other. I haven't investigated the reasons for this, but it happens most often when I use positions rather than sizers to arrange things.\n2) I have to explicitly call Refresh more in Windows. For example, if you place one panel over another, you won't see it the top panel in Windows until you call Refresh.\nI general, I write apps in Linux and run them in Windows, and things work similarly enough so this is a reasonable approach, but it's rare for me when something runs perfectly straight out of the gate after an OS switch.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"linux",
"python",
"user_interface",
"windows",
"wxpython"
] |
stackoverflow_0000916987_linux_python_user_interface_windows_wxpython.txt
|
Q:
Insert/Delete performance
DB Table:
id int(6)
message char(5)
I have to add a record (message) to the DB table. In the case of a duplicate message (the message already exists with a different id) I want to delete (or somehow inactivate) both of the messages and get their IDs in reply.
Is it possible to perform this with only one query? Any performance tips?
P.S.
I use PostgreSQL.
The main problem I am worried about is the need to use locks when performing this with two or more queries...
Many thanks!
A:
If you really want to worry about locking do this.
UPDATE table SET status='INACTIVE' WHERE id = 'key';
If this succeeds, there was a duplicate.
INSERT the additional inactive record. Do whatever else you want with your duplicates.
If this fails, there was no duplicate.
INSERT the new active record.
Commit.
This seizes an exclusive lock right away. The alternatives aren't quite as nice.
Start with an INSERT and check for duplicates doesn't seize a lock until you start updating. It's not clear if this is a problem or not.
Start with a SELECT would need to add a LOCK TABLE to assure that the select held the row found so it could be updated. If no row is found, the insert will work fine.
If you have multiple concurrent writers and two writers could attempt access at the same time, you may not be able to tolerate row-level locking.
Consider this.
Process A does a LOCK ROW and a SELECT but finds no row.
Process B does a LOCK ROW and a SELECT but finds no row.
Process A does an INSERT and a COMMIT.
Process B does an INSERT and a COMMIT. You now have duplicate active records.
Multiple concurrent insert/update transactions will only work with table-level locking. Yes, it's a potential slow-down. Three rules: (1) Keep your transactions as short as possible, (2) release the locks as quickly as possible, (3) handle deadlocks by retrying.
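A rough Python sketch of that update-first approach with psycopg2 (the table name and the status column are assumptions, and RETURNING needs PostgreSQL 8.2+):
def add_message(conn, msg_id, message):
    cur = conn.cursor()
    # deactivating an existing copy first also seizes the row lock
    cur.execute("UPDATE msgs SET status='INACTIVE' "
                "WHERE message=%s AND id<>%s RETURNING id",
                (message, msg_id))
    dup = cur.fetchone()
    status = 'INACTIVE' if dup else 'ACTIVE'
    cur.execute("INSERT INTO msgs (id, message, status) VALUES (%s, %s, %s)",
                (msg_id, message, status))
    conn.commit()
    return dup                      # None means there was no duplicate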
A:
You could write a procedure with both of those commands in it, but it may make more sense to use an insert trigger to check for duplicates (or a nightly job, if it's not time-sensitive).
A:
It is a little difficult to understand your exact requirement. Let me rephrase it two ways:
You want both the entries with same messages in the table (with different IDs), and want to know the IDs for some further processing (marking them as inactive, etc.). For this, You could write a procedure with the separate queries. I don't think you can achieve this with one query.
You do not want either of the entries in the table (I got this from 'I want to delete'). For this, you only have to check if the message already exists and then delete the row if it does, else insert it. I don't think this can be achieved with one query either.
If performance is a constraint during insert, you could insert without any checks and then periodically, sanitize the database.
|
Insert/Delete performance
|
DB Table:
id int(6)
message char(5)
I have to add a record (message) to the DB table. In the case of a duplicate message (the message already exists with a different id) I want to delete (or somehow inactivate) both of the messages and get their IDs in reply.
Is it possible to perform this with only one query? Any performance tips?
P.S.
I use PostgreSQL.
The main problem I am worried about is the need to use locks when performing this with two or more queries...
Many thanks!
|
[
"If you really want to worry about locking do this.\n\nUPDATE table SET status='INACTIVE' WHERE id = 'key';\nIf this succeeds, there was a duplicate.\n\nINSERT the additional inactive record. Do whatever else you want with your duplicates.\n\nIf this fails, there was no duplicate.\n\nINSERT the new active record. \n\nCommit.\n\nThis seizes an exclusive lock right away. The alternatives aren't quite as nice.\n\nStart with an INSERT and check for duplicates doesn't seize a lock until you start updating. It's not clear if this is a problem or not.\nStart with a SELECT would need to add a LOCK TABLE to assure that the select held the row found so it could be updated. If no row is found, the insert will work fine.\n\nIf you have multiple concurrent writers and two writers could attempt access at the same time, you may not be able to tolerate row-level locking.\nConsider this.\n\nProcess A does a LOCK ROW and a SELECT but finds no row.\nProcess B does a LOCK ROW and a SELECT but finds no row.\nProcess A does an INSERT and a COMMIT.\nProcess B does an INSERT and a COMMIT. You now have duplicate active records.\n\nMultiple concurrent insert/update transactions will only work with table-level locking. Yes, it's a potential slow-down. Three rules: (1) Keep your transactions as short as possible, (2) release the locks as quickly as possible, (3) handle deadlocks by retrying.\n",
"You could write a procedure with both of those commands in it, but it may make more sense to use an insert trigger to check for duplicates (or a nightly job, if it's not time-sensitive).\n",
"It is a little difficult to understand your exact requirement. Let me rephrase it two ways:\n\nYou want both the entries with same messages in the table (with different IDs), and want to know the IDs for some further processing (marking them as inactive, etc.). For this, You could write a procedure with the separate queries. I don't think you can achieve this with one query.\nYou do not want either of the entries in the table (i got this from 'i want to delete'). For this, you only have to check if the message already exists and then delete the row if it does, else insert it. I don't think this too can be achieved with one query.\n\nIf performance is a constraint during insert, you could insert without any checks and then periodically, sanitize the database.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"database",
"database_design",
"performance",
"python"
] |
stackoverflow_0000934602_database_database_design_performance_python.txt
|
Q:
Difference between "__method__" and "method"
What is the difference between __method__, method and _method__?
Is there any difference, or did people for some random reason decide that __doc__ should be written like that instead of doc? What makes one method more special than another?
A:
__method: private method.
__method__: special Python method. They are named like this to prevent name collisions. Check this page for a list of these special methods.
_method: This is the recommended naming convention for protected methods in the Python style guide.
From the style guide:
_single_leading_underscore: weak "internal use" indicator. E.g. from M
import * does not import objects whose name starts with an underscore.
single_trailing_underscore_: used by convention to avoid conflicts with
Python keyword, e.g.
Tkinter.Toplevel(master, class_='ClassName')
__double_leading_underscore: when naming a class attribute, invokes name
mangling (inside class FooBar, __boo becomes _FooBar__boo; see below).
__double_leading_and_trailing_underscore__: "magic" objects or
attributes that live in user-controlled namespaces. E.g. __init__,
__import__ or __file__. Never invent such names; only use them
as documented.
A:
method is just a normal method
_method should not be called unless you know what you are doing, which normally means that you have written the method yourself.
__method: the 2 underscores trigger name mangling (which is there to prevent name clashes). Attributes or methods like this are accessible as instance._ClassName__method. Although a lot of people call this "private" it is not. You should never use this to prevent someone from accessing this method; use _method instead.
__method__ is used for special methods which modify the behavior of the instance. Do not name your own methods like this.
A:
These are all conventions, so they are not enforced in anyway. Still, you can normally expect:
__somename__
Something defined in the python language specification itself. Don't use this in your own naming.
_somename
This is normally supposed to be called via some different mechanism rather than directly. Similar to declaring something private in most other languages, but not enforced in any way.
__somename
This is really not supposed to be called directly, and is mangled internally to stop you doing so accidently. If you really need to call it for some reason, check the documentation to find out how.
Any of the above can apply equally to function, variable or class names.
A:
These methods were named as such to reduce the possibility of naming collisions.
A:
Methods prefaced and prefixed with the double underscore are generally so marked to indicate that they are part of the Python language specification.
A:
Some methods with a double underscore prefix and suffix are special. For example, __init__ is called whenever an instance of that class is created, and __str__ is called when the object is to be printed. Basically, they can be called in special ways. You can use them like any other method, or you can invoke them through the special way associated to them.
I don't know about double-underscore global functions (not belonging to any class), but I think there aren't any.
A:
The pattern of __name__ indicate "magic" methods. These are called by various functions like
str(x) -> x.__str__()
repr(x) -> x.__repr__()
x[0] -> x.__getitem__(0)
etc
A single underscore prefix is to indicate a private attribute, and is only followed through convention.
a double underscore prefix initiates name-mangling, where an attribute named __attr inside class Class is rewritten to _Class__attr when the class body is compiled.
The pattern you have of _method__ isn't really used for anything.
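A quick interpreter sketch of both effects (the class and attribute names are arbitrary):
class Foo(object):
    def __init__(self):            # magic: runs when Foo() is created
        self.__secret = 42         # mangled to _Foo__secret
    def __str__(self):             # magic: used by str() and print
        return "Foo(%d)" % self.__secret

f = Foo()
print f                            # -> Foo(42)
print f._Foo__secret               # the mangled name is still reachable
# print f.__secret                 # would raise AttributeError here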
|
Difference between "__method__" and "method"
|
What is the difference between __method__, method and _method__?
Is there any difference, or did people for some random reason decide that __doc__ should be written like that instead of doc? What makes one method more special than another?
|
[
"\n__method: private method.\n__method__: special Python method. They are named like this to prevent name collisions. Check this page for a list of these special methods.\n_method: This is the recommended naming convention for protected methods in the Python style guide.\n\nFrom the style guide:\n\n\n_single_leading_underscore: weak \"internal use\" indicator. E.g. from M\n import * does not import objects whose name starts with an underscore.\nsingle_trailing_underscore_: used by convention to avoid conflicts with\n Python keyword, e.g.\nTkinter.Toplevel(master, class_='ClassName')\n\n__double_leading_underscore: when naming a class attribute, invokes name\n mangling (inside class FooBar, __boo becomes _FooBar__boo; see below).\n__double_leading_and_trailing_underscore__: \"magic\" objects or\n attributes that live in user-controlled namespaces. E.g. __init__,\n __import__ or __file__. Never invent such names; only use them\n as documented.\n\n\n",
"\nmethod is just a normal method\n_method should not be called unless you know what you are doing, which normally means that you have written the method yourself.\n__method the 2 underscores are used to prevent name mangeling. Attributes or methods like this are accessible over instance._ClassName__method. Although a lot of people call this \"private\" it is not. You should never use this to prevent someone from accessing this method, use _method instead.\n__method__ is used for special methods which modify the behavior of the instance. Do not name your own methods like this.\n\n",
"These are all conventions, so they are not enforced in anyway. Still, you can normally expect:\n__somename__\n\nSomething defined in the python language specification itself. Don't use this in your own naming.\n_somename\n\nThis is normally supposed to be called via some different mechanism rather than directly. Similar to declaring something private in most other languages, but not enforced in any way.\n__somename\n\nThis is really not supposed to be called directly, and is mangled internally to stop you doing so accidently. If you really need to call it for some reason, check the documentation to find out how.\nAny of the above can apply equally to function, variable or class names.\n",
"These methods were named as such to reduce the possibility of naming collisions.\n",
"Methods prefaced and prefixed with the double underscore are generally so marked to indicate that they are part of the Python language specification.\n",
"Some methods with a double underscore prefix and suffix are special. For example, __init__ is called whenever an instance of that class is created, and __str__ is called when the object is to be printed. Basically, they can be called in special ways. You can use them like any other method, or you can invoke them through the special way associated to them.\nI don't know about double-underscore global functions (not belonging to any class), but I think there aren't any.\n",
"The pattern of __name__ indicate \"magic\" methods. These are called by various functions like\nstr(x) -> x.__str__()\nrepr(x) -> x.__repr__()\nx[0] -> x.__getitem__(0)\netc\n\nA single underscore prefix is to indicate a private attribute, and is only followed through convention.\na double underscore prefix initiates name-mangling, where the attribute named __attr is changed to __Class_attr upon instantiation.\nThe pattern you have of _method__ isn't really used for anything.\n"
] |
[
74,
23,
4,
1,
0,
0,
0
] |
[] |
[] |
[
"methods",
"python"
] |
stackoverflow_0000935378_methods_python.txt
|
Q:
Implementing NSText delegate methods in PyObjc and Cocoa
In the project that I'm building, I'd like to have a method called when I paste some text into a specific text field. I can't seem to get this to work, but here's what I've tried
I implemented a custom class (based on NSObject) to be a delegate for my text field, then gave it the method textDidChange:
class textFieldDelegate(NSObject):
def textDidChange_(self, notification):
NSLog("textdidchange")
I then instantiated an object of this class in Interface Builder, and set it to be the delegate of the NSTextField. This, however, doesn't seem to do anything. However, when I build the example code from http://www.programmish.com/?p=30, everything seems to work perfectly fine. How do I implement this delegate code so that it actually works?
A:
The reason this isn't working for you is that textDidChange_ isn't a delegate method. It's a method on the NSTextField that posts the notification of the change. If you have a peek at the docs for textDidChange, you'll see that it mentions the actual name of the delegate method:
This method causes the receiver’s delegate to receive a controlTextDidChange: message. See the NSControl class specification for more information on the text delegate method.
The delegate method is actually called controlTextDidChange_ and is declared on the NSTextField superclass, NSControl.
Change your delegate method to:
def controlTextDidChange_(self, notification):
NSLog("textdidchange")
and it should work for you.
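For reference, a minimal sketch of the complete delegate class (assuming the usual PyObjC Foundation imports are available):
from Foundation import NSObject, NSLog

class textFieldDelegate(NSObject):
    def controlTextDidChange_(self, notification):
        NSLog("textdidchange")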
|
Implementing NSText delegate methods in PyObjc and Cocoa
|
In the project that I'm building, I'd like to have a method called when I paste some text into a specific text field. I can't seem to get this to work, but here's what I've tried
I implemented a custom class (based on NSObject) to be a delegate for my text field, then gave it the method textDidChange:
class textFieldDelegate(NSObject):
def textDidChange_(self, notification):
NSLog("textdidchange")
I then instantiated an object of this class in Interface Builder, and set it to be the delegate of the NSTextField. This, however, doesn't seem to do anything. However, when I build the example code from http://www.programmish.com/?p=30, everything seems to work perfectly fine. How do I implement this delegate code so that it actually works?
|
[
"The reason this isn't working for you is that textDidChange_ isn't a delegate method. It's a method on the NSTextField that posts the notification of the change. If you have peek at the docs for textDidChange, you'll see that it mentions the actual name of the delegate method:\n\nThis method causes the receiver’s delegate to receive a controlTextDidChange: message. See the NSControl class specification for more information on the text delegate method.\n\nThe delegate method is actually called controlTextDidChange_ and is declared on the NSTextField superclass, NSControl.\nChange your delegate method to:\ndef controlTextDidChange_(self, notification):\n NSLog(\"textdidchange\")\n\nand it should work for you.\n"
] |
[
3
] |
[] |
[] |
[
"cocoa",
"pyobjc",
"python"
] |
stackoverflow_0000934628_cocoa_pyobjc_python.txt
|
Q:
Java equivalent of function mapping in Python
In Python, if I have a few functions that I would like to call based on an input, I can do this:
lookup = {'function1':function1, 'function2':function2, 'function3':function3}
lookup[input]()
That is I have a dictionary of function name mapped to the function, and call the function by a dictionary lookup.
How can I do this in Java?
A:
Java doesn't have first-class methods, so the command pattern is your friend...
disclaimer: code not tested!
public interface Command
{
void invoke();
}
Map<String, Command> commands = new HashMap<String, Command>();
commands.put("function1", new Command()
{
public void invoke() { System.out.println("hello world"); }
});
commands.get("function1").invoke();
A:
There are several ways to approach this problem. Most of these were posted already:
Commands - Keep a bunch of objects that have an execute() or invoke() method in a map; lookup the command by name, then invoke the method.
Polymorphism - More generally than commands, you can invoke methods on any related set of objects.
Finally there is Reflection - You can use reflection to get references to java.lang.Method objects. For a set of known classes/methods, this works fairly well and there isn't too much overhead once you load the Method objects. You could use this to, for example, allow a user to type java code into a command line, which you execute in real time.
Personally I would use the Command approach. Commands combine well with Template Methods, allowing you to enforce certain patterns on all your command objects. Example:
public abstract class Command {
public final Object execute(Map<String, Object> args) {
// do permission checking here or transaction management
Object retval = doExecute(args);
// do logging, cleanup, caching, etc here
return retval;
}
// subclasses override this to do the real work
protected abstract Object doExecute(Map<String, Object> args);
}
I would resort to reflection only when you need to use this kind of mapping for classes whose design you don't control, and for which it's not practical to make commands. For example, you couldn't expose the Java API in a command-shell by making commands for each method.
A:
You could use a Map<String,Method> or Map<String,Callable> etc., and then use map.get("function1").invoke(...). But usually these kinds of problems are tackled more cleanly by using polymorphism instead of a lookup.
A:
Polymorphic example..
public interface Animal {public void speak();};
public class Dog implements Animal {public void speak(){System.out.println("treat? treat? treat?");}}
public class Cat implements Animal {public void speak(){System.out.println("leave me alone");}}
public class Hamster implements Animal {public void speak(){System.out.println("I run, run, run, but never get anywhere");}}
Map<String,Animal> animals = new HashMap<String,Animal>();
animals.put("dog",new Dog());
animals.put("cat",new Cat());
animals.put("hamster",new Hamster());
for(Animal animal : animals.values()){animal.speak();}
A:
Unfortunately, Java does not have first-class functions, but consider the following interface:
public interface F<A, B> {
public B f(A a);
}
This models the type for functions from type A to type B, as first-class values that you can pass around. What you want is a Map<String, F<A, B>>.
Functional Java is a fairly complete library centered around first-class functions.
A:
As mentioned in other questions, a Map<String,MyCommandType> with anonymous inner classes is one verbose way to do it.
A variation is to use enums in place of the anonymous inner classes. Each constant of the enum can implement/override methods of the enum or implemented interface, much the same as the anonymous inner class technique but with a little less mess. I believe Effective Java 2nd Ed deals with how to initialise a map of enums. To map from the enum name merely requires calling MyEnumType.valueOf(name).
A:
As everyone else said, Java doesn't support functions as first-level objects. To achieve this, you use a Functor, which is a class that wraps a function. Steve Yegge has a nice rant about that.
To help you with this limitation, people write functor libraries: jga, Commons Functor
|
Java equivalent of function mapping in Python
|
In Python, if I have a few functions that I would like to call based on an input, I can do this:
lookup = {'function1':function1, 'function2':function2, 'function3':function3}
lookup[input]()
That is I have a dictionary of function name mapped to the function, and call the function by a dictionary lookup.
How can I do this in Java?
|
[
"Java doesn't have first-class methods, so the command pattern is your friend...\ndisclamer: code not tested!\npublic interface Command \n{\n void invoke();\n}\n\nMap<String, Command> commands = new HashMap<String, Command>();\ncommands.put(\"function1\", new Command() \n{\n public void invoke() { System.out.println(\"hello world\"); }\n});\n\ncommands.get(\"function1\").invoke();\n\n",
"There are several ways to approach this problem. Most of these were posted already:\n\nCommands - Keep a bunch of objects that have an execute() or invoke() method in a map; lookup the command by name, then invoke the method.\nPolymorphism - More generally than commands, you can invoke methods on any related set of objects.\nFinally there is Reflection - You can use reflection to get references to java.lang.Method objects. For a set of known classes/methods, this works fairly well and there isn't too much overhead once you load the Method objects. You could use this to, for example, allow a user to type java code into a command line, which you execute in real time.\n\nPersonally I would use the Command approach. Commands combine well with Template Methods, allowing you to enforce certain patterns on all your command objects. Example:\npublic abstract class Command {\n public final Object execute(Map<String, Object> args) {\n // do permission checking here or transaction management\n Object retval = doExecute(args);\n // do logging, cleanup, caching, etc here\n return retval;\n }\n // subclasses override this to do the real work\n protected abstract Object doExecute(Map<String, Object> args);\n}\n\nI would resort to reflection only when you need to use this kind of mapping for classes whose design you don't control, and for which it's not practical to make commands. For example, you couldn't expose the Java API in a command-shell by making commands for each method.\n",
"You could use a Map<String,Method> or Map<String,Callable> etc,and then use map.get(\"function1\").invoke(...). But usually these kinds of problems are tackled more cleanly by using polymorphism instead of a lookup.\n",
"Polymorphic example..\npublic interface Animal {public void speak();};\npublic class Dog implements Animal {public void speak(){System.out.println(\"treat? treat? treat?\");}}\npublic class Cat implements Animal {public void speak(){System.out.println(\"leave me alone\");}}\npublic class Hamster implements Animal {public void speak(){System.out.println(\"I run, run, run, but never get anywhere\");}}\n\nMap<String,Animal> animals = new HashMap<String,Animal>();\nanimals.put(\"dog\",new Dog());\nanimals.put(\"cat\",new Cat());\nanimals.put(\"hamster\",new Hamster());\nfor(Animal animal : animals){animal.speak();}\n\n",
"Unfortunately, Java does not have first-class functions, but consider the following interface:\npublic interface F<A, B> {\n public B f(A a);\n}\n\nThis models the type for functions from type A to type B, as first-class values that you can pass around. What you want is a Map<String, F<A, B>>.\nFunctional Java is a fairly complete library centered around first-class functions.\n",
"As mentioned in other questions, a Map<String,MyCommandType> with anonymous inner classes is one verbose way to do it.\nA variation is to use enums in place of the anonymous inner classes. Each constant of the enum can implement/override methods of the enum or implemented interface, much the same as the anonymous inner class technique but with a little less mess. I believe Effective Java 2nd Ed deals with how to initialise a map of enums. To map from the enum name merely requires calling MyEnumType.valueOf(name).\n",
"As everyone else said, Java doesn't support functions as first-level objects. To achieve this, you use a Functor, which is a class that wraps a function. Steve Yegge has a nice rant about that.\nTo help you with this limitation, people write functor libraries: jga, Commons Functor\n"
] |
[
15,
4,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"function",
"java",
"python"
] |
stackoverflow_0000934509_function_java_python.txt
|
Q:
Cast a class instance to a subclass
I'm using boto to manage some EC2 instances. It provides an Instance class. I'd like to subclass it to meet my particular needs. Since boto provides a query interface to get your instances, I need something to convert between classes. This solution seems to work, but changing the class attribute seems dodgy. Is there a better way?
from boto.ec2.instance import Instance as _Instance
class Instance(_Instance):
@classmethod
def from_instance(cls, instance):
instance.__class__ = cls
# set other attributes that this subclass cares about
return instance
A:
I wouldn't subclass and cast. I don't think casting is ever a good policy.
Instead, consider a Wrapper or Façade.
class MyThing( object ):
def __init__( self, theInstance ):
self.ec2_instance = theInstance
Now, you can subclass MyThing as much as you want and you shouldn't need to be casting your boto.ec2.instance.Instance at all. It remains as a more-or-less opaque element in your object.
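If you also want the wrapper to expose the wrapped instance's own attributes transparently, one common extension is __getattr__ delegation -- a sketch, with my_extra_method as a made-up example:
class MyThing(object):
    def __init__(self, theInstance):
        self.ec2_instance = theInstance

    def __getattr__(self, name):
        # only called when normal lookup fails, so this falls back
        # to the wrapped boto Instance for anything MyThing lacks
        return getattr(self.ec2_instance, name)

    def my_extra_method(self):
        return "something built on %s" % self.ec2_instance.id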
|
Cast a class instance to a subclass
|
I'm using boto to manage some EC2 instances. It provides an Instance class. I'd like to subclass it to meet my particular needs. Since boto provides a query interface to get your instances, I need something to convert between classes. This solution seems to work, but changing the class attribute seems dodgy. Is there a better way?
from boto.ec2.instance import Instance as _Instance
class Instance(_Instance):
@classmethod
def from_instance(cls, instance):
instance.__class__ = cls
# set other attributes that this subclass cares about
return instance
|
[
"I wouldn't subclass and cast. I don't think casting is ever a good policy. \nInstead, consider a Wrapper or Façade.\nclass MyThing( object ):\n def __init__( self, theInstance ):\n self.ec2_instance = theInstance \n\nNow, you can subclass MyThing as much as you want and you shouldn't need to be casting your boto.ec2.instance.Instance at all. It remains as a more-or-less opaque element in your object.\n"
] |
[
7
] |
[] |
[] |
[
"boto",
"class",
"python",
"subclass"
] |
stackoverflow_0000935448_boto_class_python_subclass.txt
|
Q:
Scientific Plotting in Python
I have a large data set of tuples containing (time of event, latitude, longitude) that I need to visualize. I was hoping to generate a 'movie'-like xy-plot, but was wondering if anyone has a better idea or if there is an easy way to do this in Python?
Thanks in advance for the help,
--Leo
A:
get matplotlib
A:
The easiest option is matplotlib. Two particular solutions that might work for you are:
1) You can generate a series of plots, each a snapshot at a given time. These can either be displayed as a dynamic plot in matplotlib, where the axes stay the same and the data moves around; or you can save the series of plots to separate files and later combine them to make a movie (using a separate application). There are a number of examples in the official examples for doing these things.
2) A simple scatter plot, where the colors of the circles change with time, might work well for your data. This is super easy. See this, for example, which produces this figure:
(figure: http://matplotlib.sourceforge.net/plot_directive/mpl_examples/pylab_examples/ellipse_collection.hires.png)
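A minimal sketch of option 2 with made-up data, mapping event time to marker color via the c= argument:
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)            # event times (fake data)
lat = 30 + 5 * np.random.rand(200)     # fake latitudes
lon = -90 + 5 * np.random.rand(200)    # fake longitudes

plt.scatter(lon, lat, c=t)             # color encodes time of event
cb = plt.colorbar()
cb.set_label('time of event')
plt.xlabel('longitude')
plt.ylabel('latitude')
plt.show()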
A:
I'd try rpy. All the power of R, from within python.
http://rpy.sourceforge.net/
rpy is awesome.
Check out the CRAN library for animations,
http://cran.r-project.org/web/packages/animation/index.html
Of course, you have to learn a bit about R to do this, but if you're planning to do this kind of thing routinely in future it will be well worth your while to learn.
A:
If you are interested in scientific plotting using Python then have a look at Mlab: http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html
It allows you to plot 2d / 3d and animate your data and the quality of the charts is really high.
A:
Enthought's Chaco is designed for interactive/updating plots. The API takes a little while to get used to, but once you're there it's a fantastic framework to work with.
A:
I have had reasonable success with Python applications generating SVG with animation features embedded, but this was with a smaller set of elements than what you probably have. For example, if your data is about a seismic event, show a circle that appears when the event happened and grows in size to match the magnitude of the event. A moving indicator over a timeline is really simple to add.
Kaleidoscope (Opera, others maybe, Safari not) shows lots of pieces moving around, and I found it inspirational. There is lots of other good SVG tutorial content on the site too.
A:
You might want to look at PyQwt. It's a plotting library which works with Qt/PyQt.
Several of the PyQwt examples (in the qt4examples directory) show how to create "moving" / dynamically changing plots -- look at CPUplot.py, MapDemo.py, DataDemo.py.
|
Scientific Plotting in Python
|
I have a large data set of tuples containing (time of event, latitude, longitude) that I need to visualize. I was hoping to generate a 'movie'-like xy-plot, but was wondering if anyone has a better idea or if there is an easy way to do this in Python?
Thanks in advance for the help,
--Leo
|
[
"get matplotlib\n",
"The easiest option is matplotlib. Two particular solutions that might work for you are:\n1) You can generate a series of plots, each a snapshot at a given time. These can either be displayed as a dynamic plot in matplotlib, where the axes stay the same and the data moves around; or you can save the series of plots to separate files and later combine them to make a movie (using a separate application). There a number of examples in the official examples for doing these things.\n2) A simple scatter plot, where the colors of the circles changes with time might work well for your data. This is super easy. See this, for example, which produces this figure\nalt text http://matplotlib.sourceforge.net/plot_directive/mpl_examples/pylab_examples/ellipse_collection.hires.png\n",
"I'd try rpy. All the power of R, from within python.\nhttp://rpy.sourceforge.net/\nrpy is awesome.\nCheck out the CRAN library for animations,\nhttp://cran.r-project.org/web/packages/animation/index.html\nOf course, you have to learn a bit about R to do this, but if you're planning to do this kind of thing routinely in future it will be well worth your while to learn.\n",
"If you are interested in scientific plotting using Python then have a look at Mlab: http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html\nIt allows you to plot 2d / 3d and animate your data and the quality of the charts is really high. \n",
"Enthought's Chaco is designed for interactive/updating plots. the api and such takes a little while to get use to, but once you're there it's a fantastic framework to work with.\n",
"I have had reasonable success with Python applications generating SVG with animation features embedded, but this was with a smaller set of elements than what you probably have. For example, if your data is about a seismic event, show a circle that shows up when the event happened and grows in size matching the magnitude of the event. A moving indicator over a timeline is really simple to add.\nKaleidoscope (Opera, others maybe, Safari not) shows lots of pieces moving around and I found inspirational. Lots of other good SVG tutorial content on the site too.\n",
"You might want to look at PyQwt. It's a plotting library which works with Qt/PyQt. \nSeveral of the PyQwt examples (in the qt4examples directory) show how to create \"moving\" / dynamically changing plots -- look at CPUplot.py, MapDemo.py, DataDemo.py. \n"
] |
[
16,
8,
4,
3,
2,
0,
0
] |
[] |
[] |
[
"plot",
"python",
"scientific_computing",
"visualization"
] |
stackoverflow_0000816086_plot_python_scientific_computing_visualization.txt
|
Q:
How do I pass an exception between threads in python
I need to pass exceptions across a thread boundary.
I'm using python embedded in a non thread safe app which has one thread safe call, post_event(callable), which calls callable from its main thread.
I am running a pygtk gui in a separate thread, so when a button is clicked I post an event with post_event, and wait for it to finish before continuing. But I need the caller to know if the callee threw an exception, and raise it if so. I'm not too worried about the traceback, just the exception itself.
My code is roughly:
class Callback():
def __init__(self,func,*args):
self.func=func
self.args=args
self.event=threading.Event()
self.result=None
self.exception=None
def __call__(self):
gtk.gdk.threads_enter()
try:
self.result=self.func(*self.args)
except:
#what do I do here? How do I store the exception?
pass
finally:
gtk.gdk.threads_leave()
self.event.set()
def post(self):
post_event(self)
gtk.gdk.threads_leave()
self.event.wait()
gtk.gdk.threads_enter()
if self.exception:
raise self.exception
return self.result
Any help appreciated, thanks.
A:
#what do I do here? How do I store the exception?
Use sys.exc_info()[:2], see this wiki
Best way to communicate among threads is Queue. Have the main thread instantiate a Queue.Queue instance and pass it to subthreads; when a subthread has something to communicate back to the master it uses .put on that queue (e.g. a tuple with thread id, exception type, exception value -- or, other useful info, not necessarily exception-related, just make sure the first item of a tuple identifies the kind of info;-). The master can .get those info-units back when it wants, with various choices for synchronization and so on.
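A minimal sketch of the Queue approach (the worker body here is just a stand-in for the real work):
import sys, Queue, threading

q = Queue.Queue()

def worker():
    try:
        raise ValueError("boom")        # stand-in for the real work
    except:
        q.put(sys.exc_info()[:2])       # ship (type, value) to the master

t = threading.Thread(target=worker)
t.start()
t.join()
exc_type, exc_value = q.get()
raise exc_type, exc_value               # re-raise in the main thread
In the Callback class from the question, the same idea is just self.exception = sys.exc_info()[1] inside the except block; post() can then re-raise it exactly as already written.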
|
How do I pass an exception between threads in python
|
I need to pass exceptions across a thread boundary.
I'm using python embedded in a non thread safe app which has one thread safe call, post_event(callable), which calls callable from its main thread.
I am running a pygtk gui in a separate thread, so when a button is clicked I post an event with post_event, and wait for it to finish before continuing. But I need the caller to know if the callee threw an exception, and raise it if so. I'm not too worried about the traceback, just the exception itself.
My code is roughly:
class Callback():
def __init__(self,func,*args):
self.func=func
self.args=args
self.event=threading.Event()
self.result=None
self.exception=None
def __call__(self):
gtk.gdk.threads_enter()
try:
self.result=self.func(*self.args)
except:
#what do I do here? How do I store the exception?
pass
finally:
gtk.gdk.threads_leave()
self.event.set()
def post(self):
post_event(self)
gtk.gdk.threads_leave()
self.event.wait()
gtk.gdk.threads_enter()
if self.exception:
raise self.exception
return self.result
Any help appreciated, thanks.
|
[
"\n#what do I do here? How do I store the exception?\n\nUse sys.exc_info()[:2], see this wiki\nBest way to communicate among threads is Queue. Have the main thread instantiate a Queue.Queue instance and pass it to subthreads; when a subthread has something to communicate back to the master it uses .put on that queue (e.g. a tuple with thread id, exception type, exception value -- or, other useful info, not necessarily exception-related, just make sure the first item of a tuple identifies the kind of info;-). The master can .get those info-units back when it wants, with various choices for synchronization and so on.\n"
] |
[
13
] |
[] |
[] |
[
"exception",
"multithreading",
"python"
] |
stackoverflow_0000936556_exception_multithreading_python.txt
|
Q:
Python and subprocess
This is for a script I'm working on. It's supposed to run an .exe file in the loop below. (By the way, not sure if it's visible, but for el in ('90','52.6223',...) is outside the loop and makes a nested loop with the rest.) I'm not sure if the ordering is correct or not. Also, when the .exe file is run, it spits some stuff out and I need a certain line printed to the screen (hence where you see AspecificLine= ... ). Any helpful answers would be great!
for el in ('90.','52.62263.','26.5651.','10.8123.'):
if el == '90.':
z = ('0.')
elif el == '52.62263.':
z = ('0.', '72.', '144.', '216.', '288.')
elif el == '26.5651':
z = ('324.', '36.', '108.', '180.', '252.')
else el == '10.8123':
z = ('288.', '0.', '72.', '144.', '216.')
for az in z:
comstring = os.path.join('Path where .exe file is')
comstring = os.path.normpath(comstring)
comstring = '"' + comstring + '"'
comstringfull = comstring + ' -el ' + str(el) + ' -n ' + str(z)
print 'The python program is running this command with the shell:'
print comstring + '\n'
process = Popen(comstring,shell=True,stderr=STDOUT,stdout=PIPE)
outputstring = myprocesscommunicate()
print 'The command shell returned the following back to python:'
print outputstring[0]
AspecificLine=linecache.getline(' ??filename?? ', # ??
sys.stderr.write('az', 'el', 'AREA' ) # ??
A:
Using shell=True is wrong because that needlessly invokes the shell.
Instead, do this:
import os
from subprocess import Popen, PIPE, STDOUT

for el in ('90.', '52.62263.', '26.5651.', '10.8123.'):
    if el == '90.':
        z = ('0.',)  # note the trailing comma: a one-element tuple, not a string
    elif el == '52.62263.':
        z = ('0.', '72.', '144.', '216.', '288.')
    elif el == '26.5651.':
        z = ('324.', '36.', '108.', '180.', '252.')
    else:  # el == '10.8123.'
        z = ('288.', '0.', '72.', '144.', '216.')

    for az in z:
        exepath = os.path.join('Path where .exe file is')
        exepath = os.path.normpath(exepath)
        cmd = [exepath, '-el', str(el), '-n', str(az)]  # az is the inner loop value

        print 'The python program is running this command:'
        print cmd

        process = Popen(cmd, stderr=STDOUT, stdout=PIPE)
        outputstring = process.communicate()[0]

        print 'The command returned the following back to python:'
        print outputstring
        outputlist = outputstring.splitlines()
        AspecificLine = outputlist[22]  # get some specific line. 23?
        print AspecificLine
|
Python and subprocess
|
This is for a script I'm working on. It's supposed to run an .exe file in the loop below. (By the way, not sure if it's visible, but for el in ('90','52.6223',...) is outside the loop and makes a nested loop with the rest.) I'm not sure if the ordering is correct or not. Also, when the .exe file is run, it spits some stuff out and I need a certain line printed to the screen (hence where you see AspecificLine= ... ). Any helpful answers would be great!
for el in ('90.','52.62263.','26.5651.','10.8123.'):
if el == '90.':
z = ('0.')
elif el == '52.62263.':
z = ('0.', '72.', '144.', '216.', '288.')
elif el == '26.5651':
z = ('324.', '36.', '108.', '180.', '252.')
else el == '10.8123':
z = ('288.', '0.', '72.', '144.', '216.')
for az in z:
comstring = os.path.join('Path where .exe file is')
comstring = os.path.normpath(comstring)
comstring = '"' + comstring + '"'
comstringfull = comstring + ' -el ' + str(el) + ' -n ' + str(z)
print 'The python program is running this command with the shell:'
print comstring + '\n'
process = Popen(comstring,shell=True,stderr=STDOUT,stdout=PIPE)
outputstring = myprocesscommunicate()
print 'The command shell returned the following back to python:'
print outputstring[0]
AspecificLine=linecache.getline(' ??filename?? ', # ??
sys.stderr.write('az', 'el', 'AREA' ) # ??
|
[
"Using shell=True is wrong because that needlessy invokes the shell.\nInstead, do this:\nfor el in ('90.','52.62263.','26.5651.','10.8123.'):\n if el == '90.':\n z = ('0.')\n elif el == '52.62263.':\n z = ('0.', '72.', '144.', '216.', '288.')\n elif el == '26.5651':\n z = ('324.', '36.', '108.', '180.', '252.')\n else el == '10.8123':\n z = ('288.', '0.', '72.', '144.', '216.')\n\n for az in z:\n\n exepath = os.path.join('Path where .exe file is')\n exepath = os.path.normpath(comstring) \n cmd = [exepath, '-el', str(el), '-n', str(z)]\n\n print 'The python program is running this command:'\n print cmd\n\n process = Popen(cmd, stderr=STDOUT, stdout=PIPE)\n outputstring = process.communicate()[0]\n\n print 'The command returned the following back to python:'\n print outputstring\n outputlist = outputstring.splitlines()\n AspecificLine = outputlist[22] # get some specific line. 23?\n print AspecificLine\n\n"
] |
[
1
] |
[] |
[] |
[
"loops",
"popen",
"python",
"subprocess"
] |
stackoverflow_0000936505_loops_popen_python_subprocess.txt
|
Q:
Comparison of data in SQL through Python
I have to parse a very complex dump (whatever it is). I have done the parsing with Python. Since the parsed data is huge, I have to feed it into the database (SQL). I have done this too. Now I have to compare the data that is now in the SQL database.
Actually, I have to compare the data of the 1st dump with the data of the 2nd dump. Both dumps have the same fields (attributes), but the values of their fields may be different, so I have to detect this change. I don't know how to do all this using Python as my front end.
A:
If you don't have MINUS or EXCEPT, there is also this, which will show all non-matching rows using a UNION/GROUP BY trick (the source column is aliased src here, since table is a reserved word):
SELECT MAX(src), data1, data2
FROM (
    SELECT 'foo1' AS src, foo1.data1, foo1.data2 FROM foo1
    UNION ALL
    SELECT 'foo2' AS src, foo2.data1, foo2.data2 FROM foo2
) AS X
GROUP BY data1, data2
HAVING COUNT(*) = 1
ORDER BY data1, data2
I have a general purpose table compare SP which also can do a more complex table compare with left and right and inner joins and monetary threshold (or threshold percentage) and subset criteria.
A:
Why not do the 'detect change' in SQL? Something like:
select foo.data1, foo.data2 from foo where foo.id = 'dump1'
minus
select foo.data1, foo.data2 from foo where foo.id = 'dump2'
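A minimal sketch of driving that comparison from Python -- sqlite3 here, with made-up database, table, and column names (SQLite spells MINUS as EXCEPT):
import sqlite3

conn = sqlite3.connect('dumps.db')
cur = conn.cursor()
cur.execute("""
    SELECT data1, data2 FROM foo WHERE id = 'dump1'
    EXCEPT
    SELECT data1, data2 FROM foo WHERE id = 'dump2'
""")
for row in cur.fetchall():
    print row                  # rows present in dump1 but not in dump2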
|
Comparison of data in SQL through Python
|
I have to parse a very complex dump (whatever it is). I have done the parsing with Python. Since the parsed data is huge, I have to feed it into the database (SQL). I have done this too. Now I have to compare the data that is now in the SQL database.
Actually, I have to compare the data of the 1st dump with the data of the 2nd dump. Both dumps have the same fields (attributes), but the values of their fields may be different, so I have to detect this change. I don't know how to do all this using Python as my front end.
|
[
"If you don't have MINUS or EXCEPT, there is also this, which will show all non-matching rows using a UNION/GROUP BY trick\nSELECT MAX(table), data1, data2\nFROM (\n SELECT 'foo1' AS table, foo1.data1, foo1.data2 FROM foo1\n UNION ALL\n SELECT 'foo2' AS table, foo2.data1, foo2.data2 FROM foo2\n) AS X\nGROUP BY data1, data2\nHAVING COUNT(*) = 1\nORDER BY data1, data2\n\nI have a general purpose table compare SP which also can do a more complex table compare with left and right and inner joins and monetary threshold (or threshold percentage) and subset criteria.\n",
"Why not do the 'dectect change' in SQL? Something like:\nselect foo.data1, foo.data2 from foo where foo.id = 'dump1'\nminus\nselect foo.data1, foo.data2 from foo where foo.id = 'dump2'\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"sql"
] |
stackoverflow_0000934117_python_sql.txt
|
Q:
Why does "**" bind more tightly than negation?
I was just bitten by the following scenario:
>>> -1 ** 2
-1
Now, digging through the Python docs, it's clear that this is intended behavior, but why? I don't work with any other languages with power as a builtin operator, but not having unary negation bind as tightly as possible seems dangerously counter-intuitive to me.
Is there a reason it was done this way? Do other languages with power operators behave similarly?
A:
That behaviour is the same as in math formulas, so I am not sure what the problem is, or why it is counter-intuitive. Can you explain where you have seen something different? "**" always binds more tightly than "-": -x^2 is not the same as (-x)^2
Just use (-1) ** 2, exactly as you'd do in math.
A:
Short answer: it's the standard way precedence works in math.
Let's say I want to evaluate the polynomial 3x^3 - x^2 + 5.
def polynomial(x):
return 3*x**3 - x**2 + 5
It looks better than...
def polynomial(x):
    return 3*x**3 - (x**2) + 5
And the first way is the way mathematicians do it. Other languages with exponentiation work the same way. Note that unary negation binds more tightly than multiplication in Python, but the result is the same either way:
-x*y === -(x*y)
Which is also the way they do it in math.
A:
If I had to guess, it would be because having an exponentiation operator allows programmers to easily raise numbers to fractional powers. Negative numbers raised to fractional powers end up with an imaginary component (usually), so that can be avoided by binding ** more tightly than unary -. Most languages don't like imaginary numbers.
Ultimately, of course, it's just a convention - and to make your code readable by yourself and others down the line, you'll probably want to explicitly group your (-1) so no one else gets caught by the same trap :) Good luck!
A:
It seems intuitive to me.
First, because it's consistent with mathematical notation: -2^2 = -4.
Second, the ** operator was introduced long ago by FORTRAN. In FORTRAN, -2**2 is -4 as well.
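Either way, the precedence is easy to verify interactively in Python:
>>> -1 ** 2        # parsed as -(1 ** 2)
-1
>>> (-1) ** 2
1
>>> -3 ** 2        # -(3 ** 2)
-9
>>> 2 ** -1        # unary minus on the right of ** binds first
0.5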
|
Why does "**" bind more tightly than negation?
|
I was just bitten by the following scenario:
>>> -1 ** 2
-1
Now, digging through the Python docs, it's clear that this is intended behavior, but why? I don't work with any other languages with power as a builtin operator, but not having unary negation bind as tightly as possible seems dangerously counter-intuitive to me.
Is there a reason it was done this way? Do other languages with power operators behave similarly?
|
[
"That behaviour is the same as in math formulas, so I am not sure what the problem is, or why it is counter-intuitive. Can you explain where have you seen something different? \"**\" always bind more than \"-\": -x^2 is not the same as (-x)^2\nJust use (-1) ** 2, exactly as you'd do in math.\n",
"Short answer: it's the standard way precedence works in math.\nLet's say I want to evaluate the polynomial 3x3 - x2 + 5.\ndef polynomial(x):\n return 3*x**3 - x**2 + 5\n\nIt looks better than...\ndef polynomial\n return 3*x**3 - (x**2) + 5\n\nAnd the first way is the way mathematicians do it. Other languages with exponentiation work the same way. Note that the negation operator also binds more loosely than multiplication, so\n-x*y === -(x*y)\n\nWhich is also the way they do it in math.\n",
"If I had to guess, it would be because having an exponentiation operator allows programmers to easily raise numbers to fractional powers. Negative numbers raised to fractional powers end up with an imaginary component (usually), so that can be avoided by binding ** more tightly than unary -. Most languages don't like imaginary numbers.\nUltimately, of course, it's just a convention - and to make your code readable by yourself and others down the line, you'll probably want to explicitly group your (-1) so no one else gets caught by the same trap :) Good luck!\n",
"It seems intuitive to me.\nFist, because it's consistent with mathematical notaiton: -2^2 = -4.\nSecond, the operator ** was widely introduced by FORTRAN long time ago. In FORTRAN, -2**2 is -4, as well. \n"
] |
[
23,
5,
3,
1
] |
[
"Ocaml doesn't do the same\n# -12.0**2.0\n ;;\n- : float = 144.\n\nThat's kind of weird...\n# -12.0**0.5;;\n- : float = nan\n\nLook at that link though...\norder of operations\n"
] |
[
-1
] |
[
"operator_precedence",
"python"
] |
stackoverflow_0000936904_operator_precedence_python.txt
|
Q:
Python Mod_WSGI Output Buffer
This is a bit of a tricky question;
I'm working with mod_wsgi in python and want to make an output buffer that yields HTML on an ongoing basis (until the page is done loading).
Right now I have my script set up so that the Application() function creates a separate 'Page' thread for the page code, then immediately after, it runs a continuous loop for the Output Buffer using python's Queue lib.
Is there a better way to have this set up? I thought about making the Output Buffer be the thread (instead of Page), but the problem with that is that the Application() function is the only function that can yield the HTML to Apache, which (as far as I can tell) makes this idea impossible.
The downside I'm seeing with my current setup, is in the event of an error, I can't easily interrupt the buffer and exit without the Page thread continuing on for a bit.
(It kinda sucks that mod_wsgi doesn't have a built-in output buffer to handle this; I hate loading the entire page and then sending output just once, as it results in a much slower page load.)
A:
mod_wsgi should have built-in support for generators. So if you're using a framework like CherryPy, you just need to do:
def index():
yield "Some output"
#Do Somemore work
yield "Some more output"
Each yield returns a chunk of the page to the user.
Here are some basics from CherryPy on their implementation and how it works: http://www.cherrypy.org/wiki/ReturnVsYield
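At the raw WSGI level, the same idea is just a generator application -- a minimal sketch:
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    yield '<html><body><h1>First chunk</h1>'
    # ... keep working; each yield hands a chunk to the server ...
    yield '<p>Second chunk</p></body></html>'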
A:
(It kinda sucks that mod_wsgi doesn't have a build in output buffer to handle this, I hate loading the entire page then sending output just once, it results in a much slower page load).
Unless you're doing some kind of streaming or asynchronous application, you want to send the entire page all at once 99.9% of the time. The only exception that I can think of is if you're sending a big webpage (and by big, I mean in the hundreds of Megabytes).
The reason why I mention this is to point out that if you're having performance problems, it's likely not because you're buffering output. The simplest way to handle this is to do something like this:
def Application(environ, start_response):
start_response('200 OK', [('Content-type','text/html')])
response = []
response.append('<h1>')
response.append('hello, world!')
response.append('</h1>')
return [''.join(response)] #returns ['<h1>hello, world!</h1>']
Your best bet is to use a mutable data structure like a list to hold the chunks of the message and then join them together in a string as I did above. Unless you have some kind of special need, this is probably the best general approach.
|
Python Mod_WSGI Output Buffer
|
This is a bit of a tricky question;
I'm working with mod_wsgi in python and want to make an output buffer that yields HTML on an ongoing basis (until the page is done loading).
Right now I have my script set up so that the Application() function creates a separate 'Page' thread for the page code, then immediately after, it runs a continuous loop for the Output Buffer using python's Queue lib.
Is there a better way to have this set up? I thought about making the Output Buffer be the thread (instead of Page), but the problem with that is that the Application() function is the only function that can yield the HTML to Apache, which (as far as I can tell) makes this idea impossible.
The downside I'm seeing with my current setup, is in the event of an error, I can't easily interrupt the buffer and exit without the Page thread continuing on for a bit.
(It kinda sucks that mod_wsgi doesn't have a built-in output buffer to handle this; I hate loading the entire page and then sending output just once, as it results in a much slower page load.)
|
[
"mod_wsgi should have built in support for Generators. So if your using a Framework like CherryPy you just need to do:\ndef index():\n yield \"Some output\"\n #Do Somemore work\n yield \"Some more output\"\n\nWhere each yield will return to the user a chunk of the page. \nHere is some basics from CherrPy on there implementation and how it works http://www.cherrypy.org/wiki/ReturnVsYield\n",
"\n(It kinda sucks that mod_wsgi doesn't have a build in output buffer to handle this, I hate loading the entire page then sending output just once, it results in a much slower page load).\n\nUnless you're doing some kind of streaming or asynchronous application, you want to send the entire page all at once 99.9% of the time. The only exception that I can think of is if you're sending a big webpage (and by big, I mean in the hundreds of Megabytes).\nThe reason why I mention this is to point out that if you're having performance problems, it's likely not because you're buffering output. The simplest way to handle this is to do something like this:\ndef Application(environ, start_response):\n start_response('200 Ok', [('Content-type','text/plain')])\n response = []\n response.append('<h1>')\n response.append('hello, world!')\n response.append('</h1>')\n return [''.join(response)] #returns ['<h1>hello, world!</h1>']\n\nYour best bet is to use a mutable data structure like a list to hold the chunks of the message and then join them together in a string as I did above. Unless you have some kind of special need, this is probably the best general approach.\n"
] |
[
2,
2
] |
[] |
[] |
[
"buffer",
"mod_wsgi",
"output_buffering",
"python"
] |
stackoverflow_0000935978_buffer_mod_wsgi_output_buffering_python.txt
|
Q:
Keep code from running during syncdb
I have some code that causes syncdb to throw an error (because it tries to access the model before the tables are created).
Is there a way to keep the code from running on syncdb? something like:
if not syncdb:
run_some_code()
Thanks :)
edit: PS - I thought about using the post_init signal... for the code that accesses the db, is that a good idea?
More info
Here is some more info as requested :)
I've run into this a couple of times. For instance, I was hacking on django-cron and determined it was necessary to make sure there are no existing jobs when you load Django (because it searches all the installed apps for jobs and adds them on load anyway).
So I added the following code to the top of the __init__.py file:
import sqlite3
try:
# Delete all the old jobs from the database so they don't interfere with this instance of django
oldJobs = models.Job.objects.all()
for oldJob in oldJobs:
oldJob.delete()
except sqlite3.OperationalError:
# When you do syncdb for the first time, the table isn't
# there yet and throws a nasty error... until now
pass
For obvious reasons this is crap: it's tied to sqlite, and I'm sure there are better places to put this code (this is just how I happened upon the issue), but it works.
As you can see the error you get is Operational Error (in sqlite) and the stack trace says something along the lines of "table django_cron_job not found"
Solution
In the end, the goal was to run some code before any pages were loaded.
This can be accomplished by executing it in the urls.py file, since it has to be imported before a page can be served (obviously).
And I was able to remove that ugly try/except block :) Thank god (and S. Lott)
A:
"edit: PS - I thought about using the post_init signal... for the code that accesses the db, is that a good idea?"
Never.
If you have code that's accessing the model before the tables are created, you have big, big problems. You're probably doing something seriously wrong.
Normally, you run syncdb approximately once. The database is created. And your web application uses the database.
Sometimes, you made a design change, drop and recreate the database. And then your web application uses that database for a long time.
You (generally) don't need code in an __init__.py module. You should (almost) never have executable code that does real work in an __init__.py module. It's very, very rare, and inappropriate for Django.
I'm not sure why you're messing with __init__.py when Django Cron says that you make your scheduling arrangements in urls.py.
Edit
Clearing records is one thing.
Messing around with __init__.py and Django-cron's base.py are clearly completely wrong ways to do this. If it's that complicated, you're doing it wrong.
It's impossible to tell what you're trying to do, but it should be trivial.
Your urls.py can only run after syncdb and after all of the ORM material has been configured and bound correctly.
Your urls.py could, for example, delete some rows and then add some rows to a table. At this point, all syncdb issues are out of the way.
Why don't you have your logic in urls.py?
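A minimal sketch of that, with hypothetical app and model names (module-level code in urls.py runs once per process, after the ORM is fully configured, and urls.py is only imported when pages are served, not during syncdb):
# urls.py
from django.conf.urls.defaults import *
from myapp.models import Job   # hypothetical names

Job.objects.all().delete()     # tables exist by the time urls.py loads

urlpatterns = patterns('',
    # ... the usual url() entries ...
)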
A:
Code that tries to access the models before they're created can pretty much exist only at the module level; it would have to be executable code run when the module is imported, as your example indicates. This is, as you've guessed, the reason why syncdb fails. It tries to import the module, but the act of importing the module causes application-level code to execute; a "side-effect" if you will.
The desire to avoid module imports that cause side-effects is so strong in Python that the if __name__ == '__main__': convention for executable python scripts has become commonplace. When just loading a code library causes an application to start executing, headaches ensue :-)
For Django apps, this becomes more than a headache. Consider the effect of having oldJob.delete() executed every time the module is imported. It may seem like it's executing only once when you run with the Django development server, but in a production environment it will get executed quite often. If you use Apache, for example, Apache will frequently fire up several child processes waiting around to handle requests. As a long-running server progresses, your Django app will get bootstrapped every time a handler is forked for your web server, meaning that the module will be imported and delete() will be called several times, often unpredictably. A signal won't help, unfortunately, as the signal could be fired every time an Apache process is initialized as well.
It isn't, btw, just a webserver that could cause your code to execute inadvertently. If you use tools like epydoc, for example, they will import your code to generate API documentation. This in turn would cause your application logic to start executing, which is obviously an undesired side-effect of just running a documentation parser.
For this reason, cleanup code like this is best handled by a cron job, which looks for stale jobs on a periodic basis and cleans up the DB. This custom script can also be run manually, or by any process (for example during a deployment, or as part of your unit test setUp() function to ensure a clean test run). No matter how you do it, the important point is that code like this should always be executed explicitly, rather than implicitly as a result of opening the source file.
I hope that helps. I know it doesn't provide a way to determine if syncdb is running, but the syncdb issue will magically vanish if you design your Django app with production deployment in mind.
|
Keep code from running during syncdb
|
I have some code that causes syncdb to throw an error (because it tries to access the model before the tables are created).
Is there a way to keep the code from running on syncdb? something like:
if not syncdb:
run_some_code()
Thanks :)
edit: PS - I thought about using the post_init signal... for the code that accesses the db, is that a good idea?
More info
Here is some more info as requested :)
I've run into this a couple of times. For instance, I was hacking on django-cron and determined it was necessary to make sure there are no existing jobs when you load Django (because it searches all the installed apps for jobs and adds them on load anyway).
So I added the following code to the top of the __init__.py file:
import sqlite3
try:
# Delete all the old jobs from the database so they don't interfere with this instance of django
oldJobs = models.Job.objects.all()
for oldJob in oldJobs:
oldJob.delete()
except sqlite3.OperationalError:
# When you do syncdb for the first time, the table isn't
# there yet and throws a nasty error... until now
pass
For obvious reasons this is crap: it's tied to sqlite, and I'm sure there are better places to put this code (this is just how I happened upon the issue), but it works.
As you can see the error you get is Operational Error (in sqlite) and the stack trace says something along the lines of "table django_cron_job not found"
Solution
In the end, the goal was to run some code before any pages were loaded.
This can be accomplished by executing it in the urls.py file, since it has to be imported before a page can be served (obviously).
And I was able to remove that ugly try/except block :) Thank god (and S. Lott)
|
[
"\"edit: PS - I thought about using the post_init signal... for the code that accesses the db, is that a good idea?\"\nNever.\nIf you have code that's accessing the model before the tables are created, you have big, big problems. You're probably doing something seriously wrong.\nNormally, you run syncdb approximately once. The database is created. And your web application uses the database.\nSometimes, you made a design change, drop and recreate the database. And then your web application uses that database for a long time.\nYou (generally) don't need code in an __init__.py module. You should (almost) never have executable code that does real work in an __init__.py module. It's very, very rare, and inappropriate for Django.\nI'm not sure why you're messing with __init__.py when Django Cron says that you make your scheduling arrangements in urls.py.\n\nEdit\nClearing records is one thing.\nMessing around with __init__.py and Django-cron's base.py are clearly completely wrong ways to do this. If it's that complicated, you're doing it wrong.\nIt's impossible to tell what you're trying to do, but it should be trivial.\nYour urls.py can only run after syncdb and after all of the ORM material has been configured and bound correctly.\nYour urls.py could, for example, delete some rows and then add some rows to a table. At this point, all syncdb issues are out of the way.\nWhy don't you have your logic in urls.py?\n",
"Code that tries to access the models before they're created can pretty much exist only at the module level; it would have to be executable code run when the module is imported, as your example indicates. This is, as you've guessed, the reason by syncdb fails. It tries to import the module, but the act of importing the module causes application-level code to execute; a \"side-effect\" if you will.\nThe desire to avoid module imports that cause side-effects is so strong in Python that the if __name__ == '__main__': convention for executable python scripts has become commonplace. When just loading a code library causes an application to start executing, headaches ensue :-)\nFor Django apps, this becomes more than a headache. Consider the effect of having oldJob.delete() executed every time the module is imported. It may seem like it's executing only once when you run with the Django development server, but in a production environment it will get executed quite often. If you use Apache, for example, Apache will frequently fire up several child processes waiting around to handle requests. As a long-running server progresses, your Django app will get bootstrapped every time a handler is forked for your web server, meaning that the module will be imported and delete() will be called several times, often unpredictably. A signal won't help, unfortunately, as the signal could be fired every time an Apache process is initialized as well.\nIt isn't, btw, just a webserver that could cause your code to execute inadvertently. If you use tools like epydoc, for example they will import your code to generate API documentation. This in turn would cause your application logic to start executing, which is obviously an undesired side-effect of just running a documentation parser. \nFor this reason, cleanup code like this is either best handled by a cron job, which looks for stale jobs on a periodic basis and cleans up the DB. This custom script can also be run manually, or by any process (for example during a deployment, or as part of your unit test setUp() function to ensure a clean test run). No matter how you do it, the important point is that code like this should always be executed explicitly, rather than implicitly as a result of opening the source file.\nI hope that helps. I know it doesn't provide a way to determine if syncdb is running, but the syncdb issue will magically vanish if you design your Django app with production deployment in mind.\n"
] |
[
4,
2
] |
[] |
[] |
[
"django",
"django_models",
"django_syncdb",
"python",
"syncdb"
] |
stackoverflow_0000937316_django_django_models_django_syncdb_python_syncdb.txt
|
Q:
Python Tkinter Tk/Tcl usage Problem
I am using Tcl from Python Tkinter Module like below
from Tkinter import *
Tcl = Tcl().eval
Tcl("info patchlevel")
'8.3.5'
You can see that Tcl version 8.3 is selected by Python.
But I also have Tcl 8.4 on my system.
Now, how do I make Python select Tcl 8.4 in the Tkinter module?
Tcl 8.3 does not have the Expect package, so I cannot use the Expect package in Python Tcl/Tk.
Thanks
A:
I think the version of Tcl/Tk used by Python is determined at compile time, so you need to look at the build configuration and recompile Python against the version of Tcl/Tk you want to use. Maybe recompiling the _tkinter.so library is enough, since it's loaded dynamically.
|
Python Tkinter Tk/Tcl usage Problem
|
I am using Tcl from Python Tkinter Module like below
from Tkinter import *
Tcl = Tcl().eval
Tcl("info patchlevel")
'8.3.5'
You can see that Tcl version 8.3 is selected by Python.
But I also have Tcl 8.4 on my system.
Now, how do I make Python select Tcl 8.4 in the Tkinter module?
Tcl 8.3 does not have the Expect package, so I cannot use the Expect package in Python Tcl/Tk.
Thanks
|
[
"I think the version of Tcl/Tk is used by python is determined at compiling time. So you need to look at the code, recompile python against the version of Tcl/Tk you want to use. Maybe recompiling the _tkinter.so library is enough too, since it's loaded dynamically.\n"
] |
[
2
] |
[] |
[] |
[
"expect",
"python",
"tcl",
"tkinter"
] |
stackoverflow_0000937979_expect_python_tcl_tkinter.txt
|
Q:
Recursive Relationship with Google App Engine and BigTable
In a classic relational database, I have the following table:
CREATE TABLE Person(
Id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
MotherId int NOT NULL REFERENCES Person(Id),
FatherId int NOT NULL REFERENCES Person(Id),
FirstName nvarchar(255))
I am trying to convert this table into a Google App Engine table. My issue is with the fields MotherId and FatherId. I tried the code below, but with no luck. Python says that it doesn't know the object type Person.
class Person(db.Model):
mother = db.ReferenceProperty(Person)
father = db.ReferenceProperty(Person)
firstName = db.StringProperty()
Does someone know how we can model a recursive relationship in a Google App Engine table? How could I work around the limitation of App Engine?
UPDATE
I want to expand the problem a little bit... What if I wanted to add a collection of children?
children = db.SelfReferenceProperty(collection_name='children_set')
dad.children.append(childrenOne)
I tried this and it doesn't work. Any idea what I am doing wrong?
Thanks!
A:
I think that you want SelfReferenceProperty here
class Person(db.Model):
mother = db.SelfReferenceProperty(collection_name='mother_set')
father = db.SelfReferenceProperty(collection_name='father_set')
firstName = db.StringProperty()
Alternatively, you can put the Mother and Father relations in separate classes.
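For the children question in the UPDATE: there is no append-able children list. A reference property automatically gives the referenced entity a reverse query named by collection_name, so you set the reference on the child and read the collection on the parent. A minimal sketch (the entity values are made up):
dad = Person(firstName='Bob')
dad.put()
kid = Person(firstName='Alice', father=dad)
kid.put()

# collection_name='father_set' defines this reverse query on Person:
for child in dad.father_set:
    print child.firstName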
|
Recursive Relationship with Google App Engine and BigTable
|
In a classic relational database, I have the following table:
CREATE TABLE Person(
Id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
MotherId int NOT NULL REFERENCES Person(Id),
FatherId int NOT NULL REFERENCES Person(Id),
FirstName nvarchar(255))
I am trying to convert this table into a Google App Engine table. My issue is with the fields MotherId and FatherId. I tried the code below, but no luck: Python says that it doesn't know the object type Person.
class Person(db.Model):
mother = db.ReferenceProperty(Person)
father = db.ReferenceProperty(Person)
firstName = db.StringProperty()
Does someone know how we can model a recursive relationship in a Google App Engine table? How could I work around the limitation of App Engine?
UPDATE
I want to expand the problem a little bit... What if I wanted to add a collection of children?
children = db.SelfReferenceProperty(collection_name='children_set')
dad.children.append(childrenOne)
I tried this and it doesn't work. Any idea what I am doing wrong?
Thanks!
|
[
"I think that you want SelfReferenceProperty here\nclass Person(db.Model):\n mother = db.SelfReferenceProperty(collection_name='mother_set')\n father = db.SelfReferenceProperty(collection_name='father_set')\n firstName = db.StringProperty()\n\nAlternatively, you can put the Mother and Father relations in separate classes.\n"
] |
[
10
] |
[] |
[] |
[
"bigtable",
"database_design",
"google_app_engine",
"python"
] |
stackoverflow_0000938035_bigtable_database_design_google_app_engine_python.txt
|
Q:
Use Django ORM as standalone
Possible Duplicates:
Use only some parts of Django?
Using only the DB part of Django
I want to use the Django ORM as standalone. Despite an hour of searching Google, I'm still left with several questions:
Does it require me to set up my Python project with a settings.py file, a /myApp/ directory, and a models.py file?
Can I create a new models.py and run syncdb to have it automatically setup the tables and relationships or can I only use models from existing Django projects?
There seems to be a lot of questions regarding PYTHONPATH. If you're not calling existing models is this needed?
I guess the easiest thing would be for someone to just post a basic template or walkthrough of the process, clarifying the organization of the files e.g.:
db/
__init__.py
settings.py
myScript.py
orm/
__init__.py
models.py
And the basic essentials:
# settings.py
from django.conf import settings
settings.configure(
DATABASE_ENGINE = "postgresql_psycopg2",
DATABASE_HOST = "localhost",
DATABASE_NAME = "dbName",
DATABASE_USER = "user",
DATABASE_PASSWORD = "pass",
DATABASE_PORT = "5432"
)
# orm/models.py
# ...
# myScript.py
# import models..
And whether you need to run something like: django-admin.py inspectdb ...
(Oh, I'm running Windows, if that changes anything regarding command-line arguments.)
A:
Ah, OK, I figured it out and will post the solution for anyone attempting to do the same thing.
This solution assumes that you want to create new models.
First create a new folder to store your files. We'll call it "standAlone". Within "standAlone", create the following files:
__init__.py
myScript.py
settings.py
Obviously "myScript.py" can be named whatever.
Next, create a directory for your models.
We'll name our model directory "myApp", but realize that this is a normal Django application within a project; as such, name it appropriately for the collection of models you are writing.
Within this directory create 2 files:
__init__.py
models.py
You're going to need a copy of manage.py from either an existing Django project, or you can just grab a copy from your Django install path:
django\conf\project_template\manage.py
Copy the manage.py to your /standAlone directory. Ok so you should now have the following structure:
\standAlone
__init__.py
myScript.py
manage.py
settings.py
\myApp
__init__.py
models.py
Add the following to your myScript.py file:
# myScript.py
from django.conf import settings
settings.configure(
DATABASE_ENGINE = "postgresql_psycopg2",
DATABASE_NAME = "myDatabase",
DATABASE_USER = "myUsername",
DATABASE_PASSWORD = "myPassword",
DATABASE_HOST = "localhost",
DATABASE_PORT = "5432",
    INSTALLED_APPS = ("myApp",)
)
from django.db import models
from myApp.models import *
and add this to your settings.py file:
DATABASE_ENGINE = "postgresql_psycopg2"
DATABASE_NAME = "myDatabase"
DATABASE_USER = "myUsername"
DATABASE_PASSWORD = "myPassword"
DATABASE_HOST = "localhost"
    DATABASE_PORT = "5432"
    INSTALLED_APPS = ("myApp",)
and finally your myApp/models.py:
# myApp/models.py
from django.db import models
class MyModel(models.Model):
field = models.CharField(max_length=255)
and that's it. Now, to have Django manage your database, open a command prompt, navigate to your /standAlone directory, and run:
manage.py sql myApp
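(manage.py sql only prints the CREATE TABLE statements; manage.py syncdb will actually create the tables.) Once they exist, myScript.py can use the models directly. A small usage sketch, with the model and field names taken from the walkthrough above:
# myScript.py, continued after the configure() call and imports above
row = MyModel(field='hello')
row.save()

for obj in MyModel.objects.all():
    print obj.field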
|
Use Django ORM as standalone
|
Possible Duplicates:
Use only some parts of Django?
Using only the DB part of Django
I want to use the Django ORM as standalone. Despite an hour of searching Google, I'm still left with several questions:
Does it require me to set up my Python project with a settings.py file, a /myApp/ directory, and a models.py file?
Can I create a new models.py and run syncdb to have it automatically setup the tables and relationships or can I only use models from existing Django projects?
There seems to be a lot of questions regarding PYTHONPATH. If you're not calling existing models is this needed?
I guess the easiest thing would be for someone to just post a basic template or walkthrough of the process, clarifying the organization of the files e.g.:
db/
__init__.py
settings.py
myScript.py
orm/
__init__.py
models.py
And the basic essentials:
# settings.py
from django.conf import settings
settings.configure(
DATABASE_ENGINE = "postgresql_psycopg2",
DATABASE_HOST = "localhost",
DATABASE_NAME = "dbName",
DATABASE_USER = "user",
DATABASE_PASSWORD = "pass",
DATABASE_PORT = "5432"
)
# orm/models.py
# ...
# myScript.py
# import models..
And whether you need to run something like: django-admin.py inspectdb ...
(Oh, I'm running Windows, if that changes anything regarding command-line arguments.)
|
[
"Ah ok I figured it out and will post the solutions for anyone attempting to do the same thing.\nThis solution assumes that you want to create new models.\nFirst create a new folder to store your files. We'll call it \"standAlone\". Within \"standAlone\", create the following files:\n__init__.py\nmyScript.py\nsettings.py\n\nObviously \"myScript.py\" can be named whatever. \nNext, create a directory for your models.\nWe'll name our model directory \"myApp\", but realize that this is a normal Django application within a project, as such, name it appropriately to the collection of models you are writing. \nWithin this directory create 2 files: \n__init__.py\nmodels.py\n\nYour going to need a copy of manage.py from an either an existing Django project or you can just grab a copy from your Django install path:\ndjango\\conf\\project_template\\manage.py\n\nCopy the manage.py to your /standAlone directory. Ok so you should now have the following structure:\n\\standAlone\n __init__.py\n myScript.py\n manage.py\n settings.py\n\\myApp\n __init__.py\n models.py\n\nAdd the following to your myScript.py file:\n# settings.py\nfrom django.conf import settings\n\nsettings.configure(\n DATABASE_ENGINE = \"postgresql_psycopg2\",\n DATABASE_NAME = \"myDatabase\",\n DATABASE_USER = \"myUsername\",\n DATABASE_PASSWORD = \"myPassword\",\n DATABASE_HOST = \"localhost\",\n DATABASE_PORT = \"5432\",\n INSTALLED_APPS = (\"myApp\")\n)\n\nfrom django.db import models\nfrom myApp.models import *\n\nand add this to your settings.py file:\n DATABASE_ENGINE = \"postgresql_psycopg2\"\n DATABASE_NAME = \"myDatabase\"\n DATABASE_USER = \"myUsername\"\n DATABASE_PASSWORD = \"myPassword\"\n DATABASE_HOST = \"localhost\"\n DATABASE_PORT = \"5432\",\n INSTALLED_APPS = (\"myApp\")\n\nand finally your myApp/models.py:\n# myApp/models.py\nfrom django.db import models\n\nclass MyModel(models.Model):\n field = models.CharField(max_length=255)\n\nand that's it. Now to have Django manage your database, in command prompt navigate to our /standalone directory and run: \nmanage.py sql MyApp\n\n"
] |
[
39
] |
[] |
[] |
[
"django",
"orm",
"postgresql",
"python"
] |
stackoverflow_0000937742_django_orm_postgresql_python.txt
|
Q:
Bad Practice to run code in constructor thats likely to fail?
my question is rather a design question.
In Python, if code in your "constructor" fails, the object ends up not being defined. Thus:
someInstance = MyClass("test123") #lets say that constructor throws an exception
someInstance.doSomething() # will fail, name someInstance not defined.
I do have a situation though, where a lot of code copying would occur if I remove the error-prone code from my constructor. Basically my constructor fills a few attributes (via IO, where a lot can go wrong) that can be accessed with various getters. If I remove the code from the constructor, I'd have 10 getters with copy-paste code, something like:
is attribute really set?
do some IO actions to fill the attribute
return the contents of the variable in question
I dislike that, because all my getters would contain a lot of code. Instead of that I perform my IO operations in a central location, the constructor, and fill all my attributes.
What's a proper way of doing this?
A:
There is a difference between a constructor in C++ and an __init__ method
in Python. In C++, the task of a constructor is to construct an object. If it fails,
no destructor is called. Therefore if any resources were acquired before an
exception was thrown, the cleanup should be done before exiting the constructor.
Thus, some prefer two-phase construction with most of the construction done
outside the constructor (ugh).
Python has a much cleaner two-phase construction (construct, then
initialize). However, many people confuse an __init__ method (initializer)
with a constructor. The actual constructor in Python is called __new__.
Unlike in C++, it does not take an instance, but
returns one. The task of __init__ is to initialize the created instance.
If an exception is raised in __init__, the destructor __del__ (if any)
will be called as expected, because the object was already created (even though it was not properly initialized) by the time __init__ was called.
Answering your question:
In Python, if code in your
"constructor" fails, the object ends
up not being defined.
That's not precisely true. If __init__ raises an exception, the object is
created but not initialized properly (e.g., some attributes are not
assigned). But at the time that it's raised, you probably don't have any references to
this object, so the fact that the attributes are not assigned doesn't matter. Only the destructor (if any) needs to check whether the attributes actually exist.
What's a proper way of doing this?
In Python, initialize objects in __init__ and don't worry about exceptions.
In C++, use RAII.
Update [about resource management]:
In garbage collected languages, if you are dealing with resources, especially limited ones such as database connections, it's better not to release them in the destructor.
This is because objects are destroyed in a non-deterministic way, and if you happen
to have a loop of references (which is not always easy to tell), and at least one of the objects in the loop has a destructor defined, they will never be destroyed.
Garbage collected languages have other means of dealing with resources. In Python, it's a with statement.
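A quick way to see the construct-then-initialize behaviour described earlier in this answer (a minimal sketch; the class is made up):
class Demo(object):
    def __init__(self):
        raise ValueError("init failed")
    def __del__(self):
        # runs even though __init__ raised, because __new__
        # had already constructed the instance
        print "__del__ called"

try:
    Demo()
except ValueError:
    print "caught"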
A:
In C++ at least, there is nothing wrong with putting failure-prone code in the constructor - you simply throw an exception if an error occurs. If the code is needed to properly construct the object, there really is no alternative (although you can abstract the code into subfunctions, or better into the constructors of subobjects). Worst practice is to half-construct the object and then expect the user to call other functions to complete the construction somehow.
A:
It is not bad practice per se.
But I think you may be after a something different here. In your example the doSomething() method will not be called when the MyClass constructor fails. Try the following code:
class MyClass:
    def __init__(self, s):
        print s
        raise Exception("Exception")

    def doSomething(self):
        print "doSomething"
try:
someInstance = MyClass("test123")
someInstance.doSomething()
except:
print "except"
It should print:
test123
except
For your software design you could ask the following questions:
What should the scope of the someInstance variable be? Who are its users? What are their requirements?
Where and how should the error be handled for the case that one of your 10 values is not available?
Should all 10 values be cached at construction time or cached one-by-one when they are needed the first time?
Can the I/O code be refactored into a helper method, so that doing something similar 10 times does not result in code repetition?
...
A:
I'm not a Python developer, but in general, it's best to avoid complex/error-prone operations in your constructor. One way around this would be to put a "LoadFromFile" or "Init" method in your class to populate the object from an external source. This load/init method must then be called separately after constructing the object.
A:
One common pattern is two-phase construction, also suggested by Andy White.
First phase: Regular constructor.
Second phase: Operations that can fail.
Integration of the two: Add a factory method to do both phases and make the constructor protected/private to prevent instantation outside the factory method.
Oh, and I'm neither a Python developer.
A:
If the code to initialise the various values is really extensive enough that copying it is undesirable (which it sounds like it is in your case) I would personally opt for putting the required initialisation into a private method, adding a flag to indicate whether the initialisation has taken place, and making all accessors call the initialisation method if it has not initialised yet.
In threaded scenarios you may have to add extra protection in case initialisation is only allowed to occur once for valid semantics (which may or may not be the case since you are dealing with a file).
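In Python that flag-plus-accessor pattern collapses nicely into a property (a sketch; _load() stands in for the error-prone IO and is hypothetical):
class Record(object):
    def __init__(self, path):
        self._path = path
        self._data = None   # the flag: None means "not initialised yet"

    @property
    def data(self):
        if self._data is None:
            self._data = self._load()   # IO happens on first access only
        return self._data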
A:
Again, I've got little experience with Python, however in C# it's better to try and avoid having a constructor that throws an exception. An example of why that springs to mind is if you want to place your constructor at a point where it's not possible to surround it with a try {} catch {} block, for example initialisation of a field in a class:
class MyClass
{
    MySecondClass mySecondClass = new MySecondClass();
// Rest of class
}
If the constructor of MySecondClass throws an exception that you wish to handle inside MyClass then you need to refactor the above - it's certainly not the end of the world, but a nice-to-have.
In this case my approach would probably be to move the failure-prone initialisation logic into an initialisation method, and have the getters call that initialisation method before returning any values.
As an optimisation you should have the getter (or the initialisation method) set some sort of "IsInitialised" boolean to true, to indicate that the (potentially costly) initialisation does not need to be done again.
In pseudo-code (C# because I'll just mess up the syntax of Python):
class MyClass
{
private bool IsInitialised = false;
private string myString;
public void Init()
{
// Put initialisation code here
this.IsInitialised = true;
}
public string MyString
{
get
{
if (!this.IsInitialised)
{
this.Init();
}
return myString;
}
}
}
This is of course not thread-safe, but I don't think multithreading is used that commonly in python so this is probably a non-issue for you.
A:
seems Neil had a good point: my friend just pointed me to this:
http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
which is basically what Neil said...
|
Bad Practice to run code in constructor thats likely to fail?
|
My question is rather a design question.
In Python, if code in your "constructor" fails, the object ends up not being defined. Thus:
someInstance = MyClass("test123") #lets say that constructor throws an exception
someInstance.doSomething() # will fail, name someInstance not defined.
I do have a situation though, where a lot of code copying would occur if I remove the error-prone code from my constructor. Basically my constructor fills a few attributes (via IO, where a lot can go wrong) that can be accessed with various getters. If I remove the code from the constructor, I'd have 10 getters with copy-paste code, something like:
is attribute really set?
do some IO actions to fill the attribute
return the contents of the variable in question
I dislike that, because all my getters would contain a lot of code. Instead of that I perform my IO operations in a central location, the constructor, and fill all my attributes.
What's a proper way of doing this?
|
[
"There is a difference between a constructor in C++ and an __init__ method\nin Python. In C++, the task of a constructor is to construct an object. If it fails,\nno destructor is called. Therefore if any resources were acquired before an \nexception was thrown, the cleanup should be done before exiting the constructor.\nThus, some prefer two-phase construction with most of the construction done\noutside the constructor (ugh).\nPython has a much cleaner two-phase construction (construct, then\ninitialize). However, many people confuse an __init__ method (initializer)\nwith a constructor. The actual constructor in Python is called __new__.\nUnlike in C++, it does not take an instance, but\nreturns one. The task of __init__ is to initialize the created instance.\nIf an exception is raised in __init__, the destructor __del__ (if any)\nwill be called as expected, because the object was already created (even though it was not properly initialized) by the time __init__ was called.\nAnswering your question:\n\nIn Python, if code in your\n \"constructor\" fails, the object ends\n up not being defined.\n\nThat's not precisely true. If __init__ raises an exception, the object is\ncreated but not initialized properly (e.g., some attributes are not\nassigned). But at the time that it's raised, you probably don't have any references to\nthis object, so the fact that the attributes are not assigned doesn't matter. Only the destructor (if any) needs to check whether the attributes actually exist.\n\nWhats a proper way of doing this?\n\nIn Python, initialize objects in __init__ and don't worry about exceptions.\nIn C++, use RAII.\n\nUpdate [about resource management]:\nIn garbage collected languages, if you are dealing with resources, especially limited ones such as database connections, it's better not to release them in the destructor.\nThis is because objects are destroyed in a non-deterministic way, and if you happen\nto have a loop of references (which is not always easy to tell), and at least one of the objects in the loop has a destructor defined, they will never be destroyed.\nGarbage collected languages have other means of dealing with resources. In Python, it's a with statement.\n",
"In C++ at least, there is nothing wrong with putting failure-prone code in the constructor - you simply throw an exception if an error occurs. If the code is needed to properly construct the object, there reallyb is no alternative (although you can abstract the code into subfunctions, or better into the constructors of subobjects). Worst practice is to half-construct the object and then expect the user to call other functions to complete the construction somehow.\n",
"It is not bad practice per se.\nBut I think you may be after a something different here. In your example the doSomething() method will not be called when the MyClass constructor fails. Try the following code:\nclass MyClass:\ndef __init__(self, s):\n print s\n raise Exception(\"Exception\")\n\ndef doSomething(self):\n print \"doSomething\"\n\ntry:\n someInstance = MyClass(\"test123\")\n someInstance.doSomething()\nexcept:\n print \"except\"\n\nIt should print:\ntest123\nexcept\n\nFor your software design you could ask the following questions:\n\nWhat should the scope of the someInstance variable be? Who are its users? What are their requirements?\nWhere and how should the error be handled for the case that one of your 10 values is not available?\nShould all 10 values be cached at construction time or cached one-by-one when they are needed the first time?\nCan the I/O code be refactored into a helper method, so that doing something similiar 10 times does not result in code repetition?\n...\n\n",
"I'm not a Python developer, but in general, it's best to avoid complex/error-prone operations in your constructor. One way around this would be to put a \"LoadFromFile\" or \"Init\" method in your class to populate the object from an external source. This load/init method must then be called separately after constructing the object.\n",
"One common pattern is two-phase construction, also suggested by Andy White.\nFirst phase: Regular constructor.\nSecond phase: Operations that can fail.\nIntegration of the two: Add a factory method to do both phases and make the constructor protected/private to prevent instantation outside the factory method.\nOh, and I'm neither a Python developer.\n",
"If the code to initialise the various values is really extensive enough that copying it is undesirable (which it sounds like it is in your case) I would personally opt for putting the required initialisation into a private method, adding a flag to indicate whether the initialisation has taken place, and making all accessors call the initialisation method if it has not initialised yet.\nIn threaded scenarios you may have to add extra protection in case initialisation is only allowed to occur once for valid semantics (which may or may not be the case since you are dealing with a file).\n",
"Again, I've got little experience with Python, however in C# its better to try and avoid having a constructor that throws an exception. An example of why that springs to mind is if you want to place your constructor at a point where its not possible to surround it with a try {} catch {} block, for example initialisation of a field in a class:\nclass MyClass\n{\n MySecondClass = new MySecondClass();\n // Rest of class\n}\n\nIf the constructor of MySecondClass throws an exception that you wish to handle inside MyClass then you need to refactor the above - its certainly not the end of the world, but a nice-to-have.\nIn this case my approach would probably be to move the failure-prone initialisation logic into an initialisation method, and have the getters call that initialisation method before returning any values.\nAs an optimisation you should have the getter (or the initialisation method) set some sort of \"IsInitialised\" boolean to true, to indicate that the (potentially costly) initialisation does not need to be done again.\nIn pseudo-code (C# because I'll just mess up the syntax of Python):\nclass MyClass\n{\n private bool IsInitialised = false;\n\n private string myString;\n\n public void Init()\n {\n // Put initialisation code here\n this.IsInitialised = true;\n }\n\n public string MyString\n {\n get\n {\n if (!this.IsInitialised)\n {\n this.Init();\n }\n\n return myString;\n }\n }\n}\n\nThis is of course not thread-safe, but I don't think multithreading is used that commonly in python so this is probably a non-issue for you.\n",
"seems Neil had a good point: my friend just pointed me to this:\nhttp://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization\nwhich is basically what Neil said...\n"
] |
[
37,
20,
4,
3,
3,
0,
0,
0
] |
[] |
[] |
[
"constructor",
"exception_handling",
"oop",
"python"
] |
stackoverflow_0000938426_constructor_exception_handling_oop_python.txt
|
Q:
Preprocessing route parameters in Python Routes
I'm using Routes for doing all the URL mapping job. Here's a typical route in my application:
map.routes('route', '/show/{title:[^/]+}', controller='generator', filter_=postprocess_title)
Quite often I have to strip some characters (like whitespace and underscore) from the {title} parameter. Currently there's one call per method in the controller to a function that does this conversion. It's not terribly convenient and I'd like Routes to do this job. Is it possible?
A:
I am not familiar with Routes, and therefore I do not know if what you're after is possible with Routes.
But perhaps you could decorate your controller methods with a decorator that strips characters from parameters as needed?
Not sure if this would be more convenient. But to me, using a decorator has a different 'feel' than doing the same thing inline inside the controller method.
For instance:
@remove_spaces_from('title')
def my_controller(...):
...
If you are not familiar with decorators, a google search for "python decorators" will get you started. A key point to remember: When arguments are needed for a decorator, you need two levels of wrapping in the decorator.
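A minimal sketch of such a decorator (the stripping rule here, dropping spaces and underscores, is an assumption; adapt it to your own cleanup):
def remove_spaces_from(param_name):        # level 1: takes the decorator argument
    def decorator(fn):                     # level 2: takes the function to wrap
        def wrapper(*args, **kwargs):
            if param_name in kwargs:
                value = kwargs[param_name]
                kwargs[param_name] = value.replace(' ', '').replace('_', '')
            return fn(*args, **kwargs)
        return wrapper
    return decorator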
|
Preprocessing route parameters in Python Routes
|
I'm using Routes for doing all the URL mapping job. Here's a typical route in my application:
map.routes('route', '/show/{title:[^/]+}', controller='generator', filter_=postprocess_title)
Quite often I have to strip some characters (like whitespace and underscore) from the {title} parameter. Currently there's one call per method in the controller to a function that does this conversion. It's not terribly convenient and I'd like Routes to do this job. Is it possible?
|
[
"I am not familiar with Routes, and therefore I do not know if what you're after is possible with Routes.\nBut perhaps you could decorate your controller methods with a decorator that strips characters from parameters as needed?\nNot sure if this would be more convenient. But to me, using a decorator has a different 'feel' than doing the same thing inline inside the controller method.\nFor instance:\n\n@remove_spaces_from('title')\ndef my_controller(...):\n ...\n\nIf you are not familiar with decorators, a google search for \"python decorators\" will get you started. A key point to remember: When arguments are needed for a decorator, you need two levels of wrapping in the decorator.\n"
] |
[
0
] |
[] |
[] |
[
"parameters",
"python",
"routes"
] |
stackoverflow_0000938293_parameters_python_routes.txt
|
Q:
How to match a string of a certain length with a regex
For a project of mine, I'm trying to implement a small part of the BitTorrent protocol, which can be found here. Specifically, I want to use the "Bencoding" part of it, which is a way to safely encode data for transfer over a socket. The format is as follows:
8:a string => "a string"
i1234e => 1234
l1:a1:be => ['a', 'b']
d1:a1:b3:one3:twoe => {'a':'b', 'one':'two'}
The encoding part was easy enough, but decoding has become quite a hassle. For example, if I have a list of strings, I have no way to separate them into individual strings. I've tried several different solutions, including PyParsing and a custom token parser. I'm currently attempting to use regexes, and it seems to be going fairly well, but I'm still hung up on the string problem. My current regex is:
(?P<length>\d+):(?P<contents>.{\1})
However, I can't seem to use the first group as the length of the second group. Is there any good way to do this? Or am I approaching this all wrong, and the answer is sitting right in front of me?
A:
Any parser you use for this is going to need to be stateful (i.e. remember stuff), and regexes are, by and large, not stateful. They're the wrong tool for this job.
If those are the only data types you have to worry about, I think I'd just write custom parsers for each data type, passing control to the appropriate parser after reading the first character.
I'd actually implement one now, but it's late.
Alright I decided to write an implementation:
from StringIO import StringIO
import string
inputs = ["10:a stringly",
"i1234e" ,
"l1:a1:be",
"d1:a1:b3:one3:twoe"]
# Constants
DICT_TYPE = 'd'
LIST_TYPE = 'l'
INT_TYPE = 'i'
TOKEN_EOF = ''
TOKEN_END = 'e'
COLON = ':'
class BadTypeIndicatorException(Exception):pass
def read_int(stream):
s = ""
while True:
ch = stream.read(1)
if ch not in [TOKEN_EOF, TOKEN_END, COLON]:
s += ch
else:
break
return s
def tokenize(stream):
s = ""
while True:
ch = stream.read(1)
if ch == TOKEN_END or ch == TOKEN_EOF:
return
if ch == COLON:
length = int(s)
yield stream.read(length)
s = ""
else:
s += ch
def parse(stream):
TYPE = stream.read(1)
if TYPE in string.digits:
length = int( TYPE + read_int(stream) )
return stream.read(length)
    elif TYPE == INT_TYPE:
        return int( read_int(stream) )
    elif TYPE == LIST_TYPE:
        return list(tokenize(stream))
    elif TYPE == DICT_TYPE:
tokens = list(tokenize(stream))
return dict(zip(tokens[0::2], tokens[1::2]))
else:
raise BadTypeIndicatorException
for input in inputs:
stream = StringIO(input)
print parse(stream)
A:
You can do it if you parse the string twice. Apply the first regex to get the length. Concatenate the length in your second regex to form a valid expression.
Not sure how that can be done in python, but a sample in C# would be:
string regex = "^[A-Za-z0-9_]{1," + length + "}$"
To match 1 to length no of chars which can be alpanumeric or _ where length is determined from a previous regex that retrieves only the length.
Hope this helps :)
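The same two-step idea in Python, for reference (a sketch; data holds one bencoded string):
import re

data = "8:a string"
m = re.match(r"(?P<length>\d+):", data)
length = int(m.group("length"))
contents = data[m.end():m.end() + length]   # -> "a string"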
A:
You'll want to do this in two steps. Regular expressions are actually a little overkill for such simple parsing problems as this. Here's how I'd do it:
def read_string(stream):
pos = stream.index(':')
length = int(stream[0:pos])
string = stream[pos+1:pos+1+length]
return string, stream[pos+1+length:]
It's a functional-style way of parsing, it returns the value parsed and the rest of the stream.
For lists, maybe:
def read_list(stream):
stream = stream[1:]
result = []
while stream[0] != 'e':
obj, stream = read_object(stream)
result.append(obj)
stream = stream[1:]
return result
And then you'd define a read_object that checks the first character of the stream and dispatches appropriately.
A:
You are using the wrong tool for the job... This requires some sort of state keeping, and generally speaking, regular expressions are stateless.
An example implementation of bdecoding (and bencoding) in PERL that I did can be found here.
An explanation of how that function works (since I never did get to comment it [oops]):
Basically what you need to do is set up a recursive function. This function takes a string reference (so it can be modified) and returns "something" (the nature of this means it could be an array, a hashtable, an int, or a string).
The function itself just checks the first character in the string and decides what to do based of that:
If it is an i, then parse out all the text between the i and the first e, and try to parse it as an int according to the rules of what is allowed.
If it is a digit, then read all the digits up to :, then read that many characters off the string.
Lists and dictionaries are where things start to get interesting... if there is an l or d as the first character, then you need to strip off the l/d, then pass the current string back into the function, so that it can start parsing elements in the list or dictionary. Then just store the returned values in the appropriate places in an appropriate structure till you hit an e, and return the structure you're left with.
Remember, the function as I implemented it was DESTRUCTIVE. The string passed in is empty when the function returns due to it being passed by reference, or more accurately, it will be devoid of anything it parsed and returned (which is why it can be used recursively: anything it doesn't process is left untouched). In most cases of the initial call though, this should process everything unless you've been doing something odd, so the above holds.
A:
Pseudo-code, without syntax checks:
define read-integer (stream):
let number 0, sign 1:
if string-equal ('-', (c <- read-char (stream))):
sign <- -1
else:
number <- parse-integer (c)
while number? (c <- read-char (stream)):
number <- (number * 10) + parse-integer (c)
return sign * number
define bdecode-string (stream):
let count read-integer (stream):
return read-n-chars (stream, count)
define bdecode-integer (stream):
ignore read-char (stream)
return read-integer (stream)
define bdecode-list (stream):
ignore read-char (stream)
let list []:
while not string-equal ('e', peek-char (stream)):
append (list, bdecode (stream))
return list
define bdecode-dictionary (stream):
let list bdecode-list stream:
return dictionarify (list)
define bdecode (stream):
case peek-char (stream):
number? => bdecode-string (stream)
'i' => bdecode-integer (stream)
'l' => bdecode-list (stream)
'd' => bdecode-dictionary (stream)
|
How to match a string of a certain length with a regex
|
For a project of mine, I'm trying to implement a small part of the BitTorrent protocol, which can be found here. Specifically, I want to use the "Bencoding" part of it, which is a way to safely encode data for transfer over a socket. The format is as follows:
8:a string => "a string"
i1234e => 1234
l1:a1:be => ['a', 'b']
d1:a1:b3:one3:twoe => {'a':'b', 'one':'two'}
The encoding part was easy enough, but decoding has become quite a hassle. For example, if I have a list of strings, I have no way to separate them into individual strings. I've tried several different solutions, including PyParsing and a custom token parser. I'm currently attempting to use regexes, and it seems to be going fairly well, but I'm still hung up on the string problem. My current regex is:
(?P<length>\d+):(?P<contents>.{\1})
However, I can't seem to use the first group as the length of the second group. Is there any good way to do this? Or am I approaching this all wrong, and the answer is sitting right in front of me?
|
[
"Any parser you use for this is going to need to be stateful (i.e. remember stuff), and regexes are, by and large, not stateful. They're the wrong tool for this job.\nIf those are the only data types you have to worry about, I think I'd just write custom parsers for each data type, passing control to the appropriate parser after reading the first character.\nI'd actually implement one now, but it's late.\nAlright I decided to write an implementation:\nfrom StringIO import StringIO\nimport string\n\ninputs = [\"10:a stringly\",\n \"i1234e\" ,\n \"l1:a1:be\",\n \"d1:a1:b3:one3:twoe\"]\n\n# Constants\nDICT_TYPE = 'd'\nLIST_TYPE = 'l'\nINT_TYPE = 'i'\nTOKEN_EOF = ''\nTOKEN_END = 'e'\nCOLON = ':'\n\n\nclass BadTypeIndicatorException(Exception):pass\n\n\ndef read_int(stream):\n\n s = \"\"\n\n while True:\n ch = stream.read(1)\n if ch not in [TOKEN_EOF, TOKEN_END, COLON]:\n s += ch\n else:\n break\n\n return s\n\n\ndef tokenize(stream):\n\n s = \"\"\n\n while True:\n\n ch = stream.read(1)\n\n if ch == TOKEN_END or ch == TOKEN_EOF:\n return \n\n if ch == COLON:\n length = int(s)\n yield stream.read(length)\n s = \"\"\n\n else:\n s += ch\n\n\ndef parse(stream):\n\n TYPE = stream.read(1)\n\n if TYPE in string.digits:\n length = int( TYPE + read_int(stream) )\n return stream.read(length)\n\n elif TYPE is INT_TYPE: \n return int( read_int(stream) )\n\n elif TYPE is LIST_TYPE: \n return list(tokenize(stream))\n\n elif TYPE is DICT_TYPE:\n tokens = list(tokenize(stream))\n return dict(zip(tokens[0::2], tokens[1::2]))\n\n else: \n raise BadTypeIndicatorException\n\n\n\nfor input in inputs:\n stream = StringIO(input)\n print parse(stream)\n\n",
"You can do it if you parse the string twice. Apply the first regex to get the length. Concatenate the length in your second regex to form a valid expression.\nNot sure how that can be done in python, but a sample in C# would be:\nstring regex = \"^[A-Za-z0-9_]{1,\" + length + \"}$\"\n\nTo match 1 to length no of chars which can be alpanumeric or _ where length is determined from a previous regex that retrieves only the length.\nHope this helps :)\n",
"You'll want to do this in two steps. Regular expressions are actually a little overkill for such simple parsing problems as this. Here's how I'd do it:\ndef read_string(stream):\n pos = stream.index(':')\n length = int(stream[0:pos])\n string = stream[pos+1:pos+1+length]\n return string, stream[pos+1+length:]\n\nIt's a functional-style way of parsing, it returns the value parsed and the rest of the stream.\nFor lists, maybe:\ndef read_list(stream):\n stream = stream[1:]\n result = []\n while stream[0] != 'e':\n obj, stream = read_object(stream)\n result.append(obj)\n stream = stream[1:]\n return result\n\nAnd then you'd define a read_object that checks the first character of the stream and dispatches appropriately.\n",
"You are using the wrong tool for the job... This requires some sort of state keeping, and generally speaking, regular expressions are stateless.\n\nAn example implementation of bdecoding (and bencoding) in PERL that I did can be found here.\nAn explanation of how that function works (since I never did get to comment it [oops]):\nBasically what you need to do is setup a recursive function. This function takes a string reference (so it can be modified) and returns \"something\" (the nature of this means it could be an array, a hashtable, an int, or a string).\nThe function itself just checks the first character in the string and decides what to do based of that:\n\nIf it is an i, then parse out all the text between the i and the first e, and try to parse it as an int according to the rules of what is allowed.\nIf it is a digit, then read all the digits up to :, then read that many characters off the string.\n\nLists and dictionaries are where things start to get interesting... if there is an l or d as the first character, then you need to strip off the l/d, then pass the current string back into the function, so that it can start parsing elements in the list or dictionary. Then just store the returned values in the appropriate places in an appropriate structure till you hit an e, and return the structure you're left with.\nRemember, the function as I implemented it was DESTRUCTIVE. The string passed in is empty when the function returns due to it being passed by reference, or more accurately, it will be devoid of anything it parsed and returned (which is why it can be used recursively: anything it doesn't process is left untouched). In most cases of the initial call though, this should process everything unless you've been doing something odd, so the above holds.\n",
"Pseudo-code, without syntax checks:\ndefine read-integer (stream):\n let number 0, sign 1:\n if string-equal ('-', (c <- read-char (stream))):\n sign <- -1\n else:\n number <- parse-integer (c)\n while number? (c <- read-char (stream)):\n number <- (number * 10) + parse-integer (c)\n return sign * number\n\ndefine bdecode-string (stream):\n let count read-integer (stream):\n return read-n-chars (stream, count)\n\ndefine bdecode-integer (stream):\n ignore read-char (stream)\n return read-integer (stream)\n\ndefine bdecode-list (stream):\n ignore read-char (stream)\n let list []:\n while not string-equal ('e', peek-char (stream)):\n append (list, bdecode (stream))\n return list\n\ndefine bdecode-dictionary (stream):\n let list bdecode-list stream:\n return dictionarify (list)\n\ndefine bdecode (stream):\n case peek-char (stream):\n number? => bdecode-string (stream)\n 'i' => bdecode-integer (stream)\n 'l' => bdecode-list (stream)\n 'd' => bdecode-dictionary (stream)\n\n"
] |
[
9,
2,
2,
1,
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000938065_python_regex.txt
|
Q:
Printed representation of list
I want to format a list into a string in this way:
[1,2,3] => '1 2 3'. How to do this?
Is there a customizable formatter in Python like Common Lisp's format?
A:
' '.join(str(i) for i in your_list)
A:
' '.join(str(i) for i in your_list)
First, convert each element into a string, then join them into a single string.
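An equivalent spelling uses map to do the conversion:
' '.join(map(str, your_list))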
|
Printed representation of list
|
I want to format a list into a string in this way:
[1,2,3] => '1 2 3'. How to do this?
Is there a customizable formatter in Python like Common Lisp's format?
|
[
"' '.join(str(i) for i in your_list)\n\n",
"' '.join(str(i) for i in your_list)\n\nFirst, convert any element into a string, then join them into a unique string.\n"
] |
[
11,
7
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000939243_python.txt
|
Q:
Purpose of @ symbols in Python?
In several examples I've noticed things such as this:
# Comments explaining code i think
@innerclass
or:
def foo():
"""
Basic Doc String
"""
@classmethod
Googling doesn't get me very far for just a general definition of what this is. Also, I can't find anything really in the Python documentation.
What do these do?
A:
They are called decorators. They are functions applied to other functions. Here is a copy of my answer to a similar question.
Python decorators add extra functionality to another function.
An italics decorator could be like
def makeitalic(fn):
def newFunc():
return "<i>" + fn() + "</i>"
return newFunc
Note that a function is defined inside a function. What it basically does is replace a function with the newly defined one. For example, I have this class
class foo:
def bar(self):
print "hi"
def foobar(self):
print "hi again"
Now say, I want both functions to print "---" after and before they are done. I could add a print "---" before and after each print statement. But because I don't like repeating myself, I will make a decorator
def addDashes(fn): # notice it takes a function as an argument
def newFunction(self): # define a new function
print "---"
fn(self) # call the original function
print "---"
return newFunction
# Return the newly defined function - it will "replace" the original
So now I can change my class to
class foo:
@addDashes
def bar(self):
print "hi"
@addDashes
def foobar(self):
print "hi again"
For more on decorators, check http://www.ibm.com/developerworks/linux/library/l-cpdecor.html
A:
They're decorators.
<shameless plug>
I have a blog post on the subject.
</shameless plug>
A:
With
@function
def f():
pass
you simply wrap function around f(). function is called a decorator.
It is just syntactic sugar for the following:
def f():
pass
f=function(f)
A:
it is a decorator syntax.
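The @classmethod seen in the question is one of the decorators built into Python; a small sketch of its effect:
class Foo(object):
    count = 0

    @classmethod
    def how_many(cls):
        # receives the class itself instead of an instance
        return cls.count

print Foo.how_many()   # callable on the class, no instance needed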
|
Purpose of @ symbols in Python?
|
In several examples I've noticed things such as this:
# Comments explaining code i think
@innerclass
or:
def foo():
"""
Basic Doc String
"""
@classmethod
Googling doesn't get me very far for just a general definition of what this is. Also, I can't find anything really in the Python documentation.
What do these do?
|
[
"They are called decorators. They are functions applied to other functions. Here is a copy of my answer to a similar question.\nPython decorators add extra functionality to another function.\nAn italics decorator could be like\ndef makeitalic(fn):\n def newFunc():\n return \"<i>\" + fn() + \"</i>\"\n return newFunc\n\nNote that a function is defined inside a function. What it basically does is replace a function with the newly defined one. For example, I have this class\nclass foo:\n def bar(self):\n print \"hi\"\n def foobar(self):\n print \"hi again\"\n\nNow say, I want both functions to print \"---\" after and before they are done. I could add a print \"---\" before and after each print statement. But because I don't like repeating myself, I will make a decorator\ndef addDashes(fn): # notice it takes a function as an argument\n def newFunction(self): # define a new function\n print \"---\"\n fn(self) # call the original function\n print \"---\"\n return newFunction\n # Return the newly defined function - it will \"replace\" the original\n\nSo now I can change my class to \nclass foo:\n @addDashes\n def bar(self):\n print \"hi\"\n\n @addDashes\n def foobar(self):\n print \"hi again\"\n\nFor more on decorators, check http://www.ibm.com/developerworks/linux/library/l-cpdecor.html\n",
"They're decorators.\n<shameless plug>\nI have a blog post on the subject.\n</shameless plug>\n",
"With \n@function\ndef f():\n pass\n\nyou simply wrap function around f(). function is called a decorator. \nIt is just syntactic sugar for the following:\ndef f():\n pass\nf=function(f)\n\n",
"it is a decorator syntax.\n"
] |
[
26,
12,
5,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000939426_python.txt
|
Q:
Using mx:RemoteObject with web2py's @service.amfrpc decorator
I am using web2py (v1.63) and Flex 3. web2py v1.61 introduced the @service decorators, which allow you to tag a controller function with @service.amfrpc. You can then call that function remotely using http://..../app/default/call/amfrpc/[function]. See http://www.web2py.com/examples/default/tools#services. Does anybody have an example of how you would set up a Flex 3 to call a function like this? Here is what I have tried so far:
<mx:RemoteObject id="myRemote" destination="amfrpc" source="amfrpc"
endpoint="http://{mysite}/{myapp}/default/call/amfrpc/">
<mx:method name="getContacts"
result="show_results(event)"
fault="on_fault(event)" />
</mx:RemoteObject>
In my scenario, what should be the value of the destination and source attributes? I have read a couple of articles on non-web2py implementations, such as http://corlan.org/2008/10/10/flex-and-php-remoting-with-amfphp/, but they use a .../gateway.php file instead of having a URI that maps directly to the function.
Alternatively, I have been able to use flash.net.NetConnection to successfully call my remote function, but most of the documentation I have found considers this to be the old, pre-Flex 3 way of doing AMF. See http://pyamf.org/wiki/HelloWorld/Flex. Here is the NetConnection code:
gateway = new NetConnection();
gateway.connect("http://{mysite}/{myapp}/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
-Rob
A:
I have not found a way to use a RemoteObject with the @service.amfrpc decorator. However, I can use the older ActionScript code using a NetConnection (similar to what I posted originally) and pair that with a @service.amfrpc function on the web2py side. This seems to work fine. The one thing that you would want to change in the NetConnection code I shared originally, is adding an event listener for connection status. You can add more listeners if you feel the need, but I found that NetStatusEvent was a must. This status will be fired if the server is not responding. You connection set up would look like:
gateway = new NetConnection();
gateway.addEventListener(NetStatusEvent.NET_STATUS, gateway_status);
gateway.connect("http://127.0.0.1:8000/robs_amf/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
-Rob
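For completeness, the web2py side that this NetConnection calls would look roughly like the following (a sketch based on web2py's documented service pattern; the function body is made up):
# in controllers/default.py
from gluon.tools import Service
service = Service(globals())

@service.amfrpc
def getContacts():
    return ['alice', 'bob']

def call():
    session.forget()
    return service()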
|
Using mx:RemoteObject with web2py's @service.amfrpc decorator
|
I am using web2py (v1.63) and Flex 3. web2py v1.61 introduced the @service decorators, which allow you to tag a controller function with @service.amfrpc. You can then call that function remotely using http://..../app/default/call/amfrpc/[function]. See http://www.web2py.com/examples/default/tools#services. Does anybody have an example of how you would set up a Flex 3 to call a function like this? Here is what I have tried so far:
<mx:RemoteObject id="myRemote" destination="amfrpc" source="amfrpc"
endpoint="http://{mysite}/{myapp}/default/call/amfrpc/">
<mx:method name="getContacts"
result="show_results(event)"
fault="on_fault(event)" />
</mx:RemoteObject>
In my scenario, what should be the value of the destination and source attributes? I have read a couple of articles on non-web2py implementations, such as http://corlan.org/2008/10/10/flex-and-php-remoting-with-amfphp/, but they use a .../gateway.php file instead of having a URI that maps directly to the function.
Alternatively, I have been able to use flash.net.NetConnection to successfully call my remote function, but most of the documentation I have found considers this to be the old, pre-Flex 3 way of doing AMF. See http://pyamf.org/wiki/HelloWorld/Flex. Here is the NetConnection code:
gateway = new NetConnection();
gateway.connect("http://{mysite}/{myapp}/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
-Rob
|
[
"I have not found a way to use a RemoteObject with the @service.amfrpc decorator. However, I can use the older ActionScript code using a NetConnection (similar to what I posted originally) and pair that with a @service.amfrpc function on the web2py side. This seems to work fine. The one thing that you would want to change in the NetConnection code I shared originally, is adding an event listener for connection status. You can add more listeners if you feel the need, but I found that NetStatusEvent was a must. This status will be fired if the server is not responding. You connection set up would look like:\ngateway = new NetConnection();\ngateway.addEventListener(NetStatusEvent.NET_STATUS, gateway_status);\ngateway.connect(\"http://127.0.0.1:8000/robs_amf/default/call/amfrpc/\");\nresp = new Responder(show_results, on_fault);\ngateway.call(\"getContacts\", resp);\n\n-Rob\n"
] |
[
1
] |
[] |
[] |
[
"apache_flex",
"flex3",
"pyamf",
"python",
"web2py"
] |
stackoverflow_0000927028_apache_flex_flex3_pyamf_python_web2py.txt
|
Q:
How Python calculate number?
Possible Duplicate:
python - decimal place issues with floats
In [4]: 52+121.2
Out[4]: 173.19999999999999
A:
Short answer: Python uses binary arithmetic for floating-point numbers, not decimal arithmetic. Decimal fractions are not exactly representable in binary.
Long answer: What Every Computer Scientist Should Know About Floating-Point Arithmetic
A:
If you're familiar with the idea that the number "thirteen point two" is written in base ten as "13.2" because it's "10^1 * 1 + 10^0 * 3 + 10^-1 * 2" then try to do the same thing with a base of 2 instead of 10 for the number 173.2.
Here's the whole part:
(1 * 2^7) + (0 * 2^6) + (1 * 2^5) + (0 * 2^4) + (1 * 2^3) + (1 * 2^2) + (0 * 2^1) + (0 * 2^0)
Now here's the start fractional part:
(0 * 2^-1) + (0 * 2^-2) + (1 * 2^-3)
That's .125, which isn't yet 2/10ths so we need more additions that are of the form (1 * 2^-n), we can carry this out a bit further with (1 * 2^-4) + (1 * 2^-7), which gets us a bit closer ... to 0.1953125, but no matter how long we do this, we'll never get to ".2" because ".2" is not representable as a addition of sums of numbers of the form (1 * 2^-n).
Also see .9999… = 1.0 (http://en.wikipedia.org/wiki/0.999...)
A:
Try this:
>>> from decimal import Decimal
>>> Decimal("52") + Decimal("121.2")
Decimal("173.2")
A:
The other answers, pointing to good floating-point resources, are where to start. If you understand floating point roundoff errors, however, and just want your numbers to look prettier and not include a dozen extra digits for that extra last bit of precision, try using str() to print the number rather than repr():
>>> 52+121.2
173.19999999999999
>>> str(52+121.2)
'173.2'
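Explicit string formatting is another option if you also want to control the precision:
>>> print "%.1f" % (52 + 121.2)
173.2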
|
How Python calculate number?
|
Possible Duplicate:
python - decimal place issues with floats
In [4]: 52+121.2
Out[4]: 173.19999999999999
|
[
"Short answer: Python uses binary arithmetic for floating-point numbers, not decimal arithmetic. Decimal fractions are not exactly representable in binary.\nLong answer: What Every Computer Scientist Should Know About Floating-Point Arithmetic\n",
"If you're familiar with the idea that the number \"thirteen point two\" is written in base ten as \"13.2\" because it's \"10^1 * 1 + 10^0 * 3 + 10^-1 * 2\" then try to do the same thing with a base of 2 instead of 10 for the number 173.2.\nHere's the whole part:\n(1 * 2^7) + (0 * 2^6) + (1 * 2^5) + (0 * 2^4) + (1 * 2^3) + (1 * 2^2) + (0 * 2^1) + (0 * 2^0)\nNow here's the start fractional part:\n(0 * 2^-1) + (0 * 2^-2) + (1 * 2^-3)\nThat's .125, which isn't yet 2/10ths so we need more additions that are of the form (1 * 2^-n), we can carry this out a bit further with (1 * 2^-4) + (1 * 2^-7), which gets us a bit closer ... to 0.1953125, but no matter how long we do this, we'll never get to \".2\" because \".2\" is not representable as a addition of sums of numbers of the form (1 * 2^-n).\nAlso see .9999… = 1.0 (http://en.wikipedia.org/wiki/0.999...)\n",
"Try this:\n>>> from decimal import Decimal\n>>> Decimal(\"52\") + Decimal(\"121.2\")\nDecimal(\"173.2\")\n\n",
"The other answers, pointing to good floating-point resources, are where to start. If you understand floating point roundoff errors, however, and just want your numbers to look prettier and not include a dozen extra digits for that extra last bit of precision, try using str() to print the number rather than repr():\n>>> 52+121.2 \n173.19999999999999\n>>> str(52+121.2)\n'173.2'\n\n"
] |
[
22,
8,
7,
3
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000937692_python.txt
|
Q:
Why no pure Python SSH1 (version 1) client implementations?
There seem to be a few good pure Python SSH2 client implementations out there, but I haven't been able to find one for SSH1. Is there some specific reason for this other than lack of interest in such a project? I am fully aware of the many SSH1 vulnerabilities, but a pure Python SSH1 client implementation would still be very useful to those of us who want to write SSH clients to manage older embedded devices which only support SSH1 (Cisco PIX for example). I also know I'm not the only person looking for this.
The reason I'm asking is because I'm bored, and I've been thinking about taking a stab at writing this myself. I've just been hesitant to start, since I know there are a lot of people out there who are much smarter than me, and I figured there might be some reason why nobody has done it yet.
A:
SSHv1 was considered deprecated in 2001, so I assume nobody really wanted to put the effort into it. I'm not sure if there's even an RFC for SSH1, so getting the full protocol spec may require reading through old source code.
Since there are known vulnerabilities, it's not much better than telnet, which is almost universally supported on old and/or embedded devices.
A:
Well, the main reason probably was that when people started getting interested in such things in VHLLs such as Python, it didn't make sense to them to implement a standard which they themselves would not find useful.
I am not familiar with the protocol differences, but would it be possible for you to adapt an existing codebase to the older protocol?
|
Why no pure Python SSH1 (version 1) client implementations?
|
There seem to be a few good pure Python SSH2 client implementations out there, but I haven't been able to find one for SSH1. Is there some specific reason for this other than lack of interest in such a project? I am fully aware of the many SSH1 vulnerabilities, but a pure Python SSH1 client implementation would still be very useful to those of us who want to write SSH clients to manage older embedded devices which only support SSH1 (Cisco PIX for example). I also know I'm not the only person looking for this.
The reason I'm asking is because I'm bored, and I've been thinking about taking a stab at writing this myself. I've just been hesitant to start, since I know there are a lot of people out there who are much smarter than me, and I figured there might be some reason why nobody has done it yet.
|
[
"SSHv1 was considered deprecated in 2001, so I assume nobody really wanted to put the effort into it. I'm not sure if there's even an rfc for SSH1, so getting the full protocol spec may require reading through old source code.\nSince there are known vulnerabilities, it's not much better than telnet, which is almost universally supported on old and/or embedded devices.\n",
"Well, the main reason probably was that when people started getting interested in such things in VHLLs such as Python, it didn't make sense to them to implement a standard which they themselves would not find useful.\nI am not familiar with the protocol differences, but would it be possible for you to adapt an existing codebase to the older protocol?\n"
] |
[
3,
1
] |
[] |
[] |
[
"python",
"ssh"
] |
stackoverflow_0000936783_python_ssh.txt
|
Q:
Python Extension Returned Object Etiquette
I am writing a python extension to provide access to Solaris kstat data ( in the same spirit as the shipping perl library Sun::Solaris::Kstat ) and I have a question about conditionally returning a list or a single object. The python use case would look something like:
cpu_stats = cKstats.lookup(module='cpu_stat')
cpu_stat0 = cKstats.lookup('cpu_stat',0,'cpu_stat0')
As it's currently implemented, lookup() returns a list of all kstat objects which match. The first case would result in a list of objects ( as many as there are CPUs ) and the second call specifies a single kstat completely and would return a list containing one kstat.
My question is: is it poor form to return a single object when there is only one match, and a list when there are many?
Thank you for the thoughtful answer! My python-fu is weak but growing stronger due to folks like you.
A:
"My question is it poor form to return a single object when there is only one match, and a list when there are many?"
It's poor form to return inconsistent types.
Return a consistent type: List of kstat.
Most Pythonistas don't like using type(result) to determine if it's a kstat or a list of kstats.
We'd rather check the length of the list in a simple, consistent way.
Also, if the length depends on a piece of system information, perhaps an API method could provide this metadata.
Look at DB-API PEP for advice and ideas on how to handle query-like things.
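For example, a minimal sketch of the consistent style (cKstats is the question's module; process is a stand-in for whatever you do with each kstat):
stats = cKstats.lookup(module='cpu_stat')
for stat in stats:                  # the same code path handles 0, 1 or N matches
    process(stat)

single = cKstats.lookup('cpu_stat', 0, 'cpu_stat0')
assert len(single) == 1             # callers check length, never type(result)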
|
Python Extension Returned Object Etiquette
|
I am writing a python extension to provide access to Solaris kstat data ( in the same spirit as the shipping perl library Sun::Solaris::Kstat ) and I have a question about conditionally returning a list or a single object. The python use case would look something like:
cpu_stats = cKstats.lookup(module='cpu_stat')
cpu_stat0 = cKstats.lookup('cpu_stat',0,'cpu_stat0')
As it's currently implemented, lookup() returns a list of all kstat objects which match. The first case would result in a list of objects ( as many as there are CPUs ) and the second call specifies a single kstat completely and would return a list containing one kstat.
My question is: is it poor form to return a single object when there is only one match, and a list when there are many?
Thank you for the thoughtful answer! My python-fu is weak but growing stronger due to folks like you.
|
[
"\"My question is it poor form to return a single object when there is only one match, and a list when there are many?\"\nIt's poor form to return inconsistent types.\nReturn a consistent type: List of kstat.\nMost Pythonistas don't like using type(result) to determine if it's a kstat or a list of kstats.\nWe'd rather check the length of the list in a simple, consistent way. \nAlso, if the length depends on a piece of system information, perhaps an API method could provide this metadata.\nLook at DB-API PEP for advice and ideas on how to handle query-like things.\n"
] |
[
7
] |
[] |
[] |
[
"python",
"python_c_api"
] |
stackoverflow_0000940563_python_python_c_api.txt
|
Q:
How to pass pointer to an array in Python for a wrapped C++ function
I am new to C++/Python mixed-language programming and do not know much about the Python/C API. I just started using Boost.Python to wrap a C++ library for Python. I am stuck at wrapping a function that takes a pointer to an array as an argument. The following (2nd ctor) is its prototype in C++.
class AAF{
AAF(AAF_TYPE t);
AAF(double v0, const double * t1, const unsigned * t2, unsigned T);
~AAF();
}
Am I doing it right by wrapping it like this in boost::python?
class_<AAF>("AAF", init<AAF_TYPE>())
.def(init<double, const double*, const unsigned*, unsigned>());
Note that it compiled and linked successfully, but I couldn't figure out how to call it in Python. My naive tries like the following failed.
>>> z = AAF(10, [4, 5.5, 10], [1, 1, 2], 3);
Traceback (most recent call last):
File "./test_interval.py", line 40, in <module>
z = AAF(10, [4, 5.5, 10], [1, 1, 2], 3);
Boost.Python.ArgumentError: Python argument types in
AAF.__init__(AAF, int, list, list, int)
did not match C++ signature:
__init__(_object*, AAF_TYPE)
__init__(_object*, double, double const*, unsigned int const*, unsigned int)
>>> t1 = array.array('d', [4, 5.5, 10])
>>> t2 = array.array('I', [1, 1, 2])
>>> z = AAF(10, t1, t2, 3);
Traceback (most recent call last):
File "./test_interval.py", line 40, in <module>
z = AAF(10, t1, t2, 3);
Boost.Python.ArgumentError: Python argument types in
AAF.__init__(AAF, int, array.array, array.array, int)
did not match C++ signature:
__init__(_object*, AAF_TYPE)
__init__(_object*, double, double const*, unsigned int const*, unsigned int)
My second question: do I also need to wrap the destructor? Please specify if this might be necessary in some cases but not always.
A:
The wrapping is right (in principle) but in
AAF(10, [4, 5.5, 10], [1, 1, 2], 3);
(as the interpreter points out) you're passing to your function python's list objects, not pointers.
In short, if your function only needs to work on python's lists you need to change your code to use that interface (instead of using pointers). If you need to keep that interface, you have to write a wrapper function that takes a list from python, does the proper conversion and calls your original c++ function. The same applies to numpy arrays.
Please note that boost::python offers some built-in mechanism to convert python containers to stl compatible containers.
An example wrapping code for your case could be
#include <boost/python.hpp>
using namespace boost::python;

void f(list o) {
    std::size_t n = len(o);
    double* tmp = new double[n];
    for (std::size_t i = 0; i < n; i++) {
        tmp[i] = extract<double>(o[i]);
    }
    // use tmp, e.g. pass it to the original C++ constructor
    delete[] tmp;  // new[] must be paired with delete[]
}
Please give a look at the boost.python tutorial at http://www.boost.org/doc/libs/1_39_0/libs/python/doc/tutorial/doc/html/index.html.
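Assuming the wrapper is then exported with def("f", f) inside a BOOST_PYTHON_MODULE block (the module name my_extension below is hypothetical), the call from Python is simply:
import my_extension
my_extension.f([4.0, 5.5, 10.0])  # a plain Python list; extraction happens inside f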
|
How to pass pointer to an array in Python for a wrapped C++ function
|
I am new to C++/Python mixed-language programming and do not know much about the Python/C API. I just started using Boost.Python to wrap a C++ library for Python. I am stuck at wrapping a function that takes a pointer to an array as an argument. The following (2nd ctor) is its prototype in C++.
class AAF{
AAF(AAF_TYPE t);
AAF(double v0, const double * t1, const unsigned * t2, unsigned T);
~AAF();
}
Am I doing it right by wrapping it like this in boost::python?
class_<AAF>("AAF", init<AAF_TYPE>())
.def(init<double, const double*, const unsigned*, unsigned>());
Note that it compiled and linked successfully, but I couldn't figure out how to call it in Python. My naive tries like the following failed.
>>> z = AAF(10, [4, 5.5, 10], [1, 1, 2], 3);
Traceback (most recent call last):
File "./test_interval.py", line 40, in <module>
z = AAF(10, [4, 5.5, 10], [1, 1, 2], 3);
Boost.Python.ArgumentError: Python argument types in
AAF.__init__(AAF, int, list, list, int)
did not match C++ signature:
__init__(_object*, AAF_TYPE)
__init__(_object*, double, double const*, unsigned int const*, unsigned int)
>>> t1 = array.array('d', [4, 5.5, 10])
>>> t2 = array.array('I', [1, 1, 2])
>>> z = AAF(10, t1, t2, 3);
Traceback (most recent call last):
File "./test_interval.py", line 40, in <module>
z = AAF(10, t1, t2, 3);
Boost.Python.ArgumentError: Python argument types in
AAF.__init__(AAF, int, array.array, array.array, int)
did not match C++ signature:
__init__(_object*, AAF_TYPE)
__init__(_object*, double, double const*, unsigned int const*, unsigned int)
My second question: do I also need to wrap the destructor? Please specify if this might be necessary in some cases but not always.
|
[
"The wrapping is right (in principle) but in\nAAF(10, [4, 5.5, 10], [1, 1, 2], 3);\n\n(as the interpreter points out) you're passing to your function python's list objects, not pointers.\nIn short, if your function needs only to work on python's lists you need to change your code to use that interface (instead of using pointers). If you need to keep that interface, you have to write a wrapper function that takes a list from python, does the proper conversion and calls your orginal c++ function. The same applies to numpy arrays.\nPlease note that boost::python offers some built-in mechanism to convert python containers to stl compatible containers.\nAn example wrapping code for your case could be\nvoid f(list o) {\n std::size_t n = len(o);\n double* tmp = new double[n];\n for (int i = 0; i < n; i++) {\n tmp[i] = extract<double>(o[i]);\n }\n std::cout << std::endl;\n // use tmp\n delete tmp;\n}\n\nPlease give a look at the boost.python tutorial at http://www.boost.org/doc/libs/1_39_0/libs/python/doc/tutorial/doc/html/index.html.\n"
] |
[
4
] |
[] |
[] |
[
"arrays",
"boost_python",
"pointers",
"python",
"word_wrap"
] |
stackoverflow_0000940132_arrays_boost_python_pointers_python_word_wrap.txt
|
Q:
Python 2.6 subprocess.call() appears to be invoking setgid behavior triggering Perl's taint checks. How can I resolve?
I've got some strange behavioral differences between Python's subprocess.call() and os.system() that appear to be related to setgid. The difference is causing Perl's taint checks to be invoked when subprocess.call() is used, which creates problems because I do not have the ability to modify all the Perl scripts that would need untaint code added to them.
Example, "process.py"
#!/usr/bin/python
import os, subprocess
print "Python calling os.system"
os.system('perl subprocess.pl true')
print "Python done calling os.system"
print "Python calling subprocess.call"
subprocess.call(['perl', 'subprocess.pl', 'true'])
print "Python done calling subprocess.call"
"subprocess.pl"
#!/usr/bin/perl
print "perl subprocess\n";
`$ARGV[0]`;
print "perl subprocess done\n";
The output - both runs of subprocess.pl should be the same, but the one run with subprocess.call() gets a taint error:
mybox> process.py
Python calling os.system
perl subprocess
perl subprocess done
Python done calling os.system
Python calling subprocess.call
perl subprocess
Insecure dependency in `` while running setgid at subprocess.pl line 4.
Python done calling subprocess.call
mybox>
While using os.system() works, I would really rather be using subprocess.check_call() as it's more forward-compatible and has nice checking behaviors.
Any suggestions or documentation that might explain why these two are different? Is it possible this is some strange setting in my local unix environment that is invoking these behaviors?
A:
I think your error is with perl, or the way it's interacting with your environment.
Your backtick process is calling setgid for some reason. The only way I can replicate this, is to setgid on /usr/bin/perl (-rwxr-sr-x). [EDIT] Having python setgid does this too!
[EDIT] I forgot that os.system is working for you. I think the only relevant difference here, is that with os.system the environment is not inherited by the subprocess. Look through the environment of each subprocess, and you may find your culprit.
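One way to make that comparison (a sketch): dump each child's environment to a file so you can diff the two.
import os, subprocess

os.system("env | sort > env_system.txt")
subprocess.call("env | sort > env_subprocess.txt", shell=True)
# afterwards: diff env_system.txt env_subprocess.txt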
A:
Doesn't happen for me:
$ python proc.py
Python calling os.system
perl subprocess
perl subprocess done
Python done calling os.system
Python calling subprocess.call
perl subprocess
perl subprocess done
Python done calling subprocess.call
$ python --version
Python 2.5.2
$ perl --version
This is perl, v5.8.8 built for i486-linux-gnu-thread-multi
What are your version numbers?
Under what sort of account are you running?
EDIT:
Sorry missed the title - I don't have python 2.6 anywhere easy to access, so I'll have to leave this problem.
EDIT:
So it looks like we worked out the problem - sgid on the python 2.6 binary.
It would also be interesting to see if subprocess with the shell also avoids the problem.
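A quick sketch that checks both points: whether the interpreter you are running under is sgid, and whether going through the shell behaves differently.
import os, stat, subprocess, sys

mode = os.stat(sys.executable).st_mode
print "sgid bit on python binary:", bool(mode & stat.S_ISGID)

subprocess.call('perl subprocess.pl true', shell=True)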
|
Python 2.6 subprocess.call() appears to be invoking setgid behavior triggering Perl's taint checks. How can I resolve?
|
I've got some strange behavioral differences between Python's subprocess.call() and os.system() that appear to be related to setgid. The difference is causing Perl's taint checks to be invoked when subprocess.call() is used, which creates problems because I do not have the ability to modify all the Perl scripts that would need untaint code added to them.
Example, "process.py"
#!/usr/bin/python
import os, subprocess
print "Python calling os.system"
os.system('perl subprocess.pl true')
print "Python done calling os.system"
print "Python calling subprocess.call"
subprocess.call(['perl', 'subprocess.pl', 'true'])
print "Python done calling subprocess.call"
"subprocess.pl"
#!/usr/bin/perl
print "perl subprocess\n";
`$ARGV[0]`;
print "perl subprocess done\n";
The output - both runs of subprocess.pl should be the same, but the one run with subprocess.call() gets a taint error:
mybox> process.py
Python calling os.system
perl subprocess
perl subprocess done
Python done calling os.system
Python calling subprocess.call
perl subprocess
Insecure dependency in `` while running setgid at subprocess.pl line 4.
Python done calling subprocess.call
mybox>
While using os.system() works, I would really rather be using subprocess.check_call() as it's more forward-compatible and has nice checking behaviors.
Any suggestions or documentation that might explain why these two are different? Is it possible this is some strange setting in my local unix environment that is invoking these behaviors?
|
[
"I think your error is with perl, or the way it's interacting with your environment.\nYour backtick process is calling setgid for some reason. The only way I can replicate this, is to setgid on /usr/bin/perl (-rwxr-sr-x). [EDIT] Having python setgid does this too!\n[EDIT] I forgot that os.system is working for you. I think the only relevant difference here, is that with os.system the environment is not inherited by the subprocess. Look through the environment of each subprocess, and you may find your culprit.\n",
"Doesn't happen for me:\n$ python proc.py\nPython calling os.system\nperl subprocess\nperl subprocess done\nPython done calling os.system\nPython calling subprocess.call\nperl subprocess\nperl subprocess done\nPython done calling subprocess.call\n\n$ python --version\nPython 2.5.2\n\n$ perl --version\nThis is perl, v5.8.8 built for i486-linux-gnu-thread-multi\n\nWhat are your version numbers?\nUnder what sort of account are you running?\nEDIT:\nSorry missed the title - I don't have python 2.6 anywhere easy to access, so I'll have to leave this problem.\nEDIT:\nSo it looks like we worked out the problem - sgid on the python 2.6 binary.\nIt would also be interesting to see if subprocess with the shell also avoids the problem.\n"
] |
[
2,
0
] |
[] |
[] |
[
"perl",
"python",
"subprocess"
] |
stackoverflow_0000940552_perl_python_subprocess.txt
|
Q:
Threading In Python
I'm new to threading and was wondering if it's bad to spawn a lot of threads for various tasks (in a server environment). Do threads take up a lot more memory/cpu compared to more linear programming?
A:
You have to consider multiple things if you want to use multiple threads:
You can only run #processors threads simultaneously. (Obvious)
In Python each thread is a 'kernel thread' which normally takes a non-trivial amount of resources (8 MB of stack by default on Linux)
Python has a global interpreter lock, which means only one Python instruction can be executed at a time, independently of the number of processors. The lock is, however, released while a thread waits on IO.
The conclusion I take from that:
If you are doing IO (Turbogears, Twisted) or are using properly coded extension modules (numpy) use threads.
If you want to execute python code concurrently use processes (easiest with the standard multiprocessing module)
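A minimal sketch of the process-based route with multiprocessing (available from Python 2.6):
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == '__main__':
    pool = Pool()                       # one worker per processor by default
    print pool.map(square, range(10))   # CPU-bound work without GIL contention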
A:
Since you're new to threading, there's something else worth bearing in mind - which I prefer to view as parallel scoping of values.
With traditional linear/sequential programming, for any given object you only have one thread accessing and changing a piece of data. This is generally made safe by lexical scope. Specifically, one function can operate on a variable's value without affecting a global value. If you don't have lexical scope, or have poor lexical scoping, then changing the value of a variable named "foo" in one function affects another called "foo". It's less commonly a problem these days, but still common enough to be worth mentioning.
With threading, you have the same issue, in more subtle ways. Whilst you still have lexical scoping helping you - in that a local value "X" inside one function is independent of another local value called "X" in another, the fact that data structures are mutable is a major source of bugs in threading.
Specifically, if a function is passed a mutable value, then in a threaded environment unless you have taken care, that function cannot guarantee that the value is not being changed by anything else.
This shared state is the source of probably 90-99% of bugs in threaded systems, and can be very hard to debug. As a result, if you're going to write a threaded system you should try to bear in mind the distance that your shared values will travel - ie the scope of parallel access.
In order to limit bugs you have a handful of options which are known to work:
Use no shared state - pass shared data using thread safe queues
Place locks around all shared data, and ensure this is used religiously throughout your code. This can be far harder sometimes than people think. The problem comes from when you "forget" to lock an object - something which is remarkably easy for people to do.
Have a single object - an owner - in charge of shared state. Allow client threads to ask it for copies of values in the shared state, which are accompanied by a token. When they want to update the shared state, they pass back messages to the single object, along with the token they had. The owner can then determine whether an update clash has occurred.
1 is most equivalent to unix pipelines. 3 is logically equivalent to version control, and is normally referred to as software transactional memory.
1 & 3 are modes of concurrency supported in Kamaelia which aims to eliminate bugs caused by class 2. (disclosure, I run the Kamaelia project) 2 isn't supported because it relies on "always getting everything right".
No matter which approach you use to solve your problem though, bearing in mind this issue, and the ways of dealing with it, and planning upfront how you intend to deal with it will save you no end of headaches later on.
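To illustrate option 1 above (no shared state, with data passed over thread-safe queues), here is a minimal sketch:
import threading, Queue   # the module is named "queue" in Python 3

tasks, results = Queue.Queue(), Queue.Queue()

def worker():
    while True:
        item = tasks.get()
        if item is None:          # sentinel value: shut the worker down
            break
        results.put(item * 2)     # no shared mutable state is touched

t = threading.Thread(target=worker)
t.start()
for n in range(5):
    tasks.put(n)
tasks.put(None)
t.join()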
A:
It depends on the platform.
Windows threads have to commit around 1MB of memory when created. It's better to have some kind of thread pool than to spawn threads like a madman, to make sure you never allocate more than a fixed number of threads. Also, when you work in Python, you're subject to the Global Interpreter Lock, which hinders code that relies on lots of concurrent threads.
On Unix, you may consider using different processes instead of threads, or look at other asynchronous ways of doing things (the Twisted server framework has interesting ways of handling asynchronous network tasks, and if you feel really adventurous you can take a look at Stackless Python, a continuation framework which doesn't really use kernel threads at all).
A:
You might consider using microthreads if you need concurrency. There's a good article on the subject here. The advantage is that you're not creating "real" threads that eat up resources and cause context switching. Of course the downside is that you're not taking advantage of multicore technology.
This is the approach Kamaelia and stackless take.
If you're doing I/O, you might consider using asynchronous I/O as well. This can be a real pain to program, but it prevents you from having threads fight over CPU time. Unfortunately, I don't know of any platform independent way to do this in Python.
A:
Threads do have some CPU and memory overhead but unless you spawn hundreds or thousands of them, it usually isn't all that significant. The more important issue is that threads make correct programming a lot more difficult if you share any writable data structures between concurrent threads. See the paper The Problem with Threads for an explanation of why they aren't a good abstraction for concurrent programming.
A:
Excellent answers all around! I just wanted to add that, if you decide to go with a thread pool (generally advisable if you decide that threads are suitable for your app), you would be well advised to reuse (and possibly adapt) an existing general-purpose implementation, such as Christopher Arndt's, rather than roll your own from scratch (which is always an instructive undertaking, to be sure, but less productive in terms of time it takes to get you properly working and debugged code;-).
|
Threading In Python
|
I'm new to threading and was wondering if it's bad to spawn a lot of threads for various tasks (in a server environment). Do threads take up a lot more memory/cpu compared to more linear programming?
|
[
"You have to consider multiple things if you want to use multiple threads:\n\nYou can only run #processors threads simultaneously. (Obvious)\nIn Python each thread is a 'kernel thread' which normally takes a non-trivial amount of resources (8 mb stack by default on linux)\nPython has a global interpreter lock, which means only one python instructions can be processed at once independently of the number of processors. This lock is however released if one of your threads waits on IO.\n\nThe conclusion I take from that:\n\nIf you are doing IO (Turbogears, Twisted) or are using properly coded extension modules (numpy) use threads.\nIf you want to execute python code concurrently use processes (easiest with multiprocess module)\n\n",
"Since you're new to threading, there's something else worth bearing in mind - which I prefer to view as parallel scoping of values.\nWith traditional linear/sequential programming for any given object you only have one thread accessing and changing a piece of data. This is generally made safe due to having lexical scope. Specifically one function can operate on a variable's value without affect a global value. If you don't have lexical scope, or poor lexical scoping, then changing the value of a variable named \"foo\" in one function affects another called \"foo\". It's less commonly a problem these days, but still common enough to be alluding to.\nWith threading, you have the same issue, in more subtle ways. Whilst you still have lexical scoping helping you - in that a local value \"X\" inside one function is independent of another local value called \"X\" in another, the fact that data structures are mutable is a major source of bugs in threading.\nSpecifically, if a function is passed a mutable value, then in a threaded environment unless you have taken care, that function cannot guarantee that the value is not being changed by anything else.\nThis shared state is the source of probably 90-99% of bugs in threaded systems, and can be very hard to debug. As a result, if you're going to write a threaded system you should try to bear in mind the distance that your shared values will travel - ie the scope of parallel access.\nIn order to limit bugs you have a handful of options which are known to work:\n\nUse no shared state - pass shared data using thread safe queues\nPlace locks around all shared data, and ensure this is used religiously throughout your code. This can be far harder sometimes than people think. The problem comes from when you \"forget\" to lock an object - something which is remarkably easy for people to do.\nHave a single object - an owner - in charge of shared state. Allow client threads to ask it for copies of values in the shared state, which are accompanied by a token. When they want to update the shared state, they pass back messages to the single object, along with the token they had. The owner can then determine whether an update clash has occured.\n\n1 is most equivalent to unix pipelines. 3 is logically equivalent to version control, and is normally referred to as software transactional memory.\n1 & 3 are modes of concurrency supported in Kamaelia which aims to eliminate bugs caused by class 2. (disclosure, I run the Kamaelia project) 2 isn't supported because it relies on \"always getting everything right\".\nNo matter which approach you use to solve your problem though, bearing in mind this issue, and the ways of dealing with it, and planning upfront how you intend to deal with it will save you no end of headaches later on.\n",
"It depends on the platform. \nWindows threads have to commit around 1MB of memory when created. It's better to have some kind of threadpool than spawning threads like a madman, to make sure you never allocate more than a fixed amount of threads. Also, when you work in Python, you're subject to the Global Interpreter Lock which hinders code that rely on lots of concurrent threads.\nOn Unix, you may consider using different processes instead of threads, or look at other asynchronous way of doing things (the Twisted server framework has interesting ways of handling asynchronous network tasks, and if you feel really adventurous you can take a look at stackless Python, a continuation framework which don't really use kernel threads at all).\n",
"You might consider using microthreads if you need concurrency. There's a good article on the subject here. The advantage is that you're not creating \"real\" threads that eat up resources and cause context switching. Of course the downside is that you're not taking advantage of multicore technology.\nThis is the approach Kamaelia and stackless take.\nIf you're doing I/O, you might consider using asynchronous I/O as well. This can be a real pain to program, but it prevents you from having threads fight over CPU time. Unfortunately, I don't know of any platform independent way to do this in Python.\n",
"Threads do have some CPU and memory overhead but unless you spawn hundreds or thousands of them, it usually isn't all that significant. The more important issue is that threads make correct programming a lot more difficult if you share any writable datastructures between concurrent threads. See the paper The Problem with Threads for an explanation why they aren't a good abstraction for concurrent programming.\n",
"Excellent answers all around! I just wanted to add that, if you decide to go with a thread pool (generally advisable if you decide that threads are suitable for your app), you would be well advised to reuse (and possibly adapt) an existing general-purpose implementation, such as Christopher Arndt's, rather than roll your own from scratch (which is always an instructive undertaking, to be sure, but less productive in terms of time it takes to get you properly working and debugged code;-).\n"
] |
[
19,
5,
4,
2,
1,
1
] |
[] |
[] |
[
"multithreading",
"python"
] |
stackoverflow_0000939754_multithreading_python.txt
|
Q:
Unable to replace the word in a given folder's contents by Sed/Python/Perl
I have a project where I have folders, subfolders, and files. I need to replace the word Masi by the word Bond in each file.
I run the following Sed script called replace unsuccessfully
s/Masi/Bond/
in Zsh by
sed -f PATH/replace PATH2/project/**
It outputs all files, including the ones that do not contain Masi.
Sed is not necessarily the best tool for the task.
I am interested in Python and Perl.
How would you do the replacement in Sed/Perl/Python, such that only the file contents are changed?
A:
To replace the word in all files found in the current directory and subdirectories
perl -p -i -e 's/Masi/Bond/g' $(grep -rl Masi *)
The above won't work if you have spaces in filenames. Safer to do:
find . -type f -exec perl -p -i -e 's/Masi/Bond/g' {} \;
or on a Mac, where filenames often contain spaces
find . -type f -print0 | xargs -0 perl -p -i -e 's/Masi/Bond/g'
Explanations
-p wraps the code in an implicit loop over the input and prints each line
-i means "do not make any backup files"
-e allows you to run perl code in command line
A:
Why not just pass the -i option (man sed) to sed and be done with it? If it doesn't find Masi in a file, the file will just be rewritten with no modification. Or am I missing something?
If you don't want to replace the files' contents inline (which is what the -i will do) you can do exactly as you are now, but throw a grep & xargs in front of it:
grep -rl Masi PATH/project/* | xargs sed -f PATH/replace
Lots of options, but do not write an entire perl script for this (I'll give the one-liner a pass ;)). find, grep, sed, xargs, etc. will always be more flexible, IMHO.
In response to comment:
grep -rl Masi PATH/project/* | xargs sed -n -e '/Masi/ p'
A:
Renaming a folder full of files:
use warnings;
use strict;
use File::Find::Rule;
my @list = File::Find::Rule->new()->name(qr/Masi/)->file->in('./');
for( @list ){
my $old = $_;
my $new = $_;
$new =~ s/Masi/Bond/g;
rename $old , $new ;
}
Replacing Strings in Files
use warnings;
use strict;
use File::Find::Rule;
use File::Slurp;
use File::Copy;
my @list = File::Find::Rule->new()->name("*.something")->file->grep(qr/Masi/)->in('./');
for( @list ){
my $c = read_file( $_ );
    if ( $c =~ s/Masi/Bond/g ) {
File::Copy::copy($_, "$_.bak"); # backup.
write_file( $_ , $c );
}
}
strict (core) - Perl pragma to restrict unsafe constructs
warnings (core) - Perl pragma to control optional warnings
File::Find::Rule - Alternative interface to File::Find
File::Find (core) - Traverse a directory tree.
File::Slurp - Efficient Reading/Writing of Complete Files
File::Copy (core) - Copy files or filehandles
A:
A solution tested on Windows
Requires CPAN module File::Slurp. Will work with standard Unix shell wildcards. Like ./replace.pl PATH/replace.txt PATH2/replace*
#!/usr/bin/perl
use strict;
use warnings;
use File::Glob ':glob';
use File::Slurp;
foreach my $dir (@ARGV) {
my @filelist = bsd_glob($dir);
foreach my $file (@filelist) {
next if -d $file;
my $c=read_file($file);
if ($c=~s/Masi/Bond/g) {
print "replaced in $file\n";
write_file($file,$c);
} else {
print "no match in $file\n";
}
}
}
A:
import glob, os

# Replace "Masi" with "Bond" in the contents of each file;
# change the glob for different filename matching.
for filename in glob.glob("*"):
    if not os.path.isfile(filename):
        continue
    with open(filename) as f:
        contents = f.read()
    with open(filename, "w") as f:
        f.write(contents.replace("Masi", "Bond"))
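A recursive variant of the same idea (a sketch), walking the subfolders the question mentions (PATH2/project is the question's path):
import os

for dirpath, dirnames, filenames in os.walk("PATH2/project"):
    for name in filenames:
        path = os.path.join(dirpath, name)
        with open(path) as f:
            contents = f.read()
        if "Masi" in contents:
            with open(path, "w") as f:
                f.write(contents.replace("Masi", "Bond"))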
|
Unable to replace the word in a given folder's contents by Sed/Python/Perl
|
I have a project where I have folders, subfolders, and files. I need to replace the word Masi by the word Bond in each file.
I run the following Sed script called replace unsuccessfully
s/Masi/Bond/
in Zsh by
sed -f PATH/replace PATH2/project/**
It outputs all files, including the ones that do not contain Masi.
Sed is not necessarily the best tool for the task.
I am interested in Python and Perl.
How would you do the replacement in Sed/Perl/Python, such that only the file contents are changed?
|
[
"To replace the word in all files found in the current directory and subdirectories\nperl -p -i -e 's/Masi/Bond/g' $(grep -rl Masi *)\n\nThe above won't work if you have spaces in filenames. Safer to do:\nfind . -type f -exec perl -p -i -e 's/Masi/Bond/g' {} \\;\n\nor in Mac which has spaces in filenames\nfind . -type f -print0 | xargs -0 perl -p -i -e 's/Masi/Bond/g'\n\nExplanations\n\n-p means print or die\n-i means \"do not make any backup files\"\n-e allows you to run perl code in command line\n\n",
"Why not just pass the -i option (man sed) to sed and be done with it? If it doesn't find Masi in a file, the file will just be rewritten with no modification. Or am I missing something?\nIf you don't want to replace the files' contents inline (which is what the -i will do) you can do exactly as you are now, but throw a grep & xargs in front of it:\ngrep -rl Masi PATH/project/* | xargs sed -f PATH/replace\n\nLots of options, but do not write an entire perl script for this (I'll give the one-liner a pass ;)). find, grep, sed, xargs, etc. will always be more flexible, IMHO.\nIn response to comment:\ngrep -rl Masi PATH/project/* | xargs sed -n -e '/Masi/ p'\n\n",
"Renaming a folder full of files:\nuse warnings;\nuse strict;\nuse File::Find::Rule;\n\nmy @list = File::Find::Rule->new()->name(qr/Masi/)->file->in('./');\n\nfor( @list ){\n my $old = $_;\n my $new = $_;\n $new =~ s/Masi/Bond/g;\n rename $old , $new ;\n}\n\nReplacing Strings in Files\nuse warnings;\nuse strict;\nuse File::Find::Rule;\nuse File::Slurp;\nuse File::Copy;\n\nmy @list = File::Find::Rule->new()->name(\"*.something\")->file->grep(qr/Masi/)->in('./');\n\nfor( @list ){\n my $c = read_file( $_ );\n if ( $c =~ s/Masi/Bond/g; ){\n File::Copy::copy($_, \"$_.bak\"); # backup.\n write_file( $_ , $c );\n }\n}\n\n\nstrict (core) - Perl pragma to restrict unsafe constructs\nwarnings (core) - Perl pragma to control optional warnings\nFile::Find::Rule - Alternative interface to File::Find \nFile::Find (core) - Traverse a directory tree.\nFile::Slurp - Efficient Reading/Writing of Complete Files\nFile::Copy (core) - Copy files or filehandles\n\n",
"A solution tested on Windows\nRequires CPAN module File::Slurp. Will work with standard Unix shell wildcards. Like ./replace.pl PATH/replace.txt PATH2/replace*\n#!/usr/bin/perl\n\nuse strict;\nuse warnings;\nuse File::Glob ':glob';\nuse File::Slurp;\nforeach my $dir (@ARGV) {\n my @filelist = bsd_glob($dir);\n foreach my $file (@filelist) {\n next if -d $file;\n my $c=read_file($file);\n if ($c=~s/Masi/Bond/g) {\n print \"replaced in $file\\n\";\n write_file($file,$c);\n } else {\n print \"no match in $file\\n\";\n }\n }\n}\n\n",
"import glob\nimport os\n\n# Change the glob for different filename matching \nfor filename in glob.glob(\"*\"):\n dst=filename.replace(\"Masi\",\"Bond\")\n os.rename(filename, dst)\n\n"
] |
[
12,
3,
3,
0,
0
] |
[] |
[] |
[
"perl",
"python",
"replace",
"sed"
] |
stackoverflow_0000894802_perl_python_replace_sed.txt
|
Q:
using "range" in a google app engine template for - loop
I've got an App Engine project and in my template I want to do something like
{% for i in range(0, len(somelist)) %}
{{ somelist[i] }} {{ otherlist[i] }}
{% endfor %}
I've tried using 'forloop.counter' to access list items too, but that didn't work out either. Any suggestions?
regards, mux
A:
What you might want to do instead is change the data that you're passing in to the template so that somelist and otherlist are zipped together into a single list:
combined_list = zip(somelist, otherlist)
...
{% for item in combined_list %}
{{ item.0 }} {{ item.1 }}
{% endfor %}
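On the handler side this might look like the following sketch (using the standard google.appengine.ext.webapp template module; somelist, otherlist and path come from your own handler code):
from google.appengine.ext.webapp import template

template_values = {'combined_list': zip(somelist, otherlist)}
self.response.out.write(template.render(path, template_values))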
|
using "range" in a google app engine template for - loop
|
I've got an App Engine project and in my template I want to do something like
{% for i in range(0, len(somelist)) %}
{{ somelist[i] }} {{ otherlist[i] }}
{% endfor %}
I've tried using 'forloop.counter' to access list items too, but that didn't work out either. Any suggestions?
regards, mux
|
[
"What you might want to do instead is change the data that you're passing in to the template so that somelist and otherlist are zipped together into a single list:\ncombined_list = zip(somelist, otherlist)\n...\n{% for item in combined_list %}\n {{ item.0 }} {{ item.1 }}\n{% endfor %}\n\n"
] |
[
6
] |
[] |
[] |
[
"django_templates",
"python"
] |
stackoverflow_0000941282_django_templates_python.txt
|
Q:
Is it reasonable to integrate python with c for performance?
I like to use Python for almost everything and have always been clear in my mind that if for some reason I were to find a bottleneck in my Python code (due to Python's limitations), I could always use a C script integrated into my code.
But then I started to read a guide on how to integrate Python, and in the article the author says:
There are several reasons why one might wish to extend Python in C or C++, such as:
Calling functions in an existing library.
Adding a new builtin type to Python
Optimising inner loops in code
Exposing a C++ class library to Python
Embedding Python inside a C/C++ application
Nothing about performance. So I ask again, is it reasonable to integrate python with c for performance?
A:
* Optimising inner loops in code
Isn't that about performance?
A:
In my experience it is rarely necessary to optimize using C. I prefer to identify bottlenecks and improve algorithms in those areas completely in Python. Using hash tables, caching, and generally re-organizing your data structures to suit future needs has amazing potential for speeding up your program. As your program develops you'll get a better sense of what kind of material can be precalculated, so don't be afraid to go back and redo your storage and algorithms. Additionally, look for chances to kill "two birds with one stone", such as sorting objects as you render them instead of doing huge sorts.
When everything is tuned to the best of your knowledge, I'd consider using an optimizer like Psyco. I've experienced literally 10x performance improvements just by using Psyco and adding one line to my program.
If all else fails, use C in the proper places and you'll get what you want.
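As an aside to the caching point above, a small memoizing decorator (a sketch) is often all you need to remove a hotspot without dropping to C:
def memoize(fn):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def expensive(n):
    return sum(i * i for i in xrange(n))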
A:
Performance is a broad topic so you should be more specific. If the bottleneck in your program involves a lot of networking then rewriting it in C/C++ probably won't make a difference since it's the network calls taking up time, not your code. You would be better off rewriting the slow section of your program to use fewer network calls, thus reducing the time your program spends waiting on network IO. If you're doing math-intensive stuff such as solving differential equations, and you know there are C libraries that can offer better performance than the way you are currently doing it in Python, you may want to rewrite that section of your program to use those libraries to increase its performance.
A:
The C extensions API is notoriously hard to work with, but there are a number of other ways to integrate C code.
For some more usable alternatives see http://www.scipy.org/PerformancePython, in particular the section about using Weave for easy inlining of C code.
Also of interest is Cython, which provides a nice system for integrating with C code. Cython is used for optimization by some well-respected high-performance Python projects such as NumPy and Sage.
As mentioned above, Psyco is another attractive option for optimization, and one which requires nothing more than
import psyco
psyco.bind(myfunction)
Psyco will identify your inner loops and automatically substitute optimized versions of the routines.
A:
C can definitely speed up processor bound tasks. Integrating is even easier now, with the ctypes library, or you could go for any of the other methods you mention.
I feel mercurial has done a good job with the integration if you want to look at their code as an example. The compute intensive tasks are in C, and everything else is python.
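A minimal ctypes sketch of the idea (the library name "libm.so.6" is Linux-specific):
import ctypes

libm = ctypes.CDLL("libm.so.6")
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print libm.sqrt(2.0)    # calls straight into the C math library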
A:
You will gain a large performance boost using C from Python (assuming your code is well written, etc) because Python is interpreted at run time, whereas C is compiled beforehand. This will speed up things quite a bit because with C, your code is simply running, whereas with Python, the Python interpreter must figure out what you are doing and interpret it into machine instructions.
A:
I've been told to use C for the calculating portion and Python for the scripting. So yes, you can integrate both. C is capable of faster calculations than Python.
|
Is it reasonable to integrate python with c for performance?
|
I like to use Python for almost everything and have always been clear in my mind that if for some reason I were to find a bottleneck in my Python code (due to Python's limitations), I could always use a C script integrated into my code.
But then I started to read a guide on how to integrate Python, and in the article the author says:
There are several reasons why one might wish to extend Python in C or C++, such as:
Calling functions in an existing library.
Adding a new builtin type to Python
Optimising inner loops in code
Exposing a C++ class library to Python
Embedding Python inside a C/C++ application
Nothing about performance. So I ask again, is it reasonable to integrate python with c for performance?
|
[
"\n* Optimising inner loops in code\n\nIsn't that about performance ?\n",
"In my experience it is rarely necessary to optimize using C. I prefer to identify bottlenecks and improve algorithms in those areas completely in Python. Using hash tables, caching, and generally re-organizing your data structures to suit future needs has amazing potential for speeding up your program. As your program develops you'll get a better sense of what kind of material can be precalculated, so don't be afraid to go back and redo your storage and algorithms. Additionally, look for chances to kill \"two birds with one stone\", such as sorting objects as you render them instead of doing huge sorts.\nWhen everything is worked to the best of your knowledge, I'd consider using an optimizer like Psyco. I've experienced literally 10x performance improvements just by using Psyco and adding one line to my program.\nIf all else fails, use C in the proper places and you'll get what you want.\n",
"Performance is a broad topic so you should be more specific. If the bottleneck in your program involves a lot of networking then rewriting it in C/C++ probably won't make a difference since it's the network calls taking up time, not your code. You would be better off rewriting the slow section of your program to use fewer network calls thus reducing the time your program spends waiting on entwork IO. If your doing math intensive stuff such as solving differential equations and you know there are C librarys that can offer better performance then the way you are currently doing it in Python you may want to rewrite the section of your program to use those librarys to increase it's performance.\n",
"The C extensions API is notoriously hard to work with, but there are a number of other ways to integrate C code. \nFor some more usable alternatives see http://www.scipy.org/PerformancePython, in particular the section about using Weave for easy inlining of C code. \nAlso of interest is Cython, which provides a nice system for integrating with C code. Cython is used for optimization by some well-respected high-performance Python projects such as NumPy and Sage.\nAs mentioned above, Psyco is another attractive option for optimization, and one which requires nothing more than\nimport psyco\npsyco.bind(myfunction)\n\nPsyco will identify your inner loops and automatically substitute optimized versions of the routines.\n",
"C can definitely speed up processor bound tasks. Integrating is even easier now, with the ctypes library, or you could go for any of the other methods you mention.\nI feel mercurial has done a good job with the integration if you want to look at their code as an example. The compute intensive tasks are in C, and everything else is python.\n",
"You will gain a large performance boost using C from Python (assuming your code is well written, etc) because Python is interpreted at run time, whereas C is compiled beforehand. This will speed up things quite a bit because with C, your code is simply running, whereas with Python, the Python interpreter must figure out what you are doing and interpret it into machine instructions.\n",
"I've been told for the calculating portion use C for the scripting use python. So yes you can integrate both. C is capable of faster calculations than that of python\n"
] |
[
8,
8,
7,
4,
3,
2,
1
] |
[] |
[] |
[
"c",
"performance",
"python"
] |
stackoverflow_0000940982_c_performance_python.txt
|
Q:
Testing time sensitive applications in Python
I've written an auction system in Django. I want to write unit tests but the application is time sensitive (e.g. the amount advertisers are charged is a function of how long their ad has been active on a website). What's a good approach for testing this type of application?
Here's one possible solution: a DateFactory class which provides some methods to generate a predictable date in testing and the realtime value in production. Do you have any thoughts on this approach, or have you tried something else in practice?
A:
In the link you provided, the author somewhat rejects the idea of adding additional parameters to your methods for the sake of unit testing, but in some cases I think you can justify this as just an extension of your business logic. In my opinion, it's a form of inversion of control that can make your model more flexible and possibly even more expressive. For example:
def is_expired(self, check_date=None):
_check_date = check_date or datetime.utcnow()
return self.create_date + timedelta(days=15) < _check_date
Essentially this allows my unit test to supply its own date/time for the purpose of validating my logic.
The argument in the referenced blog seems to be that this mucks up the API. However, I have encountered situations in which production use cases called for supplanting current date/time with an alternate value. In other words, the inversion of control approach eventually became a necessary part of my application.
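A test against that method might then look like this (Ad is a hypothetical model with a create_date field):
import unittest
from datetime import datetime

class ExpiryTest(unittest.TestCase):
    def test_expiry_window(self):
        ad = Ad(create_date=datetime(2009, 1, 1))
        self.assertFalse(ad.is_expired(check_date=datetime(2009, 1, 10)))
        self.assertTrue(ad.is_expired(check_date=datetime(2009, 1, 20)))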
A:
In general I try to make the production code take date objects as input (where the semantics allows). In many testing situations a DateFactory as you describe is what people do.
In Python you can also get away with monkey-patching time functions directly, for example replacing time.time, or swapping the datetime.datetime class for a test double. You need to be careful here to restore the original in the teardown part of the test. This is particularly useful when you are unable to (or it is awkward to) change the class you are testing.
To do this you have
import datetime

def setUp(self):
    # datetime.datetime is a C type whose attributes are read-only,
    # so swap the whole class for a subclass with a fixed now().
    self._real_datetime = datetime.datetime
    datetime.datetime = self._FakeDatetime
    ...

def tearDown(self):
    datetime.datetime = self._real_datetime
I do the substitutions last if there is the slightest possibility that the setup method will fail.
For many cases a custom DateFactory is safer to use, particularly if you have to worry about people forgetting the tearDown portion.
|
Testing time sensitive applications in Python
|
I've written an auction system in Django. I want to write unit tests but the application is time sensitive (e.g. the amount advertisers are charged is a function of how long their ad has been active on a website). What's a good approach for testing this type of application?
Here's one possible solution: a DateFactory class which provides some methods to generate a predictable date in testing and the realtime value in production. Do you have any thoughts on this approach, or have you tried something else in practice?
|
[
"In the link you provided, the author somewhat rejects the idea of adding additional parameters to your methods for the sake of unit testing, but in some cases I think you can justify this as just an extension of your business logic. In my opinion, it's a form of inversion of control that can make your model more flexible and possibly even more expressive. For example:\ndef is_expired(self, check_date=None):\n _check_date = check_date or datetime.utcnow()\n return self.create_date + timedelta(days=15) < _check_date\n\nEssentially this allows my unit test to supply its own date/time for the purpose of validating my logic. \nThe argument in the referenced blog seems to be that this mucks up the API. However, I have encountered situations in which production use cases called for supplanting current date/time with an alternate value. In other words, the inversion of control approach eventually became a necessary part of my application. \n",
"In general I try to make the production code take date objects as input (where the semantics allows). In many testing situations a DateFactory as you describe is what people do.\nIn Python you can also get away with changing the static module methods Datetime.now or Time.now directly. You need to be careful here to replace them in the teardown part of the test. This is particularly useful when you are unable to (or it is awkward to) change the class you are testing.\nTo do this you have\n def setUp(self) \n self.oldNow = Datetime.now\n Datetime.now = self._fakenow\n ...\n\n def tearDown(self)\n Datetime.now = self.oldNow\n\nI do the substitutions last if there is the slightest possiblity that the setup method will fail.\nFor many cases a custom DateFactory is safer to use, particularly if you have to worry about people forgetting the tearDown portion.\n"
] |
[
3,
1
] |
[] |
[] |
[
"datetime",
"django",
"python",
"unit_testing"
] |
stackoverflow_0000765773_datetime_django_python_unit_testing.txt
|
Q:
Regular expression syntax for "match nothing"?
I have a python template engine that heavily uses regexp. It uses concatenation like:
re.compile( regexp1 + "|" + regexp2 + "*|" + regexp3 + "+" )
I can modify the individual substrings (regexp1, regexp2 etc).
Is there any small and light expression that matches nothing, which I can use inside a template where I don't want any matches? Unfortunately, sometimes '+' or '*' is appended to the regexp atom so I can't use an empty string - that will raise a "nothing to repeat" error.
A:
This shouldn't match anything:
re.compile('$^')
So if you replace regexp1, regexp2 and regexp3 with '$^' it will be impossible to find a match, unless you are using multi-line mode.
After some tests I found a better solution
re.compile('a^')
It is impossible to match and will fail earlier than the previous solution. You can replace a with any other character and it will always be impossible to match
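You can verify that it never matches, even against the empty string or with multi-line mode enabled:
import re

matcher = re.compile('a^', re.MULTILINE)
assert matcher.search('') is None
assert matcher.search('a') is None
assert matcher.search('one line\nafter another') is None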
A:
(?!) should always fail to match. It is the zero-width negative look-ahead. If what is in the parentheses matches then the whole match fails. Given that it has nothing in it, it will fail the match for anything (including nothing).
A:
To match an empty string - even in multiline mode - you can use \A\Z, so:
re.compile('\A\Z|\A\Z*|\A\Z+')
The difference is that \A and \Z are start and end of string, whilst ^ and $ can match at the start/end of lines, so $^|$^*|$^+ could potentially match a string containing newlines (if the flag is enabled).
And to fail to match anything (even an empty string), simply attempt to find content before the start of the string, e.g:
re.compile('.\A|.\A*|.\A+')
Since no characters can come before \A (by definition), this will always fail to match.
A:
Maybe '.{0}'?
A:
You could use
\Z..
This is the absolute end of string (Python spells it \Z), followed by two of anything.
If + or * is tacked on the end this still works, refusing to match anything.
A:
Or, use some list comprehension to remove the useless regexp entries and join to put them all together. Something like:
re.compile('|'.join([x for x in [regexp1, regexp2, ...] if x is not None]))
Be sure to add some comments next to that line of code though :-)
|
Regular expression syntax for "match nothing"?
|
I have a python template engine that heavily uses regexp. It uses concatenation like:
re.compile( regexp1 + "|" + regexp2 + "*|" + regexp3 + "+" )
I can modify the individual substrings (regexp1, regexp2 etc).
Is there any small and light expression that matches nothing, which I can use inside a template where I don't want any matches? Unfortunately, sometimes '+' or '*' is appended to the regexp atom so I can't use an empty string - that will raise a "nothing to repeat" error.
|
[
"This shouldn't match anything:\nre.compile('$^')\n\nSo if you replace regexp1, regexp2 and regexp3 with '$^' it will be impossible to find a match. Unless you are using the multi line mode.\n\nAfter some tests I found a better solution\nre.compile('a^')\n\nIt is impossible to match and will fail earlier than the previous solution. You can replace a with any other character and it will always be impossible to match\n",
"(?!) should always fail to match. It is the zero-width negative look-ahead. If what is in the parentheses matches then the whole match fails. Given that it has nothing in it, it will fail the match for anything (including nothing).\n",
"To match an empty string - even in multiline mode - you can use \\A\\Z, so:\nre.compile('\\A\\Z|\\A\\Z*|\\A\\Z+')\n\nThe difference is that \\A and \\Z are start and end of string, whilst ^ and $ these can match start/end of lines, so $^|$^*|$^+ could potentially match a string containing newlines (if the flag is enabled).\nAnd to fail to match anything (even an empty string), simply attempt to find content before the start of the string, e.g:\nre.compile('.\\A|.\\A*|.\\A+')\n\nSince no characters can come before \\A (by definition), this will always fail to match.\n",
"Maybe '.{0}'?\n",
"You could use\n\\z..\nThis is the absolute end of string, followed by two of anything\nIf + or * is tacked on the end this still works refusing to match anything \n",
"Or, use some list comprehension to remove the useless regexp entries and join to put them all together. Something like:\nre.compile('|'.join([x for x in [regexp1, regexp2, ...] if x != None]))\n\nBe sure to add some comments next to that line of code though :-)\n"
] |
[
154,
56,
17,
5,
3,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000940822_python_regex.txt
|
Q:
Python - Library Problems
I'm relatively new to Python and am having problems programming with Scapy, the Python network manipulation tool. However, I can't tell if it's as much a Scapy problem as it is a being-a-Python-newbie problem. On the scapy site, they give a sample program which I'm not able to run on my own machine:
#! /usr/bin/env python
import sys
from scapy import sr1,IP,ICMP
p=sr1(IP(dst=sys.argv[1])/ICMP())
if p:
p.show()
To which I get:
Traceback (most recent call last):
File "test.py", line 4, in <module>
from scapy import sr1,IP,ICMP
ImportError: cannot import name sr1
So my question then is: when installing Python libraries, do I need to change my path or anything similar? Also, is there something I can run in the interpreter to tell me the contents of the scapy package? I can run from scapy import * just fine, but since I have no idea what's inside it, it's hard to use it.
A:
With the caveat from Federico Ramponi "You should use scapy as an interpreter by its own, not as a library", I want to answer the non-scapy-specific parts of the question.
Q: when installing Python libraries, do I need to change my path or anything similar?
A: I think you are talking about changing PYTHONPATH system-wide. This is usually not required or a good idea.
Third party Python libraries should either be installed in system directories, such as /usr/lib/python2.5/site-packages, or installed locally, in which case you might want to set PYTHONPATH in your Makefile or in a driver shell script.
Q: Also, is there something I can run in the interpreter to tell me the contents of the scapy package?
A: You can do something like this:
>>> import scapy
>>> dir(scapy)
Or even better:
>>> import scapy
>>> help(scapy)
Bonus question asked in a comment.
Q: Is 'import scapy' the same as 'from scapy import *'?
A: import scapy binds the scapy name in the local namespace to the scapy module object. OTOH, from scapy import * does not bind the module name, but all public names defined in the scapy module are bound in the local namespace.
See paragraphs 6 and 7 of the Python Reference Manual, 6.12 The import statement.
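A neutral illustration of the binding difference (using the math module so it works anywhere):
>>> import math
>>> math.pi            # only the name "math" is bound; members via attribute access
3.1415926535897931
>>> from math import *
>>> pi                 # all public names are now bound directly
3.1415926535897931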
A:
I had the same problem. In scapy v2.x use
from scapy.all import *
instead of the v1.x
from scapy import *
as written here
Enjoy it =)
A:
It tells you that it can't find sr1 in scapy. Not sure just how much of a newbie you are, but the interpreter is always your friend. Fire up the interpreter (just type "python" on the commandline), and at the prompt (>>>) type (but don't type the >'s, they'll show up by themselves):
>>> import scapy
>>> from pprint import pformat
>>> print pformat(dir(scapy))
The last line should print a lot of stuff. Do you see 'sr1', 'IP', and 'ICMP' there anywhere? If not, the example is at fault.
Try also help(scapy)
That's about how much I can help you without installing scapy and looking at your actual source-file myself.
A:
The scapy package is a tool for network manipulation and monitoring. I'm curious as to what you're trying to do with it. It's rude to spy on your friends. :-)
coventry@metta:~/src$ wget -q http://www.secdev.org/projects/scapy/files/scapy-latest.zip
coventry@metta:~/src$ unzip -qq scapy-latest.zip
warning [scapy-latest.zip]: 61 extra bytes at beginning or within zipfile
(attempting to process anyway)
coventry@metta:~/src$ find scapy-2.0.0.10 -name \*.py | xargs grep sr1
scapy-2.0.0.10/scapy/layers/dns.py: r=sr1(IP(dst=nameserver)/UDP()/DNS(opcode=5,
scapy-2.0.0.10/scapy/layers/dns.py: r=sr1(IP(dst=nameserver)/UDP()/DNS(opcode=5,
scapy-2.0.0.10/scapy/layers/inet6.py:from scapy.sendrecv import sr,sr1,srp1
scapy-2.0.0.10/scapy/layers/snmp.py: r = sr1(IP(dst=dst)/UDP(sport=RandShort())/SNMP(community=community, PDU=SNMPnext(varbindlist=[SNMPvarbind(oid=oid)])),timeout=2, chainCC=1, verbose=0, retry=2)
scapy-2.0.0.10/scapy/layers/inet.py:from scapy.sendrecv import sr,sr1,srp1
scapy-2.0.0.10/scapy/layers/inet.py: p = sr1(IP(dst=target, options="\x00"*40, proto=200)/"XXXXYYYYYYYYYYYY",timeout=timeout,verbose=0)
scapy-2.0.0.10/scapy/sendrecv.py:def sr1(x,filter=None,iface=None, nofilter=0, *args,**kargs):
According to the last line, sr1 is a function defined in scapy.sendrecv. Someone should file a documentation bug with the author.
|
Python - Library Problems
|
I'm relatively new to Python and am having problems programming with Scapy, the Python network manipulation tool. However, I can't tell if it's as much a Scapy problem as it is a being-a-Python-newbie problem. On the scapy site, they give a sample program which I'm not able to run on my own machine:
#! /usr/bin/env python
import sys
from scapy import sr1,IP,ICMP
p=sr1(IP(dst=sys.argv[1])/ICMP())
if p:
p.show()
To which I get:
Traceback (most recent call last):
File "test.py", line 4, in <module>
from scapy import sr1,IP,ICMP
ImportError: cannot import name sr1
So my question then is: when installing Python libraries, do I need to change my path or anything similar? Also, is there something I can run in the interpreter to tell me the contents of the scapy package? I can run from scapy import * just fine, but since I have no idea what's inside it, it's hard to use it.
|
[
"With the caveat from Federico Ramponi \"You should use scapy as an interpreter by its own, not as a library\", I want to answer the non-scapy-specific parts of the question.\nQ: when installing Python libraries, do I need to change my path or anything similar?\nA: I think you are talking about changing PYTHONPATH system-wide. This is usually not required or a good idea.\nThird party Python libraries should either be installed in system directories, such as /usr/lib/python2.5/site-packages, or installed locally, in which case you might want to set PYTHONPATH in your Makefile or a in driver shell script.\nQ: Also, is there something I can run in the interpreter to tell me the contents of the scapy package?\nA: You can do something like this:\n>>> import scapy\n>>> dir(scapy)\n\nOr even better:\n>>> import scapy\n>>> help(scapy)\n\nBonus question asked in a comment.\nQ: Is 'import scapy' the same as 'from scapy import *'?\nA: import scapy binds the scapy name in the local namespace to the scapy module object. OTOH, from scapy import * does not bind the module name, but all public names defined in the scapy module are bound in the local namespace.\nSee paragraphs 6 and 7 of the Python Reference Manual, 6.12 The import statement.\n",
"I had the same problem, in the scapy v2.x use \n from scapy.all import * \n\ninstead the v1.x\n from scapy import *\n\nas written here\nEnjoy it =)\n",
"It tells you that it can't find sr1 in scapy. Not sure just how newbite you are, but the interpreter is always your friend. Fire up the interpreter (just type \"python\" on the commandline), and at the prompt (>>>) type (but don't type the >'s, they'll show up by themselves):\n>>> import scapy\n>>> from pprint import pformat\n>>> pformat(dir(scapy))\n\nThe last line should print a lot of stuff. Do you see 'sr1', 'IP', and 'ICMP' there anywhere? If not, the example is at fault.\nTry also help(scapy)\nThat's about how much I can help you without installing scapy and looking at your actual source-file myself.\n",
"The scapy package is a tool for network manipulation and monitoring. I'm curious as to what you're trying to do with it. It's rude to spy on your friends. :-)\ncoventry@metta:~/src$ wget -q http://www.secdev.org/projects/scapy/files/scapy-latest.zip\ncoventry@metta:~/src$ unzip -qq scapy-latest.zip \nwarning [scapy-latest.zip]: 61 extra bytes at beginning or within zipfile\n (attempting to process anyway)\ncoventry@metta:~/src$ find scapy-2.0.0.10 -name \\*.py | xargs grep sr1\nscapy-2.0.0.10/scapy/layers/dns.py: r=sr1(IP(dst=nameserver)/UDP()/DNS(opcode=5,\nscapy-2.0.0.10/scapy/layers/dns.py: r=sr1(IP(dst=nameserver)/UDP()/DNS(opcode=5,\nscapy-2.0.0.10/scapy/layers/inet6.py:from scapy.sendrecv import sr,sr1,srp1\nscapy-2.0.0.10/scapy/layers/snmp.py: r = sr1(IP(dst=dst)/UDP(sport=RandShort())/SNMP(community=community, PDU=SNMPnext(varbindlist=[SNMPvarbind(oid=oid)])),timeout=2, chainCC=1, verbose=0, retry=2)\nscapy-2.0.0.10/scapy/layers/inet.py:from scapy.sendrecv import sr,sr1,srp1\nscapy-2.0.0.10/scapy/layers/inet.py: p = sr1(IP(dst=target, options=\"\\x00\"*40, proto=200)/\"XXXXYYYYYYYYYYYY\",timeout=timeout,verbose=0)\nscapy-2.0.0.10/scapy/sendrecv.py:def sr1(x,filter=None,iface=None, nofilter=0, *args,**kargs):\n\nAccording to the last line, sr1 is a function defined in scapy.sendrecv. Someone should file a documentation bug with the author.\n"
] |
[
6,
4,
3,
1
] |
[] |
[] |
[
"networking",
"python",
"scapy"
] |
stackoverflow_0000229756_networking_python_scapy.txt
|
Q:
How do I represent many to many relation in the form of Google App Engine?
class Entry(db.Model):
...
class Tag(db.Model):
...
class EntryTag(db.Model):
entry = db.ReferenceProperty(Entry, required=True, collection_name='tag_set')
tag = db.ReferenceProperty(Tag, required=True, collection_name='entry_set')
The template should be {{form.as_table}}
The question is: how do I make a form to create an Entry where I can choose some of the tags to add?
A:
You will need to create a formset for your EntryTag class. For more information, see the Django formset docs.
Otherwise, you may wish to create a custom form with a ModelMultipleChoiceField and add the EntryTag entities using a custom view.
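A minimal sketch of that second approach (hedged: the 'tags' field name, the Tag.name property, and the view wiring are illustrative assumptions, not part of the question's models):
from django import forms

class EntryForm(forms.Form):
    # other Entry fields would go here
    tags = forms.MultipleChoiceField(required=False)

def make_entry_form(data=None):
    form = EntryForm(data)
    # build choices per request, since the datastore contents change
    form.fields['tags'].choices = [(str(t.key()), t.name) for t in Tag.all()]
    return form

# In the POST handler, after saving the Entry, link one EntryTag per pick:
# for key in form.cleaned_data['tags']:
#     EntryTag(entry=entry, tag=Tag.get(key)).put()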
|
How do I represent many to many relation in the form of Google App Engine?
|
class Entry(db.Model):
...
class Tag(db.Model):
...
class EntryTag(db.Model):
entry = db.ReferenceProperty(Entry, required=True, collection_name='tag_set')
tag = db.ReferenceProperty(Tag, required=True, collection_name='entry_set')
The template should be {{form.as_table}}
The question is: how do I make a form to create an Entry where I can choose some of the tags to add?
|
[
"You will need to create a formset for your EntryTag class. For more information, see the Django formset docs.\nOtherwise, you may wish to create a custom form with a ModelMultipleChoiceField and add the EntryTag entities using a custom view.\n"
] |
[
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000856462_google_app_engine_python.txt
|
Q:
How to Change Mouse Cursor in PythonCard
How do I change the mouse cursor to indicate a waiting state using Python and PythonCard?
I didn't see anything in the documentation.
A:
PythonCard builds on top of wx, so if you import wx you should be able to build a suitable cursor (e.g. with wx.CursorFromImage), set it (e.g. with wx.BeginBusyCursor) when your wait begins, and end it (with wx.EndBusyCursor) when your wait ends.
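For example, a minimal sketch of the begin/end pairing (do_slow_work is a placeholder for whatever the wait covers):
import wx

def with_busy_cursor(do_slow_work):
    wx.BeginBusyCursor()    # swap in the wait cursor
    try:
        do_slow_work()      # the long-running operation
    finally:
        wx.EndBusyCursor()  # restore the normal cursor, even on error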
|
How to Change Mouse Cursor in PythonCard
|
How do I change the mouse cursor to indicate a waiting state using Python and PythonCard?
I didn't see anything in the documentation.
|
[
"PythonCard builds on top of wx, so if you import wx you should be able to build a suitable cursor (e.g. with wx.CursorFromImage), set it (e.g. with wx.BeginBusyCursor) when your wait begins, and end it (with wx.EndBusyCursor) when your wait ends.\n"
] |
[
1
] |
[] |
[] |
[
"cursor",
"mouse",
"python",
"pythoncard",
"user_interface"
] |
stackoverflow_0000942730_cursor_mouse_python_pythoncard_user_interface.txt
|
Q:
Extracting YouTube Video's author using Python and YouTubeAPI
How do I get the author/username from an object using:
GetYouTubeVideoEntry(video_id=youtube_video_id_to_output)
I'm using Google's gdata.youtube.service Python library
Thanks in advance! :)
A:
So because YouTube's API is based on GData, which is based on Atom, the 'author' object is an array with name objects, which can contain names, URLs, etc.
This is what you want:
>>> client = gdata.youtube.service.YouTubeService()
>>> video = client.GetYouTubeVideoEntry(video_id='CoYBkXD0QeU')
>>> video.author[0].name.text
'GoogleDevelopers'
A:
Have you tried something like this?
foo = GetYouTubeVideoEntry(video_id=youtube_video_id_to_output)
foo.author
The docs for YouTubeVideoEntry aren't great, but the __init__ method seems to accept an author.
|
Extracting YouTube Video's author using Python and YouTubeAPI
|
How do I get the author/username from an object using:
GetYouTubeVideoEntry(video_id=youtube_video_id_to_output)
I'm using Google's gdata.youtube.service Python library
Thanks in advance! :)
|
[
"So because YouTube's API is based on GData, which is based on Atom, the 'author' object is an array with name objects, which can contain names, URLs, etc.\nThis is what you want:\n>>> client = gdata.youtube.service.YouTubeService()\n>>> video = client.GetYouTubeVideoEntry(video_id='CoYBkXD0QeU')\n>>> video.author[0].name.text\n'GoogleDevelopers'\n\n",
"Have you tried something like this?\nfoo = GetYouTubeVideoEntry(video_id=youtube_video_id_to_output)\nfoo.author\n\nThe docs for YouTubeVideoEntry aren't great, but the __init__ method seems to accept an author.\n"
] |
[
6,
0
] |
[] |
[] |
[
"python",
"youtube",
"youtube_api"
] |
stackoverflow_0000938742_python_youtube_youtube_api.txt
|
Q:
Flash-based file upload (swfupload) fails with Apache/mod-wsgi
This question has been retitled/retagged so that others may more easily find the solution to this problem.
I am in the process of trying to migrate a project from the Django development server to an Apache/mod-wsgi environment. If you had asked me yesterday I would have said the transition was going very smoothly. My site is up, accessible, fast, etc. However, a portion of the site relies on file uploads, and with this I am experiencing the strangest and most maddening issue. The particular page in question uses swfupload to POST a file and associated metadata to a url which catches the file and initiates some server-side processing. This works perfectly on the development server, but whenever I POST to this url on Apache the Django request object comes up empty--no GET, POST, or FILES data.
I have eliminated client-side issues by snooping with Wireshark. As far as I can discern the root cause stems from some sort of Apache configuration issue, possibly related to the temporary file directory I am trying to access. I am a relative newcomer to Apache configuration and have been banging my head against this for hours.
My Apache config:
<VirtualHost *:80>
ServerAdmin [email protected]
ServerName sitename.com
ServerAlias www.sitename.com
LogLevel warn
WSGIDaemonProcess sitename processes=2 maximum-requests=500 threads=1
WSGIProcessGroup sitename
WSGIScriptAlias / /home/user/src/sitename/apache/django.wsgi
Alias /static /home/user/src/sitename/static
Alias /media /usr/share/python-support/python-django/django/contrib/admin/media
</VirtualHost>
My intuition is that this may have something to do with the permissions of the file upload directory I have specified in my Django settings.py ('/home/sk/src/sitename/uploads/'); however, my Apache error log doesn't suggest anything of the sort, even with the log level bumped up to debug.
Suggestions on how I should go about debugging this?
A:
Normally Apache runs as the user "www-data", and you could have problems if it doesn't have read/write access. However, your setup doesn't seem to use Apache to access '/home/sk/src/sitename/uploads'; my understanding from this config file is that unless the request hits /static or /media, Apache will hand it off to WSGI, so it might be good to check those permissions and logs rather than the Apache ones.
A:
Another possibility is a bug in "old" releases of mod_wsgi (I went crazy trying to find and fix it). More info is in this bug report. I fixed it (for curl uploads) thanks to the following hint (which works on the CLI too, using the -H switch).
|
Flash-based file upload (swfupload) fails with Apache/mod-wsgi
|
This question has been retitled/retagged so that others may more easily find the solution to this problem.
I am in the process of trying to migrate a project from the Django development server to an Apache/mod-wsgi environment. If you had asked me yesterday I would have said the transition was going very smoothly. My site is up, accessible, fast, etc. However, a portion of the site relies on file uploads, and with this I am experiencing the strangest and most maddening issue. The particular page in question uses swfupload to POST a file and associated metadata to a url which catches the file and initiates some server-side processing. This works perfectly on the development server, but whenever I POST to this url on Apache the Django request object comes up empty--no GET, POST, or FILES data.
I have eliminated client-side issues by snooping with Wireshark. As far as I can discern the root cause stems from some sort of Apache configuration issue, possibly related to the temporary file directory I am trying to access. I am a relative newcomer to Apache configuration and have been banging my head against this for hours.
My Apache config:
<VirtualHost *:80>
ServerAdmin [email protected]
ServerName sitename.com
ServerAlias www.sitename.com
LogLevel warn
WSGIDaemonProcess sitename processes=2 maximum-requests=500 threads=1
WSGIProcessGroup sitename
WSGIScriptAlias / /home/user/src/sitename/apache/django.wsgi
Alias /static /home/user/src/sitename/static
Alias /media /usr/share/python-support/python-django/django/contrib/admin/media
</VirtualHost>
My intuition is that this may have something to do with the permissions of the file upload directory I have specified in my Django settings.py ('/home/sk/src/sitename/uploads/'); however, my Apache error log doesn't suggest anything of the sort, even with the log level bumped up to debug.
Suggestions on how I should go about debugging this?
|
[
"Normally apache runs as a user \"www-data\"; and you could have problems if it doesn't have read/write access. However, your setup doesn't seem to use apache to access the '/home/sk/src/sitename/uploads'; my understanding from this config file is unless it hit /static or /media, apache will hand it off WGSI, so it might be good to check out those permissions and logs, rather than the apache ones.\n",
"Another possibility is a bug in \"old\" releases of mod_wsgi (I got crazy to find, and fix, it). More info in this bug report. I fixed it (for curl uploads) thanks to the following hint (that works on the CLI too, using the -H switch).\n"
] |
[
3,
2
] |
[] |
[] |
[
"apache",
"file_upload",
"flash",
"mod_wsgi",
"python"
] |
stackoverflow_0000943000_apache_file_upload_flash_mod_wsgi_python.txt
|
Q:
How to construct a web file browser?
Goal: simple browser app, for navigating files on a web server, in a tree view.
Background: Building a web site as a learning experience, w/ Apache, mod_python, Python code. (No mod_wsgi yet.)
What tools should I learn to write the browser tree? I see JavaScript, Ajax, neither of which I know. Learn them? Grab a JS example from the web and rework? Can such a thing be built in raw HTML? Python I'm advanced beginner but I realize that's server side.
If you were going to build such a toy from scratch, what would you use? What would be the totally easy, cheesy way, the intermediate way, the fully professional way?
No Django yet please -- This is an exercise in learning web programming nuts and bolts.
A:
First, switch to mod_wsgi.
Second, write a hello world in Python using mod_wsgi.
Third, change your hello world to show the results of os.listdir().
I think you're approximately done.
As you mess with this, you'll realize that transforming the content you have (information from os.listdir) into presentation in HTML is a pain in the neck.
You can add Jinja templates to this to separate content from presentation.
Finally, you'll notice that you've started to build Django the hard way. Stop. Learn Django. You'll see that it's still "programming nuts and bolts". It doesn't "conceal" or "abstract" much away from the web server development experience. It just saves you from reinventing the wheel.
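For reference, steps two and three together might look roughly like this (a sketch only; the hard-coded /tmp root is an assumption):
import os

def application(environ, start_response):
    # serve a plain-text listing of the requested directory under /tmp
    path = os.path.join('/tmp', environ.get('PATH_INFO', '/').lstrip('/'))
    body = '\n'.join(os.listdir(path))
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]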
A:
If you want to make an interactive browser, you have to learn JS and Ajax.
If you want to build a browser based only on links, Python would be enough.
A:
The "totally cheesy" way:
python -m SimpleHTTPServer
This will serve up the files in the current directory at http://localhost:8000/
A:
set "Indexes" option to the directory in the apache config.
To learn how to build webapps in python, learn django.
|
How to construct a web file browser?
|
Goal: simple browser app, for navigating files on a web server, in a tree view.
Background: Building a web site as a learning experience, w/ Apache, mod_python, Python code. (No mod_wsgi yet.)
What tools should I learn to write the browser tree? I see JavaScript, Ajax, neither of which I know. Learn them? Grab a JS example from the web and rework? Can such a thing be built in raw HTML? Python I'm advanced beginner but I realize that's server side.
If you were going to build such a toy from scratch, what would you use? What would be the totally easy, cheesy way, the intermediate way, the fully professional way?
No Django yet please -- This is an exercise in learning web programming nuts and bolts.
|
[
"First, switch to mod_wsgi.\nSecond, write a hello world in Python using mod_wsgi.\nThird, change your hello world to show the results of os.listdir().\nI think you're approximately done.\nAs you mess with this, you'll realize that transforming the content you have (information from os.listdir) into presentation in HTML is a pain in the neck.\nYou can add Jinja templates to this to separate content from presentation. \nFinally, you'll notice that you've started to build Django the hard way. Stop. Learn Django. You'll see that it's still \"programming nuts and bolts\". It doesn't \"conceal\" or \"abstract\" much away from the web server development experience. It just saves you from reinventing the wheel.\n",
"If you want to make interactive browser, you have to learn JS and ajax. \nIf you want to build only browser based on links, python would be enough.\n",
"The \"totally cheesy\" way:\npython -m SimpleHTTPServer\n\nThis will serve up the files in the current directory at http://localhost:8000/\n",
"set \"Indexes\" option to the directory in the apache config.\nTo learn how to build webapps in python, learn django.\n"
] |
[
10,
1,
1,
0
] |
[] |
[] |
[
"html",
"javascript",
"python",
"web_applications"
] |
stackoverflow_0000941638_html_javascript_python_web_applications.txt
|
Q:
Connect to a running instance of Visual Studio 2003 using COM, build and read output
For Visual Studio 6.0, I can connect to a running instance like:
o = GetActiveObject("MSDev.Application")
What prog ID do I use for Visual Studio 2003?
How do I execute a 'Build Solution' once I have the COM object that references the VS2003 instance?
How do I get the string contents of the build output window after executing the build solution command?
Yes, I am aware that I can build a solution from the command line. But in this case, I need to connect to a running instance of Visual Studio.
EDIT: found and submitted an answer, see below.
A:
After a bit of research (mainly looking at EnvDTE docs), I found the solution to this myself:
To build current solution (code in Python):
def build_active_solution(progid="VisualStudio.DTE.7.1"):
from win32com.client import GetActiveObject
dte = GetActiveObject(progid)
sb = dte.Solution.SolutionBuild
sb.Build(True)
output = dte.Windows['Output'].Object.ActivePane.TextDocument.Selection
output.SelectAll()
return output.Text
|
Connect to a running instance of Visual Studio 2003 using COM, build and read output
|
For Visual Studio 6.0, I can connect to a running instance like:
o = GetActiveObject("MSDev.Application")
What prog ID do I use for Visual Studio 2003?
How do I execute a 'Build Solution' once I have the COM object that references the VS2003 instance?
How do I get the string contents of the build output window after executing the build solution command?
Yes, I am aware that I can build a solution from the command line. But in this case, I need to connect to a running instance of Visual Studio.
EDIT: found and submitted an answer, see below.
|
[
"After a bit of research (mainly looking at EnvDTE docs), I found the solution to this myself:\nTo build current solution (code in Python):\ndef build_active_solution(progid=\"VisualStudio.DTE.7.1\"):\n from win32com.client import GetActiveObject\n dte = GetActiveObject(progid)\n sb = dte.Solution.SolutionBuild\n sb.Build(True)\n\n output = dte.Windows['Output'].Object.ActivePane.TextDocument.Selection\n output.SelectAll()\n return output.Text\n\n"
] |
[
2
] |
[] |
[] |
[
"com",
"python",
"visual_studio"
] |
stackoverflow_0000943863_com_python_visual_studio.txt
|
Q:
Python: defaultdict became unmarshallable object in 2.6?
Did defaultdicts become non-marshallable as of Python 2.6? The following works under 2.5 but fails under 2.6 with "ValueError: unmarshallable object" on OS X 10.5.6, python-2.6.1-macosx2008-12-06.dmg from python.org:
from collections import defaultdict
import marshal
dd = defaultdict(list)
marshal.dump(dd, file('/tmp/junk.bin','wb') )
A:
Marshal was deliberately changed to not support subclasses of built-in types. Marshal was never supposed to handle defaultdicts, but happened to since they are a subclass of dict. Marshal is not a general "persistence" module; only None, integers, long integers, floating point numbers, strings, Unicode objects, tuples, lists, sets, dictionaries, and code objects are supported.
Python 2.5:
>>> marshal.dumps(defaultdict(list))
'{0'
>>> marshal.dumps(dict())
'{0'
If for some reason you really want to marshal a defaultdict you can convert it to a dict first, but odds are you should be using a different serialization mechanism, like pickling.
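Both options side by side (a sketch; note the dict() copy drops the default factory):
import marshal
import cPickle as pickle
from collections import defaultdict

dd = defaultdict(list)
dd['x'].append(1)

plain = marshal.dumps(dict(dd))  # works: a plain dict is marshallable
full = pickle.dumps(dd, -1)      # keeps the defaultdict and its factory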
A:
Regarding performance issues: encoding a list of ~600,000 dicts, each with 4 key/values, where one of the values holds a list (of length around 1-3) of 2-key/value dicts:
In [27]: timeit(cjson.encode, data)
4.93589496613
In [28]: timeit(cPickle.dumps, data, -1)
141.412974119
In [30]: timeit(marshal.dumps, data, marshal.version)
1.13546991348
|
Python: defaultdict became unmarshallable object in 2.6?
|
Did defaultdicts become non-marshallable as of Python 2.6? The following works under 2.5 but fails under 2.6 with "ValueError: unmarshallable object" on OS X 10.5.6, python-2.6.1-macosx2008-12-06.dmg from python.org:
from collections import defaultdict
import marshal
dd = defaultdict(list)
marshal.dump(dd, file('/tmp/junk.bin','wb') )
|
[
"Marshal was deliberately changed to not support subclasses of built-in types. Marshal was never supposed to handle defaultdicts, but happened to since they are a subclass of dict. Marshal is not a general \"persistence\" module; only None, integers, long integers, floating point numbers, strings, Unicode objects, tuples, lists, sets, dictionaries, and code objects are supported.\nPython 2.5:\n>>> marshal.dumps(defaultdict(list))\n'{0'\n>>> marshal.dumps(dict())\n'{0'\n\nIf for some reason you really want to marshal a defaultdict you can convert it to a dict first, but odds are you should be using a different serialization mechanism, like pickling.\n",
"wrt performance issues.. encoding a list of ~600000 dicts, each with 4 key/values, one of the values has a list (around 1-3 length) of 2 key/val dicts:\nIn [27]: timeit(cjson.encode, data)\n4.93589496613\n\nIn [28]: timeit(cPickle.dumps, data, -1)\n141.412974119\n\nIn [30]: timeit(marshal.dumps, data, marshal.version)\n1.13546991348\n\n"
] |
[
11,
7
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000665061_python.txt
|
Q:
Execution of a OS command from a Python daemon
I've got a daemon.py with a callback. How should I make the handler function execute a OS command?
A:
When I learned Python some time ago, I used:
import os
os.system('ls -lt')
but nowadays the recommended approach is the subprocess module (the commands module was removed in Python 3.x, and os.popen() is superseded by subprocess).
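For instance, a sketch of what the daemon's handler could run (the 'ls -lt' command is just an example):
import subprocess

def handle_event():
    # run the command, capture stdout, and surface failures explicitly
    proc = subprocess.Popen(['ls', '-lt'], stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError('command exited with %d' % proc.returncode)
    return out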
|
Execution of a OS command from a Python daemon
|
I've got a daemon.py with a callback. How should I make the handler function execute a OS command?
|
[
"when i learned Python some time ago, I used:\nimport os\nos.system('ls -lt')\n\nbut it seems like in Python 3.x, the recommended use is commands or os.popen()\n"
] |
[
-2
] |
[] |
[] |
[
"bash",
"command",
"handler",
"operating_system",
"python"
] |
stackoverflow_0000944501_bash_command_handler_operating_system_python.txt
|
Q:
Creating connection between two computers in python
The question: How do I create a python application that can connect and send packets over the internet to another computer running the same application? Is there any existing code/library I could use?
The background: I am pretty new to programming (HS senior). I've created a lot of simple things in python but I've recently decided to start on a bigger project. I'm considering creating a Magic: the Gathering booster draft simulator, but I'm not sure if it is feasible given my skill set so I'm asking around before I get started. The application would need to send data between computers about which cards are being picked/passed.
Thanks!
A:
Twisted is a Python event-driven networking engine licensed under MIT. This means that a single machine can communicate with one or more other machines while doing other things between data being received and sent, all asynchronously, and all running in a single thread/process.
It supports many protocols out of the box, so you can just as well use an existing one. That's better because you get support for the protocol from 3rd party software (e.g. using HTTP for communication means middleware software that uses HTTP will be compatible: proxies etc.)
It also makes it easy to create your own communication protocol, if that's what you want.
The documentation is filled with examples.
A:
The standard library includes SocketServer (documented here), which might do what you want.
However I wonder if a better solution might be to use a message queue. Lots of good implementations already exist, including Python interfaces. I've used RabbitMQ before. The idea is that the computers both subscribe to the queue, and can either post or listen for events.
A:
A great place to start looking is the Python Standard Library. In particular there are a few sections that will be relevant:
18. Interprocess Communication and Networking
19. Internet Data Handling
21. Internet Protocols and Support
Since you mentioned that you have no experience with this, I'd suggest starting with an HTTP-based implementation. The HTTP protocol is fairly simple and easy to work with. In addition, there are nice frameworks to support this operation, such as webpy for the server and httplib from the standard library for the client.
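As a taste of the HTTP route, the client side could be as small as this (the /pick URL and the card payload are made-up placeholders for the draft protocol):
import httplib
import urllib

params = urllib.urlencode({'card': 'Lightning Bolt'})
conn = httplib.HTTPConnection('localhost', 8080)  # the other player's app
conn.request('POST', '/pick', params,
             {'Content-Type': 'application/x-www-form-urlencoded'})
response = conn.getresponse()
print response.status, response.read()
conn.close()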
If you really want to dive into networking, then a socket based implementation might be educational. This will teach you the underlying concepts that are used in lots of networking code while resulting in an interface that is similar to a file stream.
A:
Also, check out Pyro (Python Remote Objects). See this answer for more details.
Pyro basically allows you to publish Python object instances as services that can be called remotely. Pyro is probably the easiest way to implement Python-to-python process communication.
A:
It's also worth looking at Kamaelia for this sort of thing - its original use case was network systems, and it makes building such things relatively intuitive.
Some links: Overview of basic TCP system, Simple Chat server, Building a layered protocol, walk-through of how to evolve new components. Other extreme: P2P radio system: source, peer.
If it makes any difference, we've tested whether the system is accessible to novices through involvement in google summer of code 3 years in a row, actively taking on both experienced and inexperienced developers. All of them managed to build useful systems.
Essentially, if you've ever played with unix pipelines the ideas should be familiar.
Caveat: I wrote major chunks of Kamaelia :-)
If you want to learn how to do these things though, playing with a few different approaches makes sense, and you should definitely check out Twisted (the standard answer to this question), Pyro & the standard library tools. Each has a different approach, and learning them will definitely benefit you!
However, like nosklo, I would recommend against using the socket library directly and use a library instead - simply because it is much much harder to get sockets programming correct than people tend to realise.
A:
Communication will take place with sockets, one way or another. Just a question of whether you use existing higher-level libraries, or roll your own.
If you're doing this as a learning experience, you probably want to start as low-level as you can, to see the real nuts and bolts. That means you probably want to start with a SocketServer, using a TCP connection (TCP basically guarantees delivery of data; UDP does not).
Google for some simple example code. Setting one up is very easy. But you will have to define all the details of your communications protocol: which end sends when and what, which end listens and when, what exactly the listener will expect, does it reply to confirm receipt, etc.
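To make that concrete, here is about the smallest TCP server the standard library allows (a sketch; port 9999 and the line-per-message protocol are arbitrary choices):
import SocketServer  # 'socketserver' in Python 3

class PickHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # one line per message: read the pick, acknowledge it back
        line = self.rfile.readline().strip()
        self.wfile.write('got: %s\n' % line)

server = SocketServer.TCPServer(('localhost', 9999), PickHandler)
server.serve_forever()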
|
Creating connection between two computers in python
|
The question: How do I create a python application that can connect and send packets over the internet to another computer running the same application? Is there any existing code/library I could use?
The background: I am pretty new to programming (HS senior). I've created a lot of simple things in python but I've recently decided to start on a bigger project. I'm considering creating a Magic: the Gathering booster draft simulator, but I'm not sure if it is feasible given my skill set so I'm asking around before I get started. The application would need to send data between computers about which cards are being picked/passed.
Thanks!
|
[
"Twisted is a python event-driven networking engine licensed under MIT. Means that a single machine can communicate with one or more other machines, while doing other things between data being received and sent, all asynchronously, and running a in a single thread/process.\nIt supports many protocols out of the box, so you can just as well using an existing one. That's better because you get support for the protocol from 3rd party software (i.e. using HTTP for communication means middleware software that uses HTTP will be compatible: proxies etc.)\nIt also makes easy to create your own communication protocol, if that's what you want.\nThe documentation is filled with examples.\n",
"The standard library includes SocketServer (documented here), which might do what you want.\nHowever I wonder if a better solution might be to use a message queue. Lots of good implementations already exist, including Python interfaces. I've used RabbitMQ before. The idea is that the computers both subscribe to the queue, and can either post or listen for events. \n",
"A great place to start looking is the Python Standard Library. In particular there are a few sections that will be relevant:\n\n18. Interprocess Communication and Networking\n19. Internet Data Handling\n21. Internet Protocols and Support\n\nSince you mentioned that you have no experience with this, I'd suggest starting with a HTTP based implementation. The HTTP protocol is fairly simple and easy to work with. In addition, there are nice frameworks to support this operation, such as webpy for the server and HTTPLib from the standard library for the client.\nIf you really want to dive into networking, then a socket based implementation might be educational. This will teach you the underlying concepts that are used in lots of networking code while resulting in an interface that is similar to a file stream.\n",
"Also, check out Pyro (Python remoting objects). Se this answer for more details. \nPyro basically allows you to publish Python object instances as services that can be called remotely. Pyro is probably the easiest way to implement Python-to-python process communication.\n",
"It's also worth looking at Kamaelia for this sort of thing - it's original usecase was network systems, and makes building such things relatively intuitive.\nSome links: Overview of basic TCP system, Simple Chat server, Building a layered protocol, walk-through of how to evolve new components. Other extreme: P2P radio system: source, peer.\nIf it makes any difference, we've tested whether the system is accessible to novices through involvement in google summer of code 3 years in a row, actively taking on both experienced and inexperienced developers. All of them managed to build useful systems.\nEssentially, if you've ever played with unix pipelines the ideas should be familiar.\nCaveat: I wrote major chunks of Kamaelia :-)\nIf you want to learn how to do these things though, playing with a few different approaches makes sense, and you should definitely check out Twisted (the standard answer to this question), Pyro & the standard library tools. Each has a different approach, and learning them will definitely benefit you!\nHowever, like nosklo, I would recommend against using the socket library directly and use a library instead - simply because it is much much harder to get sockets programming correct than people tend to realise.\n",
"Communication will take place with sockets, one way or another. Just a question of whether you use existing higher-level libraries, or roll your own.\nIf you're doing this as a learning experience, probably want to start as low-level as you can, to see the real nuts and bolts. Which means you probably want to start with a SocketServer, using a TCP connection (TCP is basically guaranteed delivery of data; UDP is not).\nGoogle for some simple example code. Setting one up is very easy. But you will have to define all the details of your communications protocol: which end sends when and what, which end listens and when, what exactly the listener will expect, does it reply to confirm receipt, etc.\n"
] |
[
15,
7,
5,
2,
2,
1
] |
[] |
[] |
[
"networking",
"python"
] |
stackoverflow_0000936625_networking_python.txt
|
Q:
Benefit of installing Django from .deb versus .tar.gz?
I'm starting Django development, and I can either install it from the .deb using
$ apt-get install python-django
on my Ubuntu machine, or I can download the .tar.gz from djangoproject.com, and start with that.
What are the benefits and drawbacks of each approach?
A:
Using apt-get lets your system keep track of the install (e.g. if you want to uninstall, upgrade, or the like, later). Installing from source (.tar.gz or otherwise) puts you in charge of what's what and where -- you can have multiple versions installed at various locations, etc, but there's no easy "uninstall" and the like. Personally I prefer to install by my OS's supported method (apt-get and the like) for packages I think of as secondary or auxiliary, directly from svn/hg/&c for ones I'm contributing to or otherwise want to keep the closest control on, and .tar.gz (or better when available .tar.bz2;-) "snapshots" and "source releases" that are somewhere in the middle...
A:
The best way to install is to check out the code, whichever changeset (branch/tag) you want, and define a symbolic link to it.
Checkout the version you want:
# For trunk
svn co http://code.djangoproject.com/svn/django/trunk/ django-trunk
# For a tag, 1.02 release
svn co http://code.djangoproject.com/svn/django/tag/1.02 django-1.02
# To update the trunk
cd django-trunk
svn up
Then define symbolic link
ln -fs ~/django-1.02/django /usr/lib/python2.5/site-packages/django

If you want to test your code against the latest trunk, just redefine the symbolic link:
ln -fs ~/django-trunk/django /usr/lib/python2.5/site-packages/django
The package managers aptitude and apt-get are good for auto-updating software you don't really bother about developing with every day, like media players and browsers. For stuff you code with every day, full control of versions is needed, and you get that only from source.
A:
Using apt-get you'll get better uninstall support via the package manager and it can also install dependencies for you. If you install with apt-get you might get automatic updates, which is very nice for security patches.
With the tar you might get a newer version and you might get the opportunity to tailor the compile flags. A build could be more optimized for your particular processor, but since it's python that doesn't matter in this case.
A:
Getting Django from your Ubuntu repository gives you the older "stable" version. This may be fine with you, but I believe most developers prefer sticking with latest code available in the trunk to get more features.
IMHO the cleanest solution is not to install the .tar.gz/SVN version with a straightforward sudo python setup.py install (or easy_install), but to make a .deb package. This way you should get the maximum benefits: 1) all the bleeding edge features you want 2) a proper Debian/Ubuntu package, which you may easily uninstall, upgrade and deploy to any number of Debian machines.
Here's a quick and dirty way how to do it:
#
# This is dirty (you have been warned) way to quickly
# make new Django .deb package from SVN trunk for personal use.
#
apt-get source python-django
apt-get build-dep python-django
svn co http://code.djangoproject.com/svn/django/trunk/ django-trunk
DJANGO_SVN_REVISION=`LC_ALL=C svn info django-trunk \
| grep ^Revision: | awk '{ print $2 }'`
cp -R python-django-*/debian django-trunk/
cd django-trunk
dch --newversion=1.1-1ubuntu1~svn${DJANGO_SVN_REVISION} \
"Non-maintainer quick-and-dirty update to SVN r${DJANGO_SVN_REVISION}"
dpkg-buildpackage
# Have a good sip of tea, coffee or whatever you prefer.
# Because of tests, this is going to take quite a while.
# You may consider disabling (this is bad!) tests by commenting out
# line mentioning "runtests.py" in debian/rules.
cd ..
dpkg -i python-django_*.deb
This is not even really guaranteed to work (and I'm not even really sure about proper package version naming), but I've tried it myself and it worked for me.
A:
I've always installed using the dev version. (Instructions)
This makes updating really easy and gives you all the fancy features in the /dev/ docs. I would suggest you try going this route if possible (if anything it gives you an idea of how site-packages work).
Note: Ubuntu 9.04's recent move to dist-packages from site-packages (8.04) made this a bit confusing; I had to recreate the link.
A:
I know that with Debian, and probably some other distros, the version of Django in the package manager is the 0.9 branch, not the 1.X branch. Definitely something you want to avoid.
|
Benefit of installing Django from .deb versus .tar.gz?
|
I'm starting Django development, and I can either install it from the .deb using
$ apt-get install python-django
on my Ubuntu machine, or I can download the .tar.gz from djangoproject.com, and start with that.
What are the benefits and drawbacks of each approach?
|
[
"Using apt-get lets your system keep track of the install (e.g. if you want to disinstall, upgrade, or the like, late). Installing from source (.tar.gz or otherwise) puts you in charge of what's what and where -- you can have multiple versions installed at various locations, etc, but there's no easy \"uninstall\" and the like. Personally I prefer to install by my OS's supported method (apt-get and the like) for packages I think as secondary or auxiliary, directly from svn/hg/&c for ones I'm contributing to or otherwise want to keep the closest control on, and .tar.gz (or better when available .tar.bz2;-) \"snapshots\" and \"source releases\" that are s/where in the middle...\n",
"The best way to install is to check out the code, which ever the changeset (branch/tag) you want, and define a symbolic link to it\nCheckout the version you want:\n# For trunk\nsvn co http://code.djangoproject.com/svn/django/trunk/ django-trunk \n# For a tag, 1.02 release\nsvn co http://code.djangoproject.com/svn/django/tag/1.02 django-1.02\n# To update the trunk\ncd django-trunk\nsvn up\n\nThen define symbolic link\nln -fs /usr/lib/python2.5/site-packages/django/* ~/django-1.02/\n\nIf you want to test your code in the latest release, just redefine the symbolic link:\nln -fs /usr/lib/python2.5/site-packages/django/* ~/django-trunk/\n\nThe package managers aptitude and apt-get are good for auto updating those software you don't really bother about developing with every day, like media players, browsers. For stuff U code with everyday, full control of versions is needed, you get that only by source.\n",
"Using apt-get you'll get better uninstall support via the package manager and it can also install dependencies for you. If you install with apt-get you might get automatic updates, which is very nice for security patches.\nWith the tar you might get a newer version and you might get the opportunity to tailor the compile flags. A build could be more optimized for your particular processor, but since it's python that doesn't matter in this case.\n",
"Getting Django from your Ubuntu repository gives you the older \"stable\" version. This may be fine with you, but I believe most developers prefer sticking with latest code available in the trunk to get more features.\nIMHO the cleanest solution is not to install .tar.gz/SVN version with straightforward sudo python setup.py install (or use easy-install) but to make a .deb package. This way you should get the maximum benefits: 1) all the bleeding edge features you want 2) proper Debian/Ubuntu package, which you may easily uninstall, upgrade and deploy to any number of Debian machines.\nHere's a quick and dirty way how to do it:\n#\n# This is dirty (you have been warned) way to quickly\n# make new Django .deb package from SVN trunk for personal use.\n#\napt-get source python-django\napt-get build-dep python-django\nsvn co http://code.djangoproject.com/svn/django/trunk/ django-trunk\nDJANGO_SVN_REVISION=`LC_ALL=C svn info django-trunk \\\n | grep ^Revision: | awk '{ print $2 }'`\ncp -R python-django-*/debian django-trunk/\ncd django-trunk\ndch --newversion=1.1-1ubuntu1~svn${DJANGO_SVN_REVISION} \\\n \"Non-maintainer quick-and-dirty update to SVN r${DJANGO_SVN_REVISION}\"\ndpkg-buildpackage\n# Have a good sip of tea, coffee or whatever you prefer.\n# Because of tests, this is going to take quite a while.\n# You may consider disabling (this is bad!) tests by commenting out\n# line mentioning \"runtests.py\" in debian/rules.\ncd ..\ndpkg -i python-django_*.deb\n\nThis is not even really guarranteed to work (and I'm not really sure even about proper package version naming), but I've tried it myself and it worked for me.\n",
"I've always installed using the dev version. (Instructions)\nThis makes updating really easy and gives you all the fancy features in the /dev/ docs. I would suggest you try going this route if possible (if anything it gives you an idea of how site-packages work).\nNote: ubuntu 9.04's recent move to dist-packages from site-packages (8.04) made this a bit confusing, had to recreate the link.\n",
"I know with debian and probably some other distros, the version of django in the package manager is the 0.9 branch, not the 1.X branch. Definately something you want to avoid.\n"
] |
[
8,
6,
4,
1,
0,
0
] |
[] |
[] |
[
"apt_get",
"django",
"python",
"ubuntu"
] |
stackoverflow_0000943242_apt_get_django_python_ubuntu.txt
|
Q:
How to extract nested tables from HTML?
I have an HTML file (encoded in utf-8). I open it with codecs.open(). The file structure is:
<html>
// header
<body>
// some text
<table>
// some rows with cells here
// some cells contains tables
</table>
// maybe some text here
<table>
// a form and other stuff
</table>
// probably some more text
</body></html>
I need to retrieve only the first table (discarding the one with the form). Omit all input before the first <table> and after the corresponding </table>. Some cells also contain paragraphs, bold text and scripts. There is no more than one nested table per row of the main table.
How can I extract it to get a list of rows, where each element holds a cell's plain data (a unicode string) or, for a nested table, a list of its rows? There's no more than 1 level of nesting.
I tried HTMLParser, pyparsing and the re module, but can't get this working.
I'm quite new to Python.
A:
Try Beautiful Soup.
In principle you need to use a real parser (which Beautiful Soup is); regexes cannot deal with nested elements, for computer-sciencey reasons (finite state machines can't parse context-free grammars, IIRC).
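A rough sketch with the Beautiful Soup 3 API (it assumes rows sit directly under the table; add a tbody hop if your markup has one):
import codecs
from BeautifulSoup import BeautifulSoup

def parse_table(table):
    rows = []
    for tr in table.findAll('tr', recursive=False):
        row = []
        for td in tr.findAll('td', recursive=False):
            inner = td.find('table')
            if inner is None:
                # plain cell: join all text fragments into one unicode string
                row.append(u''.join(td.findAll(text=True)))
            else:
                row.append(parse_table(inner))
        rows.append(row)
    return rows

soup = BeautifulSoup(codecs.open('test.html', encoding='utf-8').read())
data = parse_table(soup.body.table)  # the first <table> under <body>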
A:
You may like lxml. I'm not sure I really understood what you want to do with that structure, but maybe this example will help...
import lxml.html
def process_row(row):
for cell in row.xpath('./td'):
inner_tables = cell.xpath('./table')
if len(inner_tables) < 1:
yield cell.text_content()
else:
yield [process_table(t) for t in inner_tables]
def process_table(table):
return [process_row(row) for row in table.xpath('./tr')]
html = lxml.html.parse('test.html')
first_table = html.xpath('//body/table[1]')[0]
data = process_table(first_table)
A:
If the HTML is well-formed you can parse it into a DOM tree and use XPath to extract the table you want. I usually use lxml for parsing XML, and it can parse HTML as well.
The XPath for pulling out the first table would be "//table[1]".
|
How to extract nested tables from HTML?
|
I have an HTML file (encoded in utf-8). I open it with codecs.open(). The file structure is:
<html>
// header
<body>
// some text
<table>
// some rows with cells here
// some cells contains tables
</table>
// maybe some text here
<table>
// a form and other stuff
</table>
// probably some more text
</body></html>
I need to retrieve only the first table (discarding the one with the form). Omit all input before the first <table> and after the corresponding </table>. Some cells also contain paragraphs, bold text and scripts. There is no more than one nested table per row of the main table.
How can I extract it to get a list of rows, where each element holds a cell's plain data (a unicode string) or, for a nested table, a list of its rows? There's no more than 1 level of nesting.
I tried HTMLParser, pyparsing and the re module, but can't get this working.
I'm quite new to Python.
|
[
"Try beautiful soup\nIn principle you need to use a real parser (which Beaut. Soup is), regex cannot deal with nested elements, for computer sciencey reasons (finite state machines can't parse context-free grammars, IIRC)\n",
"You may like lxml. I'm not sure I really understood what you want to do with that structure, but maybe this example will help...\nimport lxml.html\n\ndef process_row(row):\n for cell in row.xpath('./td'):\n inner_tables = cell.xpath('./table')\n if len(inner_tables) < 1:\n yield cell.text_content()\n else:\n yield [process_table(t) for t in inner_tables]\n\ndef process_table(table):\n return [process_row(row) for row in table.xpath('./tr')]\n\nhtml = lxml.html.parse('test.html')\nfirst_table = html.xpath('//body/table[1]')[0]\n\ndata = process_table(first_table))\n\n",
"If the HTML is well-formed you can parse it into a DOM tree and use XPath to extract the table you want. I usually use lxml for parsing XML, and it can parse HTML as well.\nThe XPath for pulling out the first table would be \"//table[1]\".\n"
] |
[
5,
4,
2
] |
[] |
[] |
[
"extract",
"html",
"html_table",
"python"
] |
stackoverflow_0000944860_extract_html_html_table_python.txt
|
Q:
Numpy: Should I use newaxis or None?
In numpy one can use the 'newaxis' object in the slicing syntax to create an axis of length one, e.g.:
import numpy as np
print np.zeros((3,5))[:,np.newaxis,:].shape
# shape will be (3,1,5)
The documentation states that one can also use None instead of newaxis, the effect is exactly the same.
Is there any reason to choose one over the other? Is there any general preference or style guide? My impression is that newaxis is more popular, probably because it is more explicit. So is there any reason why None is allowed?
A:
None is allowed because numpy.newaxis is merely an alias for None.
In [1]: import numpy
In [2]: numpy.newaxis is None
Out[2]: True
The authors probably chose it because they needed a convenient constant, and None was available.
As for why you should prefer newaxis over None: mainly it's because it's more explicit, and partly because someday the numpy authors might change it to something other than None. (They're not planning to, and probably won't, but there's no good reason to prefer None.)
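And indeed the two spellings behave identically:
>>> import numpy as np
>>> a = np.zeros((3, 5))
>>> a[:, None, :].shape
(3, 1, 5)
>>> a[:, np.newaxis, :].shape
(3, 1, 5)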
|
Numpy: Should I use newaxis or None?
|
In numpy one can use the 'newaxis' object in the slicing syntax to create an axis of length one, e.g.:
import numpy as np
print np.zeros((3,5))[:,np.newaxis,:].shape
# shape will be (3,1,5)
The documentation states that one can also use None instead of newaxis, the effect is exactly the same.
Is there any reason to choose one over the other? Is there any general preference or style guide? My impression is that newaxis is more popular, probably because it is more explicit. So is there any reason why None is allowed?
|
[
"None is allowed because numpy.newaxis is merely an alias for None.\nIn [1]: import numpy\n\nIn [2]: numpy.newaxis is None\nOut[2]: True\n\nThe authors probably chose it because they needed a convenient constant, and None was available.\nAs for why you should prefer newaxis over None: mainly it's because it's more explicit, and partly because someday the numpy authors might change it to something other than None. (They're not planning to, and probably won't, but there's no good reason to prefer None.)\n"
] |
[
116
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0000944863_numpy_python.txt
|
Q:
pygtk gui freezes with pyjack thread
I have a program that records audio from a FireWire device (FA-66) with a Jack connection. The interface is created with pygtk and the recording with py-jack (http://sourceforge.net/projects/py-jack/). The recording is done in a different thread because the GUI must be used at the same time for viewing results from the audio.
The problem is that when I start the recording thread, the GUI becomes very slow to respond. I call the gtk.gdk function threads_init() at the beginning of the main thread. If I've got it right, I don't need threads_enter() and threads_leave(), because the recording doesn't affect the GUI. Please correct me if I'm wrong.
The function jack.process() records audio from three microphones. If I replace it, for example, with time.sleep(2), everything works ok.
What is the best way to create threading in this case? Why does the jack.process freeze the GUI? Does it take all the cpu time or something?
Samples of my code below:
soundrecorder.py:
...
def start(self):
Thread(target=self._start).start()
def _start(self):
while self.eventhandler.record.isSet():
data = self._jackRecord(self.sample_length)
self.datahandler.queue.put(data)
def _jackRecord(self, length):
capture = Numeric.zeros((self.inputs, int(self.sample_rate * length)), 'f')
output = Numeric.zeros((self.inputs, self.buffer_size), 'f')
i = 0
while i < capture.shape[1] - self.buffer_size:
try:
jack.process(output, capture[:,i:i+self.buffer_size])
i += self.buffer_size
except:
pass
return capture
eventhandler.py:
recordStart() and recordStop() are simply callback functions that are called when start and stop buttons are pressed.
...
def recordStart(self, widget, data=None):
if not self.record.isSet():
self.record.set()
self.soundrecorder = SoundRecorder(self, self.datahandler)
self.soundrecorder.connect()
self.soundrecorder.start()
def recordStop(self, widget, data=None):
if self.record.isSet():
self.record.clear()
del(self.soundrecorder)
A:
You misunderstand how threads work.
Threads don't help you in this case.
"Then when one sample is recorded, it will be analyzed and the results are
shown in the GUI. At the same time the
next sample is already being
recorded."
WRONG. Threads don't do two things at the same time. In Python there's a global lock (the GIL) that prevents two threads from running Python code or touching Python objects at the same time. And besides that, two things don't ever happen at the same time if you don't have two CPUs or cores. The threading mechanism just switches between them, executing a fixed number of instructions of each at a time.
Threads also add processing, memory and code complexity overhead for no benefit. Python code using threads runs slower and has lower performance than if it were single-threaded. There are only a few exceptions to this rule, and your case is not one of them.
You probably want to rewrite your recording loop as a callback and integrate it with the GTK loop (you'll get better performance than using threads).
For that, use a gobject.idle_add with a big priority.
If you want to run two things really at "the same time", using two processors/cores, you want to launch another process. Launch a process to collect data and transmit it via some inter-process communication mechanism to the other process that is analyzing and plotting data. The multiprocessing module can help you with that.
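For the callback route, a sketch of the shape it could take (grab_one_buffer and still_recording stand in for a short jack.process() chunk and the stop flag; this is not a drop-in replacement for the original thread):
import gobject

def record_chunk(recorder, queue):
    # do one small unit of work per main-loop turn so the GUI stays
    # responsive; returning True keeps the callback scheduled
    queue.put(recorder.grab_one_buffer())
    return recorder.still_recording()

gobject.idle_add(record_chunk, recorder, queue)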
|
pygtk gui freezes with pyjack thread
|
I have a program that records audio from a FireWire device (FA-66) with a Jack connection. The interface is created with pygtk and the recording with py-jack (http://sourceforge.net/projects/py-jack/). The recording is done in a different thread because the GUI must be used at the same time for viewing results from the audio.
The problem is that when I start the recording thread, the GUI becomes very slow to respond. I call the gtk.gdk function threads_init() at the beginning of the main thread. If I've got it right, I don't need threads_enter() and threads_leave(), because the recording doesn't affect the GUI. Please correct me if I'm wrong.
The function jack.process() records audio from three microphones. If I replace it, for example, with time.sleep(2), everything works ok.
What is the best way to create threading in this case? Why does the jack.process freeze the GUI? Does it take all the cpu time or something?
Samples of my code below:
soundrecorder.py:
...
def start(self):
Thread(target=self._start).start()
def _start(self):
while self.eventhandler.record.isSet():
data = self._jackRecord(self.sample_length)
self.datahandler.queue.put(data)
def _jackRecord(self, length):
capture = Numeric.zeros((self.inputs, int(self.sample_rate * length)), 'f')
output = Numeric.zeros((self.inputs, self.buffer_size), 'f')
i = 0
while i < capture.shape[1] - self.buffer_size:
try:
jack.process(output, capture[:,i:i+self.buffer_size])
i += self.buffer_size
except:
pass
return capture
eventhandler.py:
recordStart() and recordStop() are simply callback functions that are called when start and stop buttons are pressed.
...
def recordStart(self, widget, data=None):
if not self.record.isSet():
self.record.set()
self.soundrecorder = SoundRecorder(self, self.datahandler)
self.soundrecorder.connect()
self.soundrecorder.start()
def recordStop(self, widget, data=None):
if self.record.isSet():
self.record.clear()
del(self.soundrecorder)
|
[
"You misunderstand how threads work. \nThreads don't help you in this case. \n\n\"Then when one sample is recorded, it will be analyzed and the results are\n shown in the GUI. At the same time the\n next sample is already being\n recorded.\"\n\nWRONG. Threads don't do two things at the same time. In python there's a global lock that prevent two threads from running python code or touching python objects at the same time. And besides that, two things don't ever happen at the same time if you don't have two CPUs or cores. The threading mechanism just switches between them executing a fixed number of instructions of each at a time.\nThreads also add a processing, memory and code complexibility overhead for no benefit. Python code using threads run slower and have lower performance than if it was single-threaded. There are only a few exceptions for this rule and your case is not one of them.\nYou probably want to rewrite your recording loop as a callback and integrate it with the GTK loop (you'll get better performance than using threads).\nFor that, use a gobject.idle_add with a big priority.\nIf you want to run two things really at \"the same time\", using two processors/cores, you want to launch another process. Launch a process to collect data and transmit it via some inter-process communication mechanism to the other process that is analizing and plotting data. multiprocessing module can help you with that.\n"
] |
[
2
] |
[] |
[] |
[
"multithreading",
"pygtk",
"python"
] |
stackoverflow_0000944161_multithreading_pygtk_python.txt
|
Q:
Why doesn't anyone care about this MySQLdb bug? is it a bug?
TL;DR: I've supplied a patch for a bug I found and I've got 0 feedback on it. I'm wondering if it's a bug at all. This is not a rant. Please read this, and if you may be affected by it, check the fix.
I found and reported this MySQLdb bug some weeks ago (edit: 6 weeks ago), sent a patch, posted it on a couple of ORM forums, mailed the MySQLdb author, mailed some people talking about handling deadlocks, mailed ORM authors, and I'm still waiting for any kind of feedback.
This bug caused me a lot of grief, and the only explanations I can find for the lack of feedback are that either no one uses "SELECT ... FOR UPDATE" in Python with MySQL, or that this is not a bug.
Basically the problem is that deadlocks and "lock wait timeout" exceptions are NOT being raised when issuing a "SELECT ... FOR UPDATE" using a MySQLdb cursor.
Instead, the statement fails silently and returns an empty resultset, which any application will interpret as if there were no rows matched.
I've tested the SVN version and it's still affected. Tested on the default installations of Ubuntu Intrepid, Jaunty and Debian Lenny and those are affected too. The current version installed by easy_install (1.2.3c1) is affected.
This affects SQLAlchemy and SQLObject too, and probably any ORM that uses MySQLdb cursors is affected as well.
This script can reproduce a deadlock that will trigger the bug (just change the user/pass in get_conn, it will create the necessary tables):
import time
import threading
import traceback
import logging
import MySQLdb
def get_conn():
    return MySQLdb.connect(host='localhost', db='TESTS',
                           user='tito', passwd='testing123')

class DeadlockTestThread(threading.Thread):
    def __init__(self, order):
        super(DeadlockTestThread, self).__init__()
        self.first_select_done = threading.Event()
        self.do_the_second_one = threading.Event()
        self.order = order

    def log(self, msg):
        logging.info('%s: %s' % (self.getName(), msg))

    def run(self):
        db = get_conn()
        c = db.cursor()
        c.execute('BEGIN;')
        query = 'SELECT * FROM locktest%i FOR UPDATE;'
        try:
            try:
                c.execute(query % self.order[0])
                self.first_select_done.set()
                self.do_the_second_one.wait()
                c.execute(query % self.order[1])
                self.log('2nd SELECT OK, we got %i rows' % len(c.fetchall()))
                c.execute('SHOW WARNINGS;')
                self.log('SHOW WARNINGS: %s' % str(c.fetchall()))
            except:
                self.log('Failed! Rolling back')
                c.execute('ROLLBACK;')
                raise
            else:
                c.execute('COMMIT;')
        finally:
            c.close()
            db.close()

def init():
    db = get_conn()
    # Create the tables.
    c = db.cursor()
    c.execute('DROP TABLE IF EXISTS locktest1;')
    c.execute('DROP TABLE IF EXISTS locktest2;')
    c.execute('''CREATE TABLE locktest1 (
        a int(11), PRIMARY KEY(a)
    ) ENGINE=innodb;''')
    c.execute('''CREATE TABLE locktest2 (
        a int(11), PRIMARY KEY(a)
    ) ENGINE=innodb;''')
    c.close()
    # Insert some data.
    c = db.cursor()
    c.execute('BEGIN;')
    c.execute('INSERT INTO locktest1 VALUES (123456);')
    c.execute('INSERT INTO locktest2 VALUES (123456);')
    c.execute('COMMIT;')
    c.close()
    db.close()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    init()
    t1 = DeadlockTestThread(order=[1, 2])
    t2 = DeadlockTestThread(order=[2, 1])
    t1.start()
    t2.start()
    # Wait till both threads did the 1st select.
    t1.first_select_done.wait()
    t2.first_select_done.wait()
    # Let thread 1 continue; it will wait for the lock
    # at this point.
    t1.do_the_second_one.set()
    # Just make sure thread 1 is waiting for the lock.
    time.sleep(0.1)
    # This will trigger the deadlock and thread-2 will
    # fail silently, getting 0 rows.
    t2.do_the_second_one.set()
    t1.join()
    t2.join()
The output of running this on an unpatched MySQLdb is this:
$ python bug_mysqldb_deadlock.py
INFO:root:Thread-2: 2nd SELECT OK, we got 0 rows
INFO:root:Thread-2: SHOW WARNINGS: (('Error', 1213L, 'Deadlock found when trying to get lock; try restarting transaction'),)
INFO:root:Thread-1: 2nd SELECT OK, we got 1 rows
INFO:root:Thread-1: SHOW WARNINGS: ()
You can see that Thread-2 got 0 rows from a table we know has 1, and only by issuing a "SHOW WARNINGS" statement can you see what happened.
If you check "SHOW ENGINE INNODB STATUS" you will see this line in the log "*** WE ROLL BACK TRANSACTION (2)", everything that happens after the failing select on Thread-2 is on a half rolled back transaction.
After applying the patch (check the ticket for it, url below), this is the output of running the script:
$ python bug_mysqldb_deadlock.py
INFO:root:Thread-2: Failed! Rolling back
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.4/threading.py", line 442, in __bootstrap
    self.run()
  File "bug_mysqldb_deadlock.py", line 33, in run
    c.execute(query % self.order[1])
  File "/home/koba/Desarollo/InetPub/IBSRL/VirtualEnv-1.0-p2.4/lib/python2.4/site-packages/MySQL_python-1.2.2-py2.4-linux-x86_64.egg/MySQLdb/cursors.py", line 178, in execute
    self.errorhandler(self, exc, value)
  File "/home/koba/Desarollo/InetPub/IBSRL/VirtualEnv-1.0-p2.4/lib/python2.4/site-packages/MySQL_python-1.2.2-py2.4-linux-x86_64.egg/MySQLdb/connections.py", line 35, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
INFO:root:Thread-1: 2nd SELECT OK, we got 1 rows
INFO:root:Thread-1: SHOW WARNINGS: ()
In this case an exception is raised on Thread-2 and it rolls back properly.
So, what's your opinion? Is this a bug that no one cares about, or am I just crazy?
This is the ticket I opened on SF: http://sourceforge.net/tracker/index.php?func=detail&aid=2776267&group_id=22307&atid=374932
A:
Why doesn’t anyone care about this
MySQLdb bug?
Bugs can take a while to prioritize, research, verify the problem, find a fix, test the fix, and make sure the fix does not break anything else. I would suggest you deploy a workaround, since it could take some time for this fix to arrive for you.
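A possible stopgap (just a sketch; locked_select is a hypothetical helper, not part of MySQLdb) is to fetch the rows and then inspect SHOW WARNINGS yourself, re-raising deadlocks and lock wait timeouts by hand:
import MySQLdb

def locked_select(cursor, sql):
    # Run a SELECT ... FOR UPDATE and surface the errors that the
    # unpatched driver turns into silent warnings.
    cursor.execute(sql)
    rows = cursor.fetchall()
    cursor.execute('SHOW WARNINGS;')
    for level, code, message in cursor.fetchall():
        if code in (1213, 1205):  # deadlock / lock wait timeout
            raise MySQLdb.OperationalError(code, message)
    return rows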
|
Why doesn't anyone care about this MySQLdb bug? is it a bug?
|
TL;DR: I've supplied a patch for a bug I found and I've got 0 feedback on it. I'm wondering if it's a bug at all. This is not a rant. Please read this and if you may be affected by it check the fix.
I have found and reported this MySQLdb bug some weeks ago (edit: 6 weeks ago), sent a patch, posted it on a couple of ORM's forums, mailed the MySQLdb author, mailed some people talking about handling deadlocks, mailed ORM authors and I'm still waiting for any kind of feedback.
This bug caused me a lot of grief, and the only explanations I can come up with for the lack of feedback are that either no one uses "SELECT ... FOR UPDATE" in Python with MySQL, or that this is not a bug.
Basically the problem is that deadlocks and "lock wait timeout" exceptions are NOT being raised when issuing a "SELECT ... FOR UPDATE" using a MySQLdb cursor.
Instead, the statement fails silently and returns an empty resultset, which any application will interpret as if there were no rows matched.
I've tested the SVN version and it's still affected. Tested on the default installations of Ubuntu Intrepid, Jaunty and Debian Lenny and those are affected too. The current version installed by easy_install (1.2.3c1) is affected.
This affects SQLAlchemy and SQLObject too, and probably any ORM that uses MySQLdb cursors is affected as well.
This script can reproduce a deadlock that will trigger the bug (just change the user/pass in get_conn, it will create the necessary tables):
import time
import threading
import traceback
import logging
import MySQLdb
def get_conn():
    return MySQLdb.connect(host='localhost', db='TESTS',
                           user='tito', passwd='testing123')

class DeadlockTestThread(threading.Thread):
    def __init__(self, order):
        super(DeadlockTestThread, self).__init__()
        self.first_select_done = threading.Event()
        self.do_the_second_one = threading.Event()
        self.order = order

    def log(self, msg):
        logging.info('%s: %s' % (self.getName(), msg))

    def run(self):
        db = get_conn()
        c = db.cursor()
        c.execute('BEGIN;')
        query = 'SELECT * FROM locktest%i FOR UPDATE;'
        try:
            try:
                c.execute(query % self.order[0])
                self.first_select_done.set()
                self.do_the_second_one.wait()
                c.execute(query % self.order[1])
                self.log('2nd SELECT OK, we got %i rows' % len(c.fetchall()))
                c.execute('SHOW WARNINGS;')
                self.log('SHOW WARNINGS: %s' % str(c.fetchall()))
            except:
                self.log('Failed! Rolling back')
                c.execute('ROLLBACK;')
                raise
            else:
                c.execute('COMMIT;')
        finally:
            c.close()
            db.close()

def init():
    db = get_conn()
    # Create the tables.
    c = db.cursor()
    c.execute('DROP TABLE IF EXISTS locktest1;')
    c.execute('DROP TABLE IF EXISTS locktest2;')
    c.execute('''CREATE TABLE locktest1 (
        a int(11), PRIMARY KEY(a)
    ) ENGINE=innodb;''')
    c.execute('''CREATE TABLE locktest2 (
        a int(11), PRIMARY KEY(a)
    ) ENGINE=innodb;''')
    c.close()
    # Insert some data.
    c = db.cursor()
    c.execute('BEGIN;')
    c.execute('INSERT INTO locktest1 VALUES (123456);')
    c.execute('INSERT INTO locktest2 VALUES (123456);')
    c.execute('COMMIT;')
    c.close()
    db.close()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    init()
    t1 = DeadlockTestThread(order=[1, 2])
    t2 = DeadlockTestThread(order=[2, 1])
    t1.start()
    t2.start()
    # Wait till both threads did the 1st select.
    t1.first_select_done.wait()
    t2.first_select_done.wait()
    # Let thread 1 continue; it will wait for the lock
    # at this point.
    t1.do_the_second_one.set()
    # Just make sure thread 1 is waiting for the lock.
    time.sleep(0.1)
    # This will trigger the deadlock and thread-2 will
    # fail silently, getting 0 rows.
    t2.do_the_second_one.set()
    t1.join()
    t2.join()
The output of running this on an unpatched MySQLdb is this:
$ python bug_mysqldb_deadlock.py
INFO:root:Thread-2: 2nd SELECT OK, we got 0 rows
INFO:root:Thread-2: SHOW WARNINGS: (('Error', 1213L, 'Deadlock found when trying to get lock; try restarting transaction'),)
INFO:root:Thread-1: 2nd SELECT OK, we got 1 rows
INFO:root:Thread-1: SHOW WARNINGS: ()
You can see that Thread-2 got 0 rows from a table we know has 1, and only by issuing a "SHOW WARNINGS" statement can you see what happened.
If you check "SHOW ENGINE INNODB STATUS" you will see this line in the log "*** WE ROLL BACK TRANSACTION (2)", everything that happens after the failing select on Thread-2 is on a half rolled back transaction.
After applying the patch (check the ticket for it, url below), this is the output of running the script:
$ python bug_mysqldb_deadlock.py
INFO:root:Thread-2: Failed! Rolling back
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.4/threading.py", line 442, in __bootstrap
    self.run()
  File "bug_mysqldb_deadlock.py", line 33, in run
    c.execute(query % self.order[1])
  File "/home/koba/Desarollo/InetPub/IBSRL/VirtualEnv-1.0-p2.4/lib/python2.4/site-packages/MySQL_python-1.2.2-py2.4-linux-x86_64.egg/MySQLdb/cursors.py", line 178, in execute
    self.errorhandler(self, exc, value)
  File "/home/koba/Desarollo/InetPub/IBSRL/VirtualEnv-1.0-p2.4/lib/python2.4/site-packages/MySQL_python-1.2.2-py2.4-linux-x86_64.egg/MySQLdb/connections.py", line 35, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
INFO:root:Thread-1: 2nd SELECT OK, we got 1 rows
INFO:root:Thread-1: SHOW WARNINGS: ()
In this case an exception is raised on Thread-2 and it rolls back properly.
So, what's your opinion? Is this a bug that no one cares about, or am I just crazy?
This is the ticket I opened on SF: http://sourceforge.net/tracker/index.php?func=detail&aid=2776267&group_id=22307&atid=374932
|
[
"\nWhy doesn’t anyone care about this\n MySQLdb bug?\n\nbugs can take a while to prioritize, research, verify the problem, find a fix, test the fix, make sure the fix fix does not break anything else. I would suggest you deploy a work around, since it could take some time for this fix to arrive for you.\n"
] |
[
7
] |
[] |
[] |
[
"deadlock",
"mysql",
"python"
] |
stackoverflow_0000945482_deadlock_mysql_python.txt
|
Q:
what are the applications of the python reload function?
I have been wondering about the reload() function in python, which seems like it can lead to problems if used without care.
Why would you want to reload a module, rather than just stop/start python again?
I imagine one application might be to test changes to a module interactively.
A:
reload is useful for reloading code that may have changed in a Python module. Usually this means a plugin system.
Take a look at this link:
http://www.codexon.com/posts/a-better-python-reload
It will tell you the shortcomings of reload and a possible fix.
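A toy sketch of the idea (plugin here is a hypothetical module that you edit on disk while the interpreter keeps running):
import plugin

plugin.run()     # old behaviour
# ... edit plugin.py in your editor ...
reload(plugin)   # re-executes the module's code in place
plugin.run()     # new behaviour
Existing instances and callbacks still point at the old code, which is exactly the shortcoming the post above talks about.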
A:
I imagine one application might be to test changes to a module interactively.
That's really the only use for it. From the Python documentation:
This is useful if you have edited the module source file using an external editor and want to try out the new version without leaving the Python interpreter.
A:
Adding a link to my new reimport module that was released yesterday. This provides a more thorough reimport than the reload() builtin. With this reimport you can be sure that class changes and function updates will get reflected in all instances and callbacks.
http://code.google.com/p/reimport/
|
what are the applications of the python reload function?
|
I have been wondering about the reload() function in python, which seems like it can lead to problems if used without care.
Why would you want to reload a module, rather than just stop/start python again?
I imagine one application might be to test changes to a module interactively.
|
[
"reload is useful for reloading code that may have changed in a Python module. Usually this means a plugin system.\nTake a look at this link:\nhttp://www.codexon.com/posts/a-better-python-reload\nIt will tell you the shortcomings of reload and a possible fix.\n",
"\nI imagine one application might be to test changes to a module interactively.\n\nThat's really the only use for it. From the Python documentation:\n\nThis is useful if you have edited the module source file using an external editor and want to try out the new version without leaving the Python interpreter.\n\n",
"Adding a link to my new reimport module that was released yesterday. This provides a more thorough reimport than the reload() builtin. With this reimport you can be sure that class changes and function updates will get reflected in all instances and callbacks.\nhttp://code.google.com/p/reimport/\n"
] |
[
6,
4,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000856174_python.txt
|
Q:
How can I script the creation of a movie from a set of images?
I managed to get a set of images loaded using Python.
I'd like my script to take this series of images (in whatever format I need them), and create a video from them. The big limit in all this is that I am looking for something easy and simple to install. Ideally, using the standard OS X installation procedure:
download .dmg
click
move into the application folder
I do not want to expend a lot of effort to install the video editing program. Just something simple that works.
Questions
What format should I aim for? I need my video to be playable on Linux, Mac, and Windows systems. The images are graphs, so we are speaking of discrete images, not photographs. It should be pretty easy to compress it. There will be about 1000 images, so this will be a short movie.
What tools should I use to produce the actual video? I need to either do it directly from Python using a library designed for this purpose, or by scripting command-line tools called from Python.
A:
If you're not averse to using the command-line, there's the convert command from the ImageMagick package. It's available for Mac, Linux, Windows. See http://www.imagemagick.org/script/index.php.
It supports a huge number of image formats and you can output your movie as an mpeg file:
convert -quality 100 *.png outvideo.mpeg
or as animated gifs for uploading to webpages:
convert -set delay 3 -loop 0 -scale 50% *.png animation.gif
More options for the convert command available here: ImageMagick v6 Examples -
Animation Basics
A:
You may use OpenCV. And it can be installed on Mac. Also, it has a python interface.
I have slightly modified a program taken from here, but don't know if it compiles, and can't check it.
import opencv
from opencv.cv import *
from opencv.highgui import *

isColor = 1
fps = 25       # or 30, frames per second
frameW = 256   # images width
frameH = 256   # images height
writer = cvCreateVideoWriter("video.avi", -1,
                             fps, cvSize(frameW, frameH), isColor)

#-----------------------------
# Writing the video file:
#-----------------------------

nFrames = 70  # number of frames
for i in range(nFrames):
    img = cvLoadImage("image_number_%d.png" % i)  # specify filename and the extension
    # add the frame to the video
    cvWriteFrame(writer, img)

cvReleaseVideoWriter(writer)
A:
Do you have to use Python? There are other tools created just for this purpose, for example ffmpeg or mencoder.
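As a rough sketch (assuming the frames are named image_0001.png, image_0002.png, and so on; adjust the frame rate and codec to taste):
ffmpeg -r 25 -i image_%04d.png -vcodec mpeg4 output.avi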
|
How can I script the creation of a movie from a set of images?
|
I managed to get a set of images loaded using Python.
I'd like my script to take this series of images (in whatever format I need them), and create a video from them. The big limit in all this is that I am looking for something easy and simple to install. Ideally, using the standard OS X installation procedure:
download .dmg
click
move into the application folder
I do not want to expend a lot of effort to install the video editing program. Just something simple that works.
Questions
What format should I aim for? I need my video to be playable on Linux, Mac, and Windows systems. The images are graphs, so we are speaking of discrete images, not photographs. It should be pretty easy to compress it. There will be about 1000 images, so this will be a short movie.
What tools should I use to produce the actual video? I need to either do it directly from Python using a library designed for this purpose, or by scripting command-line tools called from Python.
|
[
"If you're not averse to using the command-line, there's the convert command from the ImageMagick package. It's available for Mac, Linux, Windows. See http://www.imagemagick.org/script/index.php.\nIt supports a huge number of image formats and you can output your movie as an mpeg file:\nconvert -quality 100 *.png outvideo.mpeg\n\nor as animated gifs for uploading to webpages:\nconvert -set delay 3 -loop 0 -scale 50% *.png animation.gif\n\nMore options for the convert command available here: ImageMagick v6 Examples -\nAnimation Basics\n",
"You may use OpenCV. And it can be installed on Mac. Also, it has a python interface.\nI have slightly modified a program taken from here, but don't know if it compiles, and can't check it.\nimport opencv\nfrom opencv.cv import *\nfrom opencv.highgui import *\n\nisColor = 1\nfps = 25 # or 30, frames per second\nframeW = 256 # images width\nframeH = 256 # images height\nwriter = cvCreateVideoWriter(\"video.avi\",-1, \nfps,cvSize(frameW,frameH),isColor)\n\n#-----------------------------\n#Writing the video file:\n#-----------------------------\n\nnFrames = 70; #number of frames\nfor i in range(nFrames):\n img = cvLoadImage(\"image_number_%d.png\"%i) #specify filename and the extension\n # add the frame to the video\n cvWriteFrame(writer,img)\n\ncvReleaseVideoWriter(writer) #\n\n",
"Do you have to use python? There are other tools that are created just for these purposes. For example, to use ffmpeg or mencoder.\n"
] |
[
21,
9,
4
] |
[] |
[] |
[
"macos",
"python",
"video"
] |
stackoverflow_0000945250_macos_python_video.txt
|
Q:
django Authentication using auth.views
User should be redirected to the Login page after registration and after logout. In both cases there must be a message displayed indicating what just happened.
Using django.contrib.auth.views.login, how do I send these {{ info }} messages?
A possible option would be to copy auth.views to a new registration module and include all the essential stuff. But that doesn't seem DRY enough.
What is the best approach?
Update: Question elaboration:
For normal cases when you want to indicate to some user the response of an action you can use
request.user.message_set.create()
This creates a message that is displayed in one of the templates and automatically deletes.
However, this message system only works for logged-in users who keep the same session id. In the case of registration the user is not authenticated, and in the case of logout the session changes, so this system cannot be used.
Add to that, the built-in login and logout functions from django.contrib.auth.views return an 'HttpResponseRedirect', which makes it impossible to add another variable to the template.
I tried setting things on the request object itself
request.info='Registered'
and check this in a different view
try:
    info = request.info
    del request.info
except:
    info = ''
#later
render_to_response('app/file',{'info':info})
Even this didn't work.
Clearly I can define a registered.html and add this static message there, but I was being lazy to write another template and trying to implement it DRY.
I realized that the cases were different for "registered" message and "logged out" message. And the DRY approach I used, I shall write as an answer.
A:
If the messages are static you can use your own templates for those views:
(r'^accounts/login/$', 'django.contrib.auth.views.login', {'template_name': 'myapp/login.html'}
From the docs.
A:
I think the best solution to this problem is to use a "flash"-type session-based messaging system. There are several floating around: django-flash seems really nice, I use django-session-messages which is very simple. Hopefully by the time we get to Django 1.2 this'll be baked-in.
A:
You have Request Context Processors to add this kind of information to the context of every template that gets rendered.
This is the "zero impact" way to do this kind of thing. You don't update any view functions, so it meets some definitions of DRY.
See http://docs.djangoproject.com/en/dev/ref/templates/api/#id1
First, write your own login.html template.
Second, write your own context function to provide any additional information that must be inserted into the template.
Third, update settings to add your context processor to the TEMPLATE_CONTEXT_PROCESSORS setting; a sketch of the last two steps follows.
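A minimal sketch (the names myapp and login_info are illustrative, not from the docs):
# myapp/context_processors.py
def login_info(request):
    # Pop a one-shot message out of the session, if any.
    return {'info': request.session.pop('info', '')}

# settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
    'django.core.context_processors.auth',
    'myapp.context_processors.login_info',
)
This only runs for templates rendered with a RequestContext, which the built-in auth views already use.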
|
django Authentication using auth.views
|
User should be redirected to the Login page after registration and after logout. In both cases there must be a message displayed indicating what just happened.
Using django.contrib.auth.views.login, how do I send these {{ info }} messages?
A possible option would be to copy auth.views to a new registration module and include all the essential stuff. But that doesn't seem DRY enough.
What is the best approach?
Update: Question elaboration:
For normal cases when you want to indicate to some user the response of an action you can use
request.user.message_set.create()
This creates a message that is displayed in one of the templates and automatically deletes.
However, this message system only works for logged-in users who keep the same session id. In the case of registration the user is not authenticated, and in the case of logout the session changes, so this system cannot be used.
Add to that, the built-in login and logout functions from django.contrib.auth.views return an 'HttpResponseRedirect', which makes it impossible to add another variable to the template.
I tried setting things on the request object itself
request.info='Registered'
and check this in a different view
try:
    info = request.info
    del request.info
except:
    info = ''
#later
render_to_response('app/file',{'info':info})
Even this didn't work.
Clearly I can define a registered.html and add this static message there, but I was being lazy to write another template and trying to implement it DRY.
I realized that the cases were different for "registered" message and "logged out" message. And the DRY approach I used, I shall write as an answer.
|
[
"If the messages are static you can use your own templates for those views:\n(r'^accounts/login/$', 'django.contrib.auth.views.login', {'template_name': 'myapp/login.html'}\n\nFrom the docs.\n",
"I think the best solution to this problem is to use a \"flash\"-type session-based messaging system. There are several floating around: django-flash seems really nice, I use django-session-messages which is very simple. Hopefully by the time we get to Django 1.2 this'll be baked-in.\n",
"You have Request Context Processors to add this kind of information to the context of every template that gets rendered.\nThis is the \"zero impact\" way to do this kind of thing. You don't update any view functions, so it meets some definitions of DRY.\nSee http://docs.djangoproject.com/en/dev/ref/templates/api/#id1\nFirst, write your own login.html template.\nSecond, write your own context function to provide any additional information that must be inserted into the template.\nThird, update settings to addy your context processor to the TEMPLATE_CONTEXT_PROCESSORS setting.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"authentication",
"django",
"django_authentication",
"python"
] |
stackoverflow_0000938427_authentication_django_django_authentication_python.txt
|
Q:
is there a multiple format specifier in Python?
I have a data table 44 columns wide that I need to write to file. I don't want to write:
outfile.write("%i,%f,%f,%f ... \n" % (i, a, b, c ...))
In Fortran you can specify multiple format specifiers easily:
write (*,"(3f8.3)") a,b,c
Is there a similar capability in Python?
A:
>>> "%d " * 3
'%d %d %d '
>>> "%d " * 3 % (1,2,3)
'1 2 3 '
A:
Are you asking about
format = "%i" + ",%f"*len(row) + "\n"
outfile.write(format % tuple([i] + row))
A:
Is not exactly the same, but you can try something like this:
values=[1,2.1,3,4,5] #you can use variables instead of values of course
outfile.write(",".join(["%f" % value for value in values]));
A:
Note that I think it'd be much better to do something like:
outfile.write(", ".join(map(str, row)))
...which isn't what you asked for, but is better in a couple of ways.
|
is there a multiple format specifier in Python?
|
I have a data table 44 columns wide that I need to write to file. I don't want to write:
outfile.write("%i,%f,%f,%f ... \n" % (i, a, b, c ...))
In Fortran you can specify multiple format specifiers easily:
write (*,"(3f8.3)") a,b,c
Is there a similar capability in Python?
|
[
">>> \"%d \" * 3\n'%d %d %d '\n>>> \"%d \" * 3 % (1,2,3)\n'1 2 3 '\n\n",
"Are you asking about\nformat= \"%i\" + \",%f\"*len(row) + \"\\n\"\noutfile.write( format % ([i]+row))\n\n",
"Is not exactly the same, but you can try something like this:\nvalues=[1,2.1,3,4,5] #you can use variables instead of values of course\noutfile.write(\",\".join([\"%f\" % value for value in values]));\n\n",
"Note that I think it'd be much better to do something like:\noutfile.write(\", \".join(map(str, row)))\n\n...which isn't what you asked for, but is better in a couple of ways.\n"
] |
[
22,
3,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000945972_python.txt
|
Q:
How do I reload global vars on every page refresh in Django
Here is my problem. Django continues to store all the global objects after the first run of a script. For instance, an object you instantiate in views.py globally will be there until you restart the app server. This is fine unless your object is tied to some outside resource that may time out. Now the way I was thinking to correct this was some sort of factory method that checks if the object is instantiated and creates it if it's not, and then returns it. However, this fails because the object exists there since the last page request, so a factory method is always returning the object that was instantiated during the first request.
What I am looking for is a way to trigger something to happen on a per-request basis. I have seen ways of doing this by implementing your own middleware, but I think that is overkill. Does anyone know of some reserved methods or some other per-request trigger?
A:
Simple: Don't use global objects.
If you want an object inside the view, instantiate it inside the view, not as global. That way it will be collected after the view ends.
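For example (a sketch; ExternalClient stands in for whatever wraps the outside resource that can time out):
from django.shortcuts import render_to_response

def my_view(request):
    client = ExternalClient()   # fresh instance on every request
    try:
        data = client.fetch()
    finally:
        client.close()
    return render_to_response('app/page.html', {'data': data})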
|
How do I reload global vars on every page refresh in Django
|
Here is my problem. Django continues to store all the global objects after the first run of a script. For instance, an object you instantiate in views.py globally will be there until you restart the app server. This is fine unless your object is tied to some outside resource that may time out. Now the way I was thinking to correct this was some sort of factory method that checks if the object is instantiated and creates it if it's not, and then returns it. However, this fails because the object exists there since the last page request, so a factory method is always returning the object that was instantiated during the first request.
What I am looking for is a way to trigger something to happen on a per-request basis. I have seen ways of doing this by implementing your own middleware, but I think that is overkill. Does anyone know of some reserved methods or some other per-request trigger?
|
[
"Simple: Don't use global objects.\nIf you want an object inside the view, instantiate it inside the view, not as global. That way it will be collected after the view ends.\n"
] |
[
6
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000947436_django_python.txt
|
Q:
Custom simple Python HTTP server not serving css files
I found, written in Python, a very simple HTTP server; its do_GET method looks like this:
def do_GET(self):
    try:
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers();
        filepath = self.path
        print filepath, USTAW['rootwww']
        f = file("./www" + filepath)
        s = f.readline();
        while s != "":
            self.wfile.write(s);
            s = f.readline();
        return
    except IOError:
        self.send_error(404,'File Not Found: %s ' % filepath)
It works OK, besides the fact that it is not serving any CSS files (pages are rendered without CSS). Anyone got a suggestion / solution for this quirk?
Best regards,
praavDa
A:
You're explicitly serving all files as Content-type: text/html, where you need to serve CSS files as Content-type: text/css. See this page on the CSS-Discuss Wiki for details. Web servers usually have a lookup table to map from file extension to Content-Type.
A:
it seems to be returning the html mimetype for all files:
self.send_header('Content-type', 'text/html')
Also, it seems to be pretty bad. Why are you interested in this sucky server? Look at cherrypy or paste for good python implementations of HTTP server and a good code to study.
EDIT: Trying to fix it for you:
import os
import mimetypes

#...

    def do_GET(self):
        try:
            filepath = self.path
            print filepath, USTAW['rootwww']
            # strip the leading slash, otherwise os.path.join discards
            # the './www' prefix entirely
            f = open(os.path.join('.', 'www', filepath.lstrip('/')))

        except IOError:
            self.send_error(404, 'File Not Found: %s ' % filepath)

        else:
            self.send_response(200)
            mimetype, _ = mimetypes.guess_type(filepath)
            self.send_header('Content-type', mimetype)
            self.end_headers()
            for s in f:
                self.wfile.write(s)
A:
See SimpleHTTPServer.py in the standard library for a safer, saner implementation that you can customize if you need.
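For instance, a minimal sketch using only the Python 2 standard library, which already serves CSS with the right Content-Type:
import BaseHTTPServer
import SimpleHTTPServer

# SimpleHTTPRequestHandler guesses the MIME type from the file extension.
server = BaseHTTPServer.HTTPServer(('', 8000), SimpleHTTPServer.SimpleHTTPRequestHandler)
server.serve_forever()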
|
Custom simple Python HTTP server not serving css files
|
I found, written in Python, a very simple HTTP server; its do_GET method looks like this:
def do_GET(self):
    try:
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers();
        filepath = self.path
        print filepath, USTAW['rootwww']
        f = file("./www" + filepath)
        s = f.readline();
        while s != "":
            self.wfile.write(s);
            s = f.readline();
        return
    except IOError:
        self.send_error(404,'File Not Found: %s ' % filepath)
It works OK, besides the fact that it is not serving any CSS files (pages are rendered without CSS). Anyone got a suggestion / solution for this quirk?
Best regards,
praavDa
|
[
"You're explicitly serving all files as Content-type: text/html, where you need to serve CSS files as Content-type: text/css. See this page on the CSS-Discuss Wiki for details. Web servers usually have a lookup table to map from file extension to Content-Type.\n",
"it seems to be returning the html mimetype for all files:\nself.send_header('Content-type', 'text/html')\n\nAlso, it seems to be pretty bad. Why are you interested in this sucky server? Look at cherrypy or paste for good python implementations of HTTP server and a good code to study.\n\nEDIT: Trying to fix it for you:\nimport os\nimport mimetypes\n\n#...\n\n def do_GET(self):\n try:\n\n filepath = self.path\n print filepath, USTAW['rootwww']\n\n f = open(os.path.join('.', 'www', filepath))\n\n except IOError:\n self.send_error(404,'File Not Found: %s ' % filepath)\n\n else:\n self.send_response(200)\n mimetype, _ = mimetypes.guess_type(filepath)\n self.send_header('Content-type', mimetype)\n self.end_headers()\n for s in f:\n self.wfile.write(s)\n\n",
"See SimpleHTTPServer.py in the standard library for a safer, saner implementation that you can customize if you need.\n"
] |
[
10,
6,
2
] |
[] |
[] |
[
"css",
"http",
"python"
] |
stackoverflow_0000947372_css_http_python.txt
|
Q:
Coverage not showing executed lines in virtualenv
I have a project and I am trying to run nosetests with coverage. I am running in a virtualenv.
When I run
$ python setup.py nosetests
The tests run fine but coverage is not showing that any code is executed (coverage
is all 0%).
Name Stmts Exec Cover Missing
------------------------------------------------------------------
package.module1 60 0 0% 3-106
package.module2 32 0 0% 3-93
package.module3 55 0 0% 8-74
package.module4 38 0 0% 3-125
package.module5 107 0 0% 8-123
package.module6 1 0 0% 1
package.module7 41 0 0% 3-143
package.module8 150 0 0% 7-281
package.module9 158 0 0% 3-338
------------------------------------------------------------------
TOTAL 642 0 0%
----------------------------------------------------------------------
Ran 15 tests in 0.099s
Coverage version 3.0b3, Darwin Kernel Version 9.7.0, Mac OS X 10.5.7, setuptools 0.6c9,
nose 0.11.1, Python 2.5.4
A:
This is going to require some back and forth. How can I see your code?
And why did you come to stackoverflow for an answer rather than to the developer (that is, me)? :)
A:
try...
easy_install "coverage==2.85"
I was having the same issue and this solved my problem and gave me glorious coverage reports as expected.
|
Coverage not showing executed lines in virtualenv
|
I have a project and I am trying to run nosetests with coverage. I am running in a virtualenv.
When I run
$ python setup.py nosetests
The tests run fine but coverage is not showing that any code is executed (coverage
is all 0%).
Name Stmts Exec Cover Missing
------------------------------------------------------------------
package.module1 60 0 0% 3-106
package.module2 32 0 0% 3-93
package.module3 55 0 0% 8-74
package.module4 38 0 0% 3-125
package.module5 107 0 0% 8-123
package.module6 1 0 0% 1
package.module7 41 0 0% 3-143
package.module8 150 0 0% 7-281
package.module9 158 0 0% 3-338
------------------------------------------------------------------
TOTAL 642 0 0%
----------------------------------------------------------------------
Ran 15 tests in 0.099s
Coverage version 3.0b3, Darwin Kernel Version 9.7.0, Mac OS X 10.5.7, setuptools 0.6c9,
nose 0.11.1, Python 2.5.4
|
[
"This is going to require some back and forth. How can I see your code?\nAnd why did you come to stackoverflow for an answer rather than to the developer (that is, me)? :)\n",
"try... \neasy_install \"coverage==2.85\" \n\nI was having the same issue and this solved my problem and gave me glorious coverage reports as expected. \n"
] |
[
2,
2
] |
[] |
[] |
[
"code_coverage",
"macos",
"nosetests",
"python",
"virtualenv"
] |
stackoverflow_0000931248_code_coverage_macos_nosetests_python_virtualenv.txt
|
Q:
Custom Markup in Django
Can anyone give me an idea or perhaps some references on how to create custom markups for django using textile or Markdown(or am I thinking wrong here)?
For example: I'd like to convert the following markups(the outer bracket mean they are grouped as one tag:
[
[Contacts]
* Contact #1
* Contact #2
* Contact #3
[Friend Requests]
* Jose
]
to have them converted to:
<div class="tabs">
    <ul>
        <li class="tab">Contacts</li>
        <li>Contact #1</li>
        (etc.. etc..)
    </ul>
</div>
or is regex more recommended for my needs?
A:
The built in markup app uses a filter template tag to render textile, markdown and restructuredtext. If that is not what your looking for, another option is to use a 'markup' field. e.g.,
class TownHallUpdate(models.Model):
    content = models.TextField()
    content_html = models.TextField(editable=False)

    def save(self, **kwargs):
        self.content_html = textile.textile(sanitize_html(self.content))
        super(TownHallUpdate, self).save(**kwargs)
Example from James Tauber's (and Brian Rosner's) django patterns talk.
A:
Django comes with a built-in contrib app that provides filters to display data using several different markup languages, including textile and markdown.
See the relevant docs for more info.
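For example, with 'django.contrib.markup' in INSTALLED_APPS (and the corresponding textile/markdown libraries installed), a template can simply do:
{% load markup %}
{{ object.content|textile }}
{{ object.content|markdown }}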
A:
A quick google search resulted with this
A:
Well, it seems the best way is still to use a regex and create my own filter.
here are some links that helped me out:
http://showmedo.com/videos/video?name=1100010&fromSeriesID=110
http://www.smashingmagazine.com/2009/05/06/introduction-to-advanced-regular-expressions/
hope this helps someone who had the same problem as me!
|
Custom Markup in Django
|
Can anyone give me an idea or perhaps some references on how to create custom markup for Django using textile or Markdown (or am I thinking wrong here)?
For example: I'd like to convert the following markup (the outer brackets mean they are grouped as one tag):
[
[Contacts]
* Contact #1
* Contact #2
* Contact #3
[Friend Requests]
* Jose
]
to have them converted to:
<div class="tabs">
    <ul>
        <li class="tab">Contacts</li>
        <li>Contact #1</li>
        (etc.. etc..)
    </ul>
</div>
or is regex more recommended for my needs?
|
[
"The built in markup app uses a filter template tag to render textile, markdown and restructuredtext. If that is not what your looking for, another option is to use a 'markup' field. e.g.,\nclass TownHallUpdate(models.Model):\n content = models.TextField()\n content_html = models.TextField(editable=False)\n\n def save(self, **kwargs):\n self.content_html = textile.textile(sanitize_html(self.content))\n super(TownHallUpdate, self).save(**kwargs)\n\nExample from James Tauber's (and Brian Rosner's) django patterns talk.\n",
"Django comes with a built-in contrib app that provides filters to display data using several different markup languages, including textile and markdown.\nSee the relevant docs for more info.\n",
"A quick google search resulted with this\n",
"Well it seems the best way is still use a regex and create my own filter. \nhere are some links that helped me out:\nhttp://showmedo.com/videos/video?name=1100010&fromSeriesID=110\nhttp://www.smashingmagazine.com/2009/05/06/introduction-to-advanced-regular-expressions/ \nhope this helps someone who had the same problem as me!\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"django",
"markdown",
"python"
] |
stackoverflow_0000933500_django_markdown_python.txt
|
Q:
If slicing does not create a copy of a list nor does list() how can I get a real copy of my list?
I am trying to modify a list and since my modifications were getting a bit tricky and my list large I took a slice of my list using the following code
tempList = origList[0:10]
for item in tempList:
    item[-1].insert(0, item[1])
    del item[1]
I did this thinking that all of the modifications to the list would affect tempList object and not origList objects.
Well once I got my code right and ran it on my original list the first ten items (indexed 0-9) were affected by my manipulation in testing the code printed above.
So I googled it and I found references that say taking a slice copies the list and creates a new one. I also found code that helped me find the id of the items, so I created my origList from scratch and got the ids of the first ten items. I sliced the list again and found that the ids from the slices matched the ids from the first ten items of the origList.
I found more notes that suggested a more pythonic way to copy a list would be to use
tempList=list(origList[0:10])
I tried that and I still find that the ids from the tempList match the ids from the origList.
Please don't suggest better ways to do the coding - I am going to figure out how to do this in a list comprehension on my own after I understand how copying works.
Based on Kai's answer the correct method is:
import copy
tempList=copy.deepcopy(origList[0:10])
id(origList[0])
>>>>42980096
id(tempList[0])
>>>>42714136
Works like a charm
A:
Slicing creates a shallow copy. In your example, I see that you are calling insert() on item[-1], which means that item is a list of lists. That means that your shallow copies still reference the original objects. You can think of it as making copies of the pointers, not the actual objects.
Your solution lies in using deep copies instead. Python provides a copy module for just this sort of thing. You'll find lots more information on shallow vs deep copying when you search for it.
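A quick sketch of the difference:
import copy

orig = [[1, 2], [3, 4]]

shallow = orig[:]           # new outer list, same inner lists
shallow[0].append(99)
print orig[0]               # [1, 2, 99] -- the original changed too

deep = copy.deepcopy(orig)  # inner lists are copied as well
deep[1].append(42)
print orig[1]               # [3, 4] -- untouched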
A:
If you copy an object the contents of it are not copied. In probably most cases this is what you want. In your case you have to make sure that the contents are copied by yourself. You could use copy.deepcopy but if you have a list of lists or something similar i would recommend using copy = [l[:] for l in list_of_lists], that should be a lot faster.
A little note to your codestyle:
del is a statement and not a function so it is better to not use parens there, they are just confusing.
Whitespaces around operators and after commas would make your code easier to read.
list(alist) copies a list but it is not more pythonic than alist[:], I think alist[:] is even more commonly used then the alternative.
|
If slicing does not create a copy of a list nor does list() how can I get a real copy of my list?
|
I am trying to modify a list and since my modifications were getting a bit tricky and my list large I took a slice of my list using the following code
tempList = origList[0:10]
for item in tempList:
    item[-1].insert(0, item[1])
    del item[1]
I did this thinking that all of the modifications to the list would affect tempList object and not origList objects.
Well once I got my code right and ran it on my original list the first ten items (indexed 0-9) were affected by my manipulation in testing the code printed above.
So I googled it and I found references that say taking a slice copies the list and creates a new one. I also found code that helped me find the id of the items, so I created my origList from scratch and got the ids of the first ten items. I sliced the list again and found that the ids from the slices matched the ids from the first ten items of the origList.
I found more notes that suggested a more pythonic way to copy a list would be to use
tempList=list(origList[0:10])
I tried that and I still find that the ids from the tempList match the ids from the origList.
Please don't suggest better ways to do the coding - I am going to figure out how to do this in a list comprehension on my own after I understand how copying works.
Based on Kai's answer the correct method is:
import copy
tempList=copy.deepcopy(origList[0:10])
id(origList[0])
>>>>42980096
id(tempList[0])
>>>>42714136
Works like a charm
|
[
"Slicing creates a shallow copy. In your example, I see that you are calling insert() on item[-1], which means that item is a list of lists. That means that your shallow copies still reference the original objects. You can think of it as making copies of the pointers, not the actual objects.\nYour solution lies in using deep copies instead. Python provides a copy module for just this sort of thing. You'll find lots more information on shallow vs deep copying when you search for it.\n",
"If you copy an object the contents of it are not copied. In probably most cases this is what you want. In your case you have to make sure that the contents are copied by yourself. You could use copy.deepcopy but if you have a list of lists or something similar i would recommend using copy = [l[:] for l in list_of_lists], that should be a lot faster.\nA little note to your codestyle:\n\ndel is a statement and not a function so it is better to not use parens there, they are just confusing.\nWhitespaces around operators and after commas would make your code easier to read.\nlist(alist) copies a list but it is not more pythonic than alist[:], I think alist[:] is even more commonly used then the alternative. \n\n"
] |
[
24,
4
] |
[] |
[] |
[
"copy",
"list",
"python"
] |
stackoverflow_0000948032_copy_list_python.txt
|
Q:
User Authentication in Pylons + AuthKit
I am trying to create a web application using Pylons, and the resources on the web point to the PylonsBook page, which isn't of much help. I want authentication and authorisation; is there any way to set up Authkit to work easily with Pylons?
I tried downloading the SimpleSiteTemplate from the cheeseshop but wasn't able to run the setup-app command. It throws up an error:
  File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__
    table = metadata.tables[key]
AttributeError: 'module' object has no attribute 'tables'
I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, Authkit 0.4.
A:
Ok, another update on the subject. It seems that the cheeseshop template is broken. I've followed the chapter you linked in the post and it seems that authkit is working fine. There are some caveats:
sqlalchemy has to be in 0.5 version
authkit has to be the dev version from svn (easy_install authkit==dev)
I managed to get it working fine.
A:
I gave up on authkit and rolled my own:
http://tonylandis.com/openid-db-authentication-in-pylons-is-easy-with-rpx/
A:
I don't think AuthKit is actively maintained anymore. It does use the Paste (http://pythonpaste.org) libs though for things like HTTP Basic/Digest authentication. I would probably go ahead and take a look at the source for some inspiration and then use the Paste tools if you want to use HTTP authentication.
There is also OpenID which is very easy to setup. The python-openid libs have an excellent example that is easy to translate to WSGI for wrapping a Pylons app. You can look at an example:
http://ionrock.org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth.py
A:
This actually got me interested:Check out this mailing on the pylons list. So AuthKit is being developed, and I will follow the book and get back on the results.
|
User Authentication in Pylons + AuthKit
|
I am trying to create a web application using Pylons, and the resources on the web point to the PylonsBook page, which isn't of much help. I want authentication and authorisation; is there any way to set up Authkit to work easily with Pylons?
I tried downloading the SimpleSiteTemplate from the cheeseshop but wasn't able to run the setup-app command. It throws up an error:
  File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__
    table = metadata.tables[key]
AttributeError: 'module' object has no attribute 'tables'
I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, Authkit 0.4.
|
[
"Ok, another update on the subject. It seems that the cheeseshop template is broken. I've followed the chapter you linked in the post and it seems that authkit is working fine. There are some caveats:\n\nsqlalchemy has to be in 0.5 version\nauthkit has to be the dev version from svn (easy_install authkit==dev)\n\nI managed to get it working fine.\n",
"I gave up on authkit and rolled my own:\nhttp://tonylandis.com/openid-db-authentication-in-pylons-is-easy-with-rpx/\n",
"I don't think AuthKit is actively maintained anymore. It does use the Paste (http://pythonpaste.org) libs though for things like HTTP Basic/Digest authentication. I would probably go ahead and take a look at the source for some inspiration and then use the Paste tools if you want to use HTTP authentication. \nThere is also OpenID which is very easy to setup. The python-openid libs have an excellent example that is easy to translate to WSGI for wrapping a Pylons app. You can look at an example:\nhttp://ionrock.org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth.py\n",
"This actually got me interested:Check out this mailing on the pylons list. So AuthKit is being developed, and I will follow the book and get back on the results.\n"
] |
[
2,
2,
1,
0
] |
[] |
[] |
[
"authentication",
"authkit",
"pylons",
"python",
"sqlalchemy"
] |
stackoverflow_0000047801_authentication_authkit_pylons_python_sqlalchemy.txt
|
Q:
How can I get an accurate absolute url from get_absolute_url with an included urls.py in Django?
I'm building an app right now that I'm trying to keep properly decoupled from the other apps in my Django project (feel free to lecture me on keeping Django apps decoupled, I'd be happy to learn more any/all the time).
My problem is this: The get_absolute_url() method I've written is returning a relative path based on my view. I think it's wrong to have to add a special named view in the project urls.py just so I can have absolute urls in my app, and I can't figure out what I'm doing wrong. So if someone can help me out, I'll really appreciate it (and mention you when I release this sucker!)
I have a project-level urls.py that includes another urls.py based on the URL pattern like so (the names are verbose for this example only):
project-urls.py
urlpatterns = patterns('',
    ('^$', direct_to_template, {'template': 'base.html'}),
    (r'^app', include('project.app.urls')),
)
app-urls.py
urlpatterns = patterns('',
    url(r'(?P<slug>[-\w]+)?/?$', 'app.views.home', name='app_home'),
)
Now, in my Model, I have something like this:
class AppModel(models.Model):
    title = models.CharField(_('title'), max_length=100)
    slug = models.SlugField(_('slug'), unique=True)

    @permalink
    def get_absolute_url(self):
        return ('app_home', None, {'slug': self.slug})
When I call {{ AppInstance.get_absolute_url }} in the template, I get something like this:
/slug-is-here
Which is obvs not absolute & makes sense based on my urls.py. What should I change to get a real absolute url while keeping this app clean & not couple it too deeply w/the project?
A:
Welp,
It turns out that when I was seeing this:
/slug-is-here
I should have looked closer. What was really happening was:
/app-pathslug-is-here
I was missing a trailing slash on my app's regex in my project urls.py.
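In other words, the project-level pattern needed a trailing slash (sketch of the fix):
(r'^app/', include('project.app.urls')),
so the reversed URL becomes /app/slug-is-here instead of the two parts being mashed together.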
So yea. let that be a lesson to y'all.
|
How can I get an accurate absolute url from get_absolute_url with an included urls.py in Django?
|
I'm building an app right now that I'm trying to keep properly decoupled from the other apps in my Django project (feel free to lecture me on keeping Django apps decoupled, I'd be happy to learn more any/all the time).
My problem is this: The get_absolute_url() method I've written is returning a relative path based on my view. I think it's wrong to have to add a special named view in the project urls.py just so I can have absolute urls in my app, and I can't figure out what I'm doing wrong. So if someone can help me out, I'll really appreciate it (and mention you when I release this sucker!)
I have a project-level urls.py that includes another urls.py based on the URL pattern like so (the names are verbose for this example only):
project-urls.py
urlpatterns = patterns('',
    ('^$', direct_to_template, {'template': 'base.html'}),
    (r'^app', include('project.app.urls')),
)
app-urls.py
urlpatterns = patterns('',
    url(r'(?P<slug>[-\w]+)?/?$', 'app.views.home', name='app_home'),
)
Now, in my Model, I have something like this:
class AppModel(models.Model):
    title = models.CharField(_('title'), max_length=100)
    slug = models.SlugField(_('slug'), unique=True)

    @permalink
    def get_absolute_url(self):
        return ('app_home', None, {'slug': self.slug})
When I call {{ AppInstance.get_absolute_url }} in the template, I get something like this:
/slug-is-here
Which is obvs not absolute & makes sense based on my urls.py. What should I change to get a real absolute url while keeping this app clean & not couple it too deeply w/the project?
|
[
"Welp, \nIt turns out that when I was seeing this:\n/slug-is-here\n\nI should have looked closer. What was really happening was:\n/app-pathslug-is-here\n\nI was missing a trailing slash on my app's regex in my project urls.py.\nSo yea. let that be a lesson to y'all.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"models",
"python",
"regex",
"url"
] |
stackoverflow_0000947797_django_models_python_regex_url.txt
|
Q:
Are there memory efficiencies gained when code is wrapped in functions?
I have been working on some code. My usual approach is to first solve all of the pieces of the problem, creating the loops and other pieces of code I need as I work through the problem and then if I expect to reuse the code I go back through it and group the parts of code together that I think should be grouped to create functions.
I have just noticed that creating functions and calling them seems to be much more efficient than writing lines of code and deleting containers as I am finished with them.
for example:
def someFunction(aList):
do things to aList
that create a dictionary
return aDict
seems to release more memory at the end than
>>do things to alist
>>that create a dictionary
>>del(aList)
Is this expected behavior?
EDIT added example code
When this function finishes running, the PF Usage shows an increase of about 100 MB; the filingList has about 8 million lines.
def getAllCIKS(filingList):
    cikDICT=defaultdict(int)
    for filing in filingList:
        if filing.startswith('.'):
            del(filing)
            continue
        cik=filing.split('^')[0].strip()
        cikDICT[cik]+=1
        del(filing)
    ciklist=cikDICT.keys()
    ciklist.sort()
    return ciklist
allCIKS=getAllCIKS(open(r'c:\filinglist.txt').readlines())
If I run this instead I show an increase of almost 400 mb
cikDICT=defaultdict(int)
for filing in open(r'c:\filinglist.txt').readlines():
    if filing.startswith('.'):
        del(filing)
        continue
    cik=filing.split('^')[0].strip()
    cikDICT[cik]+=1
    del(filing)
ciklist=cikDICT.keys()
ciklist.sort()
del(cikDICT)
EDIT
I have been playing around with this some more today. My observation and question should be refined a bit since my focus has been on the PF Usage. Unfortunately I can only poke at this between my other tasks. However I am starting to wonder about references versus copies. If I create a dictionary from a list does the dictionary container hold a copy of the values that came from the list or do they hold references to the values in the list? My bet is that the values are copied instead of referenced.
Another thing I noticed is that items in the GC list were items from containers that were deleted. Does that make sense? So I have a list and suppose each of the items in the list was [(aTuple), anInteger, [another list]]. When I started learning about how to manipulate the gc objects and inspect them, I found those objects in the gc even though the list had been forcefully deleted, and even though I passed the 0, 1 & 2 values to the method (whose name I don't remember) to try to still delete them.
I appreciate the insights people have been sharing. Unfortunately I am always interested in figuring out how things work under the hood.
A:
Maybe you used some local variables in your function, which are implicitly released by reference counting at the end of the function, while they are not released at the end of your code segment?
A:
You can use the Python garbage collector interface provided to more closely examine what (if anything) is being left around in the second case. Specifically, you may want to check out gc.get_objects() to see what is left uncollected, or gc.garbage to see if you have any reference cycles.
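For example, a quick interactive sketch of both calls:
import gc

gc.collect()                 # force a collection pass first
print len(gc.get_objects())  # everything the collector currently tracks
print gc.garbage             # uncollectable objects (reference cycles)

# narrow things down, e.g. to leftover lists:
lists = [o for o in gc.get_objects() if isinstance(o, list)]
print len(lists)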
A:
Some extra memory is freed when you return from a function, but that's exactly as much extra memory as was allocated to call the function in the first place. In any case - if you are seeing a large amount of difference, that's likely an artifact of the state of the runtime, and is not something you should really be worrying about. If you are running low on memory, the way to solve the problem is to keep more data on disk using things like b-trees (or just use a database), or use algorithms that use less memory. Also, keep an eye out for making unnecessary copies of large data structures.
The real memory savings in creating functions is in your short-term memory. By moving something into a function, you reduce the amount of detail you need to remember by encapsulating part of the minutia away.
A:
Maybe you should re-engineer your code to get rid of unnecessary variables (that may not be freed instantly)... how about the following snippet?
myfile = file(r"c:\filinglist.txt")
ciklist = sorted(set(x.split("^")[0].strip() for x in myfile if not x.startswith(".")))
EDIT: I don't know why this answer was voted negative... Maybe because it's short? Or maybe because the dude who voted was unable to understand how this one-liner does the same that the code in the question without creating unnecessary temporal containers?
Sigh...
A:
I asked another question about copying lists and the answers, particularly the answer directing me to look at deepcopy caused me to think about some dictionary behavior. The problem I was experiencing had to do with the fact that the original list is never garbage collected because the dictionary maintains references to the list. I need to use the information about weakref in the Python Docs.
Objects referenced by dictionaries seem to stay alive. I think (but am not sure) that the process of pushing the dictionary out of the function forces the copy process and kills the object. This is not complete; I need to do some more research.
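As a sketch of the weakref idea (Filing is just a throwaway example class):
import weakref

class Filing(object):
    pass

cache = weakref.WeakValueDictionary()
f = Filing()
cache['0001'] = f        # the dictionary does not keep f alive
del f                    # CPython can reclaim the object right away
print cache.get('0001')  # None once the object is gone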
|
Are there memory efficiencies gained when code is wrapped in functions?
|
I have been working on some code. My usual approach is to first solve all of the pieces of the problem, creating the loops and other pieces of code I need as I work through the problem and then if I expect to reuse the code I go back through it and group the parts of code together that I think should be grouped to create functions.
I have just noticed that creating functions and calling them seems to be much more efficient than writing lines of code and deleting containers as I am finished with them.
for example:
def someFunction(aList):
do things to aList
that create a dictionary
return aDict
seems to release more memory at the end than
>>do things to alist
>>that create a dictionary
>>del(aList)
Is this expected behavior?
EDIT added example code
When this function finishes running, the PF usage shows an increase of about 100 MB; the filingList has about 8 million lines.
def getAllCIKS(filingList):
cikDICT=defaultdict(int)
for filing in filingList:
if filing.startswith('.'):
del(filing)
continue
cik=filing.split('^')[0].strip()
cikDICT[cik]+=1
del(filing)
ciklist=cikDICT.keys()
ciklist.sort()
return ciklist
allCIKS=getAllCIKS(open(r'c:\filinglist.txt').readlines())
If I run this instead I show an increase of almost 400 MB
cikDICT=defaultdict(int)
for filing in open(r'c:\filinglist.txt').readlines():
if filing.startswith('.'):
del(filing)
continue
cik=filing.split('^')[0].strip()
cikDICT[cik]+=1
del(filing)
ciklist=cikDICT.keys()
ciklist.sort()
del(cikDICT)
EDIT
I have been playing around with this some more today. My observation and question should be refined a bit, since my focus has been on the PF usage. Unfortunately I can only poke at this between my other tasks. However, I am starting to wonder about references versus copies. If I create a dictionary from a list, does the dictionary hold copies of the values that came from the list, or does it hold references to the values in the list? My bet is that the values are copied rather than referenced.
Another thing I noticed is that items in the GC list were items from containers that had been deleted. Does that make sense? So: I have a list, and suppose each of the items in the list was [(aTuple), anInteger, [another list]]. When I started learning how to inspect the gc-tracked objects, I found those objects in the gc even though the list had been forcefully deleted, and even though I had passed the generation values 0, 1 and 2 to the collection method whose name I don't remember, to try to delete them.
I appreciate the insights people have been sharing. Unfortunately for me, I am always interested in figuring out how things work under the hood.
|
[
"Maybe you used some local variables in your function, which are implicitly released by reference counting at the end of the function, while they are not released at the end of your code segment?\n",
"You can use the Python garbage collector interface provided to more closely examine what (if anything) is being left around in the second case. Specifically, you may want to check out gc.get_objects() to see what is left uncollected, or gc.garbage to see if you have any reference cycles.\n",
"Some extra memory is freed when you return from a function, but that's exactly as much extra memory as was allocated to call the function in the first place. In any case - if you seeing a large amount of difference, that's likely an artifact of the state of the runtime, and is not something you should really be worrying about. If you are running low on memory, the way to solve the problem is to keep more data on disk using things like b-trees (or just use a database), or use algorithms that use less memory. Also, keep an eye out for making unnecessary copies of large data structures.\nThe real memory savings in creating functions is in your short-term memory. By moving something into a function, you reduce the amount of detail you need to remember by encapsulating part of the minutia away.\n",
"Maybe you should re-engineer your code to get rid of unnecessary variables (that may not be freed instantly)... how about the following snippet?\nmyfile = file(r\"c:\\fillinglist.txt\")\nciklist = sorted(set(x.split(\"^\")[0].strip() for x in myfile if not x.startswith(\".\")))\n\nEDIT: I don't know why this answer was voted negative... Maybe because it's short? Or maybe because the dude who voted was unable to understand how this one-liner does the same that the code in the question without creating unnecessary temporal containers?\nSigh...\n",
"I asked another question about copying lists and the answers, particularly the answer directing me to look at deepcopy caused me to think about some dictionary behavior. The problem I was experiencing had to do with the fact that the original list is never garbage collected because the dictionary maintains references to the list. I need to use the information about weakref in the Python Docs.\nobjects referenced by dictionaries seem to stay alive. I think (but am not sure) the process of pushing the dictionary out of the function forces the copy process and kills the object. This is not complete I need to do some more research.\n"
] |
[
3,
1,
0,
0,
0
] |
[] |
[] |
[
"function",
"memory_management",
"python"
] |
stackoverflow_0000919103_function_memory_management_python.txt
|
Q:
method for creating a unique validation key/number
I'm using Django for a web magazine with subscriber content. When a user purchases a subscription, the site will create a validation key and send it to the user's email address.
The validation key would be added to a list of "valid keys" until it is used.
What is the best method for creating a simple yet unique key? Can someone suggest a standard Python library for key creation/validation/etc.?
This might be a very simple question, but I'm very new. ;)
A:
I'd recommend using a GUID. They are quickly becoming industry standard for this kind of thing.
See how to create them here: How to create a GUID/UUID in Python
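For instance (a sketch, not from the linked answer), the standard uuid module does this in one line:
import uuid

key = uuid.uuid4().hex   # 32 hex characters, random and effectively unique
You would store key in your list of valid keys and remove it once the user redeems it.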
A:
As other posters mentioned, you are looking for a GUID, of which the most popular implementation is UUID (see here). Django extensions (see here) offer a UUID field just for this purpose.
A:
Well, you can always use a GUID. As you said it would be stored as a valid key.
|
method for creating a unique validation key/number
|
I'm using Django for a web magazine with subscriber content. When a user purchases a subscription, the site will create a validation key and send it to the user's email address.
The validation key would be added to a list of "valid keys" until it is used.
What is the best method for creating a simple yet unique key? Can someone suggest a standard Python library for key creation/validation/etc.?
This might be a very simple question, but I'm very new. ;)
|
[
"I'd recommend using a GUID. They are quickly becoming industry standard for this kind of thing.\nSee how to create them here: How to create a GUID/UUID in Python\n",
"As other posters mentioned, you are looking for a GUID, of which the most popular implemntation UUID (see here) . Django extensions (see here) offer a UUID field just for this purpose.\n",
"Well, you can always use a GUID. As you said it would be stored as a valid key.\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"django",
"python",
"validation"
] |
stackoverflow_0000948493_django_python_validation.txt
|
Q:
IMAP4_SSL with Gmail in Python
We are retrieving mail from our Gmail account using IMAP4_SSL and Python.
The email body is retrieved in HTML format.
We need to convert that to plaintext.
Can anyone help us with that?
A:
Stand on the shoulders of giants...
Peter Bengtsson has worked out a solution to this exact problem here.
Peter's script uses the awesome BeautifulSoup, by Leonard Richardson,
and Fredrik Lundh's unescape() function.
Using Peter's test case, you get this:
This is a paragraph.
Foobar [1]
http://two.com
Visit http://www.google.com.
Text elsewhere. Elsewhere [2]
[1] http://one.com
[2] http://three.com
...from this:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<body>
<div id="main">
<p>This is a paragraph.</p>
<p><a href="http://one.com">Foobar</a>
<br />
<a href="http://two.com">two.com</a>
</p>
<p>Visit <a href="http://www.google.com">www.google.com</a>.</p>
<br />
Text elsewhere.
<a href="http://three.com">Elsewhere</a>
</div>
</body>
</html>
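If you only need the text and can live without the [1]-style link footnotes, a much cruder sketch (an addition here, assuming BeautifulSoup 3, which Peter's script also uses) is:
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3.x

def html_to_text(html):
    # Concatenate all text nodes; loses the link footnotes and
    # layout that Peter's script reconstructs.
    soup = BeautifulSoup(html)
    return ''.join(soup.findAll(text=True))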
|
IMAP4_SSL with Gmail in Python
|
We are retrieving mail from our Gmail account using IMAP4_SSL and Python.
The email body is retrieved in HTML format.
We need to convert that to plaintext.
Can anyone help us with that?
|
[
"Stand on the shoulders of giants...\nPeter Bengtsson has worked out a solution to this exact problem here.\nPeter's script uses the awesome BeautifulSoup, by Leonard Richardson, \nand Fredrik Lundh's unescape() function.\nUsing Peter's test case, you get this:\nThis is a paragraph.\n\nFoobar [1]\nhttp://two.com\n\nVisit http://www.google.com.\n\nText elsewhere. Elsewhere [2]\n\n[1] http://one.com\n[2] http://three.com\n\n...from this: \n<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 Transitional//EN\">\n<html>\n<body>\n\n<div id=\"main\">\n<p>This is a paragraph.</p>\n\n<p><a href=\"http://one.com\">Foobar</a>\n<br />\n\n<a href=\"http://two.com\">two.com</a>\n\n</p>\n <p>Visit <a href=\"http://www.google.com\">www.google.com</a>.</p>\n<br />\nText elsewhere.\n\n<a href=\"http://three.com\">Elsewhere</a>\n\n</div>\n</body>\n</html>\n\n"
] |
[
2
] |
[] |
[] |
[
"gmail",
"html",
"python"
] |
stackoverflow_0000948761_gmail_html_python.txt
|
Q:
How can I send keyboard commands using Python? I am trying to automate a Mac app (GUI)
I am trying to automate an app using Python. I need help sending keyboard commands through Python. I am using a PowerBook G4.
A:
You could call AppleScript from your python script with osascript tool:
import os
cmd = """
osascript -e 'tell application "System Events" to keystroke "m" using {command down}'
"""
# minimize active window
os.system(cmd)
A:
To the best of my knowledge, Python does not contain the ability to simulate keystrokes. You can, however, use Python to call a program which has the functionality that you need on OS X. You could also most likely write such a program in Objective-C.
Or you could save yourself the pain and use Automator. Perhaps if you posted more details about what you were automating, I could add something further.
|
How can I send keyboard commands using Python? I am trying to automate a Mac app (GUI)
|
I am trying to automate an app using Python. I need help sending keyboard commands through Python. I am using a PowerBook G4.
|
[
"You could call AppleScript from your python script with osascript tool:\nimport os\ncmd = \"\"\"\nosascript -e 'tell application \"System Events\" to keystroke \"m\" using {command down}' \n\"\"\"\n# minimize active window\nos.system(cmd)\n\n",
"To the best of my knowledge, python does not contain the ability to simulate keystrokes. You can however use python to call a program which has the functionality that you need for OS X. You could also write said program using Objective C most likely.\nOr you could save yourself the pain and use Automator. Perhaps if you posted more details about what you were automating, I could add something further.\n"
] |
[
3,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000939746_python.txt
|
Q:
Error trapping when a user inputs incorrect information
So, I have recently started learning Python... I am writing a small script that pulls information from a CSV, and I need to be able to notify a user of incorrect input.
For example:
The user is asked for his id number; the id number is anything from r1 to r5.
I would like my script to be able to tell the user that they have input something wrong.
For example, if the user inputs a1 or r50, the user needs to be notified that they have input the wrong parameters. How do I do this?
I have looked into
def statements, but I cannot seem to grasp all the syntax in Python... (I don't know all the commands, parameters and stuff).
Any help would be very much appreciated =D
while True:
import csv
DATE, ROOM, COURSE, STAGE = range (4)
csv_in = open("roombookings.csv", "rb")
reader = csv.reader (csv_in)
data = []
for row in reader:
data.append(row)
roomlist = raw_input ("Enter the room number: ")
print "The room you have specified has the following courses running: "
for sub_list in data:
if sub_list[ROOM] == roomlist:
Date, Room, Course, Stage = sub_list
print Date, Course
A:
I'm not sure what you are asking for, but if you wish to check whether the user entered a correct id, you should try regular expressions. Look at the Python documentation on the re module, or ask Google for "python re".
Here's an example that will check user's input:
import re
id_patt = re.compile(r'^r[1-5]$')
def checkId(id):
if id_patt.match(id):
return True
return False
HTH, regards.
EDIT: I read your question again; here's some more code:
(just paste it below previous code fragment)
validId = False
while not validId:
id = raw_input("Enter id: ")
validId = checkId(id)
By the way, it could be written in a much shorter way, but this piece of code should be easier to understand for someone new to Python.
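For completeness, the shorter form alluded to above could look like this (a sketch):
def checkId(id):
    return bool(id_patt.match(id))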
A:
Seriously, read a tutorial. The official one is pretty good. I also like this book for beginners.
import csv
while True:
id_number = raw_input('(enter to quit) ID number:')
if not id_number:
break
# open the csv file
csvfile = csv.reader(open('file.csv'))
for row in csvfile:
# for this simple example I assume that the first column
# on the csv is the ID:
if row[0] == id_number:
print "Found. Here's the data:", row
break
else:
print "ID not found, try again!"
EDIT
Now that you've added code, I update the example:
import csv
DATE, ROOM, COURSE, STAGE = range(4)
while True:
csv_in = open("roombookings.csv", "rb")
reader = csv.reader(csv_in)
roomlist = raw_input("(Enter to quit) Room number: ")
if not roomlist:
break
print "The room you have specified has the following courses running: "
for sub_list in reader:
if sub_list[ROOM] == roomlist:
print sub_list[DATE], sub_list[COURSE]
|
Error trapping when a user inputs incorrect information
|
So, I have recently started learning Python... I am writing a small script that pulls information from a CSV, and I need to be able to notify a user of incorrect input.
For example:
The user is asked for his id number; the id number is anything from r1 to r5.
I would like my script to be able to tell the user that they have input something wrong.
For example, if the user inputs a1 or r50, the user needs to be notified that they have input the wrong parameters. How do I do this?
I have looked into
def statements, but I cannot seem to grasp all the syntax in Python... (I don't know all the commands, parameters and stuff).
Any help would be very much appreciated =D
while True:
import csv
DATE, ROOM, COURSE, STAGE = range (4)
csv_in = open("roombookings.csv", "rb")
reader = csv.reader (csv_in)
data = []
for row in reader:
data.append(row)
roomlist = raw_input ("Enter the room number: ")
print "The room you have specified has the following courses running: "
for sub_list in data:
if sub_list[ROOM] == roomlist:
Date, Room, Course, Stage = sub_list
print Date, Course
|
[
"I'm not sure what are you asking for, but if you wish to check if user entered correct id, you should try regular expressions. Look at Python Documentation on module re. Or ask google for \"python re\"\nHere's an example that will check user's input:\nimport re\n\nid_patt = re.compile(r'^r[1-5]$')\ndef checkId(id):\n if id_patt.match(id):\n return True\n return False\n\nHTH, regards.\nEDIT: I read you're question again, here's some more code:\n(just paste it below previous code fragment)\nvalidId = False\nwhile not validId:\n id = raw_input(\"Enter id: \")\n validId = checkId(id)\n\nBy the way, it could be written in quite shorter way, but this piece of code should be easier to understand for someone new to Python.\n",
"Seriously, read a tutorial. The official one is pretty good. I also like this book for beginners.\nimport csv\n\nwhile True:\n id_number = raw_input('(enter to quit) ID number:')\n\n if not id_number:\n break\n\n # open the csv file\n csvfile = csv.reader(open('file.csv'))\n for row in csvfile:\n # for this simple example I assume that the first column \n # on the csv is the ID:\n if row[0] == id_number:\n print \"Found. Here's the data:\", row\n break\n else:\n print \"ID not found, try again!\"\n\n\nEDIT\nNow that you've added code, I update the example:\nimport csv\nDATE, ROOM, COURSE, STAGE = range(4) \n\nwhile True: \n csv_in = open(\"roombookings.csv\", \"rb\") \n reader = csv.reader(csv_in) \n roomlist = raw_input(\"(Enter to quit) Room number: \") \n if not roomlist:\n break\n print \"The room you have specified has the following courses running: \" \n for sub_list in reader: \n if sub_list[ROOM] == roomlist: \n print sub_list[DATE], sub_list[COURSE]\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"csv",
"python",
"reporting",
"user_input"
] |
stackoverflow_0000949941_csv_python_reporting_user_input.txt
|
Q:
Use Django Framework with Website and Stand-alone App
I'm planning on writing a web crawler and a web-based front end for it (or, at least, for the information it finds). I was wondering if it's possible to use the Django framework to let the web crawler use the same MySQL backend as the website (without making the web crawler a "website" in itself).
A:
Yes, you can use the same database.
Some people use Django on top of a PHP application for its admin functionality, or to build newer features with Django and its ORM.
What I'm trying to say is that if you're putting data from your crawl into the same place that you will let Django store its data, you can access them as long as you create Django models for each table.
However, I don't see why the crawler can't be written within Django itself. I've written some non web based apps (a crawler and an aggregator) in Django and it works quite well.
A:
You can use Django ORM outside of an HTTP server.
Basically you need to set DJANGO_SETTINGS_MODULE environment variable. Then you can import and use your django code. Here's an article on stand-alone Django scripts.
Alternatively you can choose to interact with your Django server via custom management commands. This will be a bit more work. But in the end this method allows for a greater decoupling between the crawler and the controller (Django project).
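A minimal sketch of such a stand-alone script (the project and model names here are hypothetical):
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'  # set before importing models

from mysite.crawler.models import Page  # hypothetical crawler model

def save_result(url, body):
    # Writes through the same MySQL backend the website uses.
    Page.objects.create(url=url, body=body)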
|
Use Django Framework with Website and Stand-alone App
|
I'm planning on writing a web crawler and a web-based front end for it (or, at least, for the information it finds). I was wondering if it's possible to use the Django framework to let the web crawler use the same MySQL backend as the website (without making the web crawler a "website" in itself).
|
[
"Yes, you can use the same database.\nSome people use Django on top of a PHP application for its admin functionality, or to build newer features with Django and its ORM.\nWhat I'm trying to say is that if you're putting data from your crawl into the same place that you will let Django store its data, you can access them as long as you create Django models for each table.\nHowever, I don't see why the crawler can't be written within Django itself. I've written some non web based apps (a crawler and an aggregator) in Django and it works quite well.\n",
"You can use Django ORM outside of an HTTP server.\nBasically you need to set DJANGO_SETTINGS_MODULE environment variable. Then you can import and use your django code. Here's an article on stand-alone Django scripts.\nAlternatively you can choose to interact with your Django server via custom management commands. This will be a bit more work. But in the end this method allows for a greater decoupling between the crawler and the controller (Django project).\n"
] |
[
4,
3
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000950790_django_python.txt
|
Q:
Which validating Python XML API to use?
I'm new to XML stuff, so I have no idea which API I should use in Python.
Until now I used xmlproc, but I heard it is not developed any more.
I have basically only one requirement: I want to validate against a DTD that I can choose in my program. I can't trust the doctype declaration.
Performance does not really matter, so I would like to use the easiest API that exists.
What should I use? Do you have a simple example?
A:
For my current project, I'm using lxml, which is fairly easy to use. Validation with DTD is described on this page
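A minimal sketch of that (the file names are placeholders):
from lxml import etree

dtd = etree.DTD(open('chosen.dtd'))   # the DTD your program picks
doc = etree.parse('document.xml')     # the doctype in the file is not trusted

if dtd.validate(doc):
    print 'valid'
else:
    print dtd.error_log.filter_from_errors()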
|
Which validating Python XML API to use?
|
I'm new to XML stuff, so I have no idea which API I should use in Python.
Until now I used xmlproc, but I heard it is not developed any more.
I have basically only one requirement: I want to validate against a DTD that I can choose in my program. I can't trust the doctype declaration.
Performance does not really matter, so I would like to use the easiest API that exists.
What should I use? Do you have a simple example?
|
[
"For my current project, I'm using lxml, which is fairly easy to use. Validation with DTD is described on this page\n"
] |
[
3
] |
[] |
[] |
[
"dtd",
"python",
"validation",
"xml"
] |
stackoverflow_0000950974_dtd_python_validation_xml.txt
|
Q:
Efficient python code for printing the product of divisors of a number
I am trying to solve a problem involving printing the product of all divisors of a given number. The number of test cases is a number 1 <= t <= 300000 , and the number itself can range from 1 <= n <= 500000
I wrote the following code, but it always exceeds the time limit of 2 seconds. Are there any ways to speed up the code ?
from math import sqrt
def divisorsProduct(n):
ProductOfDivisors=1
for i in range(2,int(round(sqrt(n)))+1):
if n%i==0:
ProductOfDivisors*=i
if n/i != i:
ProductOfDivisors*=(n/i)
if ProductOfDivisors <= 9999:
print ProductOfDivisors
else:
result = str(ProductOfDivisors)
print result[len(result)-4:]
T = int(raw_input())
for i in range(1,T+1):
num = int(raw_input())
divisorsProduct(num)
Thank You.
A:
You need to clarify by what you mean by "product of divisors." The code posted in the question doesn't work for any definition yet. This sounds like a homework question. If it is, then perhaps your instructor was expecting you to think outside the code to meet the time goals.
If you mean the product of unique prime divisors, e.g., 72 gives 2*3 = 6, then having a list of primes is the way to go. Just run through the list up to the square root of the number, multiplying present primes into the result. There are not that many, so you could even hard code them into your program.
If you mean the product of all the divisors, prime or not, then it is helpful to think of what the divisors are. You can make serious speed gains over the brute force method suggested in the other answers and yours. I suspect this is what your instructor intended.
If the divisors are ordered in a list, then they occur in pairs that multiply to n -- 1 and n, 2 and n/2, etc. -- except for the case where n is a perfect square, where the square root is a divisor that is not paired with any other.
So the result will be n to the power of half the number of divisors (regardless of whether or not n is a square).
To compute this, find the prime factorization using your list of primes. That is, find the power of 2 that divides n, then the power of 3, etc. To do this, take out all the 2s, then the 3s, etc.
The number you are taking the factors out of will be getting smaller, so you can do the square root test on the smaller intermediate numbers to see if you need to continue up the list of primes. To gain some speed, test p*p <= m, rather than p <= sqrt(m)
Once you have the prime factorization, it is easy to find the number of divisors. For example, suppose the factorization is 2^i * 3^j * 7^k. Then, since each divisor uses the same prime factors, with exponents less than or equal to those in n including the possibility of 0, the number of divisors is (i+1)(j+1)(k+1).
E.g., 72 = 2^3 * 3^2, so the number of divisors is 4*3 = 12, and their product is 72^6 = 139,314,069,504.
By using math, the algorithm can become much better than O(n). But it is hard to estimate your speed gains ahead of time because of the relatively small size of the n in the input.
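A sketch of the factorization approach described above (an addition; primes is assumed to be a precomputed list of primes up to the square root of the largest n):
from math import sqrt

def product_of_divisors(n, primes):
    m, d = n, 1                      # d accumulates the divisor count
    for p in primes:
        if p * p > m:
            break
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        d *= e + 1
    if m > 1:                        # one leftover prime factor > sqrt(n)
        d *= 2
    if d % 2 == 0:
        return n ** (d // 2)         # divisors pair up to a product of n
    # d odd means n is a perfect square; sqrt(n) is the unpaired divisor
    return n ** (d // 2) * int(round(sqrt(n)))
For n = 72 with primes = [2, 3, 5, ...] this returns 139314069504, matching the worked example above.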
A:
You could eliminate the if statement in the loop by only looping to less than the square root, and check for square root integer-ness outside the loop.
It is a rather strange question you pose. I have a hard time imagining a use for it, other than it possibly being an assignment in a course. My first thought was to pre-compute a list of primes and only test against those, but I assume you are quite deliberately counting non-prime factors? I.e., if the number has factors 2 and 3, you are also counting 6.
If you do use a table of pre-computed primes, you would then have to also subsequently include all possible combinations of primes in your result, which gets more complex.
C is really a great language for that sort of thing, because even suboptimal algorithms run really fast.
A:
Okay, I think this is close to the optimal algorithm. It produces the product_of_divisors for each number in range(500000).
import math
def number_of_divisors(maxval=500001):
""" Example: the number of divisors of 12 is 6: 1, 2, 3, 4, 6, 12.
Given a prime factoring of n, the number of divisors of n is the
product of each factor's multiplicity plus one (mpo in my variables).
This function works like the Sieve of Eratosthenes, but marks each
composite n with the multiplicity (plus one) of each prime factor. """
numdivs = [1] * maxval # multiplicative identity
currmpo = [0] * maxval
# standard logic for 2 < p < sqrt(maxval)
for p in range(2, int(math.sqrt(maxval))):
if numdivs[p] == 1: # if p is prime
for exp in range(2,50): # assume maxval < 2^50
pexp = p ** exp
if pexp > maxval:
break
exppo = exp + 1
for comp in range(pexp, maxval, pexp):
currmpo[comp] = exppo
for comp in range(p, maxval, p):
thismpo = currmpo[comp] or 2
numdivs[comp] *= thismpo
currmpo[comp] = 0 # reset currmpo array in place
# abbreviated logic for p > sqrt(maxval)
for p in range(int(math.sqrt(maxval)), maxval):
if numdivs[p] == 1: # if p is prime
for comp in range(p, maxval, p):
numdivs[comp] *= 2
return numdivs
# this initialization times at 7s on my machine
NUMDIV = number_of_divisors()
def product_of_divisors(n):
if NUMDIV[n] % 2 == 0:
# each pair of divisors has product equal to n, for example
# 1*12 * 2*6 * 3*4 = 12**3
return n ** (NUMDIV[n] / 2)
else:
# perfect squares have their square root as an unmatched divisor
return n ** (NUMDIV[n] / 2) * int(math.sqrt(n))
# this loop times at 13s on my machine
for n in range(500000):
a = product_of_divisors(n)
On my very slow machine, it takes 7s to compute number_of_divisors for each number, then 13s to compute product_of_divisors for each. Of course it can be sped up by translating it into C. (@someone with a fast machine: how long does it take on your machine?)
|
Efficient python code for printing the product of divisors of a number
|
I am trying to solve a problem involving printing the product of all divisors of a given number. The number of test cases is a number 1 <= t <= 300000 , and the number itself can range from 1 <= n <= 500000
I wrote the following code, but it always exceeds the time limit of 2 seconds. Are there any ways to speed up the code ?
from math import sqrt
def divisorsProduct(n):
ProductOfDivisors=1
for i in range(2,int(round(sqrt(n)))+1):
if n%i==0:
ProductOfDivisors*=i
if n/i != i:
ProductOfDivisors*=(n/i)
if ProductOfDivisors <= 9999:
print ProductOfDivisors
else:
result = str(ProductOfDivisors)
print result[len(result)-4:]
T = int(raw_input())
for i in range(1,T+1):
num = int(raw_input())
divisorsProduct(num)
Thank You.
|
[
"You need to clarify by what you mean by \"product of divisors.\" The code posted in the question doesn't work for any definition yet. This sounds like a homework question. If it is, then perhaps your instructor was expecting you to think outside the code to meet the time goals.\nIf you mean the product of unique prime divisors, e.g., 72 gives 2*3 = 6, then having a list of primes is the way to go. Just run through the list up to the square root of the number, multiplying present primes into the result. There are not that many, so you could even hard code them into your program.\nIf you mean the product of all the divisors, prime or not, then it is helpful to think of what the divisors are. You can make serious speed gains over the brute force method suggested in the other answers and yours. I suspect this is what your instructor intended.\nIf the divisors are ordered in a list, then they occur in pairs that multiply to n -- 1 and n, 2 and n/2, etc. -- except for the case where n is a perfect square, where the square root is a divisor that is not paired with any other.\nSo the result will be n to the power of half the number of divisors, (regardless of whether or not n is a square). \nTo compute this, find the prime factorization using your list of primes. That is, find the power of 2 that divides n, then the power of 3, etc. To do this, take out all the 2s, then the 3s, etc. \nThe number you are taking the factors out of will be getting smaller, so you can do the square root test on the smaller intermediate numbers to see if you need to continue up the list of primes. To gain some speed, test p*p <= m, rather than p <= sqrt(m) \nOnce you have the prime factorization, it is easy to find the number of divisors. For example, suppose the factorization is 2^i * 3^j * 7^k. Then, since each divisor uses the same prime factors, with exponents less than or equal to those in n including the possibility of 0, the number of divisors is (i+1)(j+1)(k+1).\nE.g., 72 = 2^3 * 3^2, so the number of divisors is 4*3 = 12, and their product is 72^6 = 139,314,069,504.\nBy using math, the algorithm can become much better than O(n). But it is hard to estimate your speed gains ahead of time because of the relatively small size of the n in the input.\n",
"You could eliminate the if statement in the loop by only looping to less than the square root, and check for square root integer-ness outside the loop.\n\nIt is a rather strange question you pose. I have a hard time imagine a use for it, other than it possibly being an assignment in a course. My first thought was to pre-compute a list of primes and only test against those, but I assume you are quite deliberately counting non-prime factors? I.e., if the number has factors 2 and 3, you are also counting 6.\n\nIf you do use a table of pre-computed primes, you would then have to also subsequently include all possible combinations of primes in your result, which gets more complex.\n\nC is really a great language for that sort of thing, because even suboptimal algorithms run really fast.\n",
"Okay, I think this is close to the optimal algorithm. It produces the product_of_divisors for each number in range(500000).\nimport math\n\ndef number_of_divisors(maxval=500001):\n \"\"\" Example: the number of divisors of 12 is 6: 1, 2, 3, 4, 6, 12.\n Given a prime factoring of n, the number of divisors of n is the\n product of each factor's multiplicity plus one (mpo in my variables).\n\n This function works like the Sieve of Eratosthenes, but marks each\n composite n with the multiplicity (plus one) of each prime factor. \"\"\"\n numdivs = [1] * maxval # multiplicative identity\n currmpo = [0] * maxval\n\n # standard logic for 2 < p < sqrt(maxval)\n for p in range(2, int(math.sqrt(maxval))):\n if numdivs[p] == 1: # if p is prime\n for exp in range(2,50): # assume maxval < 2^50\n pexp = p ** exp\n if pexp > maxval:\n break\n exppo = exp + 1\n for comp in range(pexp, maxval, pexp):\n currmpo[comp] = exppo\n for comp in range(p, maxval, p):\n thismpo = currmpo[comp] or 2\n numdivs[comp] *= thismpo\n currmpo[comp] = 0 # reset currmpo array in place\n\n # abbreviated logic for p > sqrt(maxval)\n for p in range(int(math.sqrt(maxval)), maxval):\n if numdivs[p] == 1: # if p is prime\n for comp in range(p, maxval, p):\n numdivs[comp] *= 2\n\n return numdivs\n\n# this initialization times at 7s on my machine\nNUMDIV = number_of_divisors()\n\ndef product_of_divisors(n):\n if NUMDIV[n] % 2 == 0:\n # each pair of divisors has product equal to n, for example\n # 1*12 * 2*6 * 3*4 = 12**3\n return n ** (NUMDIV[n] / 2)\n else:\n # perfect squares have their square root as an unmatched divisor\n return n ** (NUMDIV[n] / 2) * int(math.sqrt(n))\n\n# this loop times at 13s on my machine\nfor n in range(500000):\n a = product_of_divisors(n)\n\nOn my very slow machine, it takes 7s to compute the numberofdivisors for each number, then 13s to compute the productofdivisors for each. Of course it can be sped up by translating it into C. (@someone with a fast machine: how long does it take on your machine?)\n"
] |
[
6,
1,
1
] |
[] |
[] |
[
"math",
"python"
] |
stackoverflow_0000942198_math_python.txt
|
Q:
How to build a fully customizable application (aka database) without losing performance/good design?
I'm at the beginning of a complete restyle of my web application, and I have some doubts about a good database design that is reliable, performs well in queries, and at the same time is fully customizable by the users (users won't customize the database structure, but the functionality of the application).
So, my current situation is, for example, a simple users table:
id | name | surname | nickname | email | phone
1 | foo | bar | foobar | [email protected] | 99999
That's it.
But let's say that one of my customers would like to have two email addresses, or two phone numbers, for one specific user.
Until now, I used to solve that problem by simply adding columns to the users table:
id | name | surname | nickname | email | phone | email_two | phone_two
1 | foo | bar | foobar | [email protected] | 99999 | [email protected] | 999998
But I can't work that way in the new version of the application... I'd like to be drinking a mojito after release, not taking customers' calls to edit the structure :)
So, I thought of a solution where people can define custom fields, simply with another table:
id | table_refer | type_field | id_object | value
1 | users | phone | 1 | 999998
2 | users | email | 1 | [email protected]
keeping the users table unaltered.
But this approach has two problems:
As far as I know, there is no way to use foreign keys like that, so that if I delete one user, the foreign key automatically deletes in cascade all the rows in the second table that have table_refer = 'users' and id_object = users.id. Sure, I could use some trigger functions, but I'd lose some of the reliability.
When I need to query the database to retrieve the users that match '[email protected]', I'll have to check the... ahem... option table as well, and that will make my code complex, less reliable and messy with many joins, assuming that the users table won't be the only one 'extended' by the option table. The outlook seems grim.
My goal is to let my customers add as many custom fields as they need, for almost every object in the application (users, items, invoices, print views, photos, news, etc.), assuming that most of those tables would be partitioned (split in two tables, with a third table and an inheritance hierarchy).
Do you think my approach can be good, do you know a better one, or am I making a big mistake?
Please, every suggestion is gold now!
EDIT:
What I'm looking for is similar to the 'custom fields' on articles in WordPress blogs.
My goal is to let the user define the new fields that he needs. For example, if my users table is the one above and a customer needs a field that I haven't anticipated, like a website URL, he must be able to add it dynamically, without editing the database structure, only the data.
I think that the second table (maybe one for each object) can be a good solution, but I am still waiting for better ways!
A:
As I said in my Answer to a similar question, "Database Design is Hard." You are going to have to make the decision about which is better for you, normalizing the tables and bringing phone numbers and e-mail addresses into their own tables, with the associated JOIN-ing to retrieve the data, and the extra effort of referential integrity, or having some number n e-mail and phone fields in your table, and the "data-messiness" that that entails.
Database design is always a series of tradeoffs. You need to look at all angles, maybe bodge up some prototypes and do some profiling, etc. There is no "One True Answer™".
A:
Your proposed model is composed of two database patterns: an entity-attribute-value table and a polymorphic association.
Entity-attribute-value has some pretty big issues both in the performance and data integrity department. If you don't need to access the additional attributes in queries, then you can serialize the attribute value mapping to a text field in some standard serialization (JSON, XML). Not "pure" from the database design standpoint, but possibly a good pragmatic choice, given that you are aware of the tradeoffs. On postgres you can also use the hstore contrib module to store key-value pairs to make it usable in queries, if the limitation of string only values is acceptable.
For polymorphic association, you can get referential integrity by introducing an association table:
users attrib_assocs custom_attribs
----- ------------- --------------
attrib_assoc_id --> id <-- assoc_id
... entity_type field
value
To get slightly more integrity, also add the entity_type to the primary key and corresponding foreign keys and a check constraint on users table that the entity_type equals 'user'.
A:
If you do it this way, all of your queries will have to join on and use the table_refer column, which will kill performance and make simple queries hard, and hard queries very difficult.
If you want multiple e-mails, split the email out into another table so you can have many rows.
A:
You could design your application to request additional data (like the list of emails for a user) on demand, using AJAX etc. In these highly customizable and rich applications you usually have no need to display all the data, only a single category.
To store custom records you can create a table field_types(id, name, datatype) and a table custom_fields(user_id, field_type_id, value), and then select something like this:
SELECT * FROM custom_fields WHERE user_id=XXX AND field_type_id IN (X,Y,Z).
So now you can retrieve the data in one fast query, split the fields into categories, and parse their values by their respective datatypes in your code, without performance issues.
A:
I'm not sure about the specifics of postgresql, but if you want highly customisable data structures in a DB that you don't really want to search on, the serializing the data to a LOB is an option.
In fact this is the way ASP.NET works by default with Personalization, which is per user settings.
I don't recommend this approach if you wish to search the fields for any reason.
|
How to build a fully customizable application (aka database) without losing performance/good design?
|
I'm at the beginning of a complete restyle of my web application, and I have some doubts about a good database design that is reliable, performs well in queries, and at the same time is fully customizable by the users (users won't customize the database structure, but the functionality of the application).
So, my current situation is, for example, a simple users table:
id | name | surname | nickname | email | phone
1 | foo | bar | foobar | [email protected] | 99999
That's it.
But let's say that one of my customers would like to have two email addresses, or two phone numbers, for one specific user.
Until now, I used to solve that problem by simply adding columns to the users table:
id | name | surname | nickname | email | phone | email_two | phone_two
1 | foo | bar | foobar | [email protected] | 99999 | [email protected] | 999998
But I can't work that way in the new version of the application... I'd like to be drinking a mojito after release, not taking customers' calls to edit the structure :)
So, I thought of a solution where people can define custom fields, simply with another table:
id | table_refer | type_field | id_object | value
1 | users | phone | 1 | 999998
2 | users | email | 1 | [email protected]
keeping the users table unaltered.
But this approach has two problems:
As far as I know, there is no way to use foreign keys like that, so that if I delete one user, the foreign key automatically deletes in cascade all the rows in the second table that have table_refer = 'users' and id_object = users.id. Sure, I could use some trigger functions, but I'd lose some of the reliability.
When I need to query the database to retrieve the users that match '[email protected]', I'll have to check the... ahem... option table as well, and that will make my code complex, less reliable and messy with many joins, assuming that the users table won't be the only one 'extended' by the option table. The outlook seems grim.
My goal is to let my customers add as many custom fields as they need, for almost every object in the application (users, items, invoices, print views, photos, news, etc.), assuming that most of those tables would be partitioned (split in two tables, with a third table and an inheritance hierarchy).
Do you think my approach can be good, do you know a better one, or am I making a big mistake?
Please, every suggestion is gold now!
EDIT:
What I'm looking for is similar to the 'custom fields' on articles in WordPress blogs.
My goal is to let the user define the new fields that he needs. For example, if my users table is the one above and a customer needs a field that I haven't anticipated, like a website URL, he must be able to add it dynamically, without editing the database structure, only the data.
I think that the second table (maybe one for each object) can be a good solution, but I am still waiting for better ways!
|
[
"As I said in my Answer to a similar question, \"Database Design is Hard.\" You are going to have to make the decision about which is better for you, normalizing the tables and bringing phone numbers and e-mail addresses into their own tables, with the associated JOIN-ing to retrieve the data, and the extra effort of referential integrity, or having some number n e-mail and phone fields in your table, and the \"data-messiness\" that that entails. \nDatabase design is always a series of tradeoffs. You need to look at all angles, maybe bodge up some prototypes and do some profiling, etc. There is no \"One True Answer™\".\n",
"Your proposed model is composed of two database patterns: an entity-attribute-value table and a polymorphic association.\nEntity-attribute-value has some pretty big issues both in the performance and data integrity department. If you don't need to access the additional attributes in queries, then you can serialize the attribute value mapping to a text field in some standard serialization (JSON, XML). Not \"pure\" from the database design standpoint, but possibly a good pragmatic choice, given that you are aware of the tradeoffs. On postgres you can also use the hstore contrib module to store key-value pairs to make it usable in queries, if the limitation of string only values is acceptable.\nFor polymorphic association, you can get referential integrity by introducing an association table:\nusers attrib_assocs custom_attribs\n----- ------------- --------------\nattrib_assoc_id --> id <-- assoc_id\n... entity_type field\n value\n\nTo get slightly more integrity, also add the entity_type to the primary key and corresponding foreign keys and a check constraint on users table that the entity_type equals 'user'.\n",
"if you do it this way all of your queries will have to join to and use table_refer column, which will kill performance, and make simple queries hard, and hard queries very difficult.\nif your want multiple e-mails, split the email out to another table so you can have many rows.\n",
"You could design your application to request additional data (like emails list for the user) on demand, using AJAX etc. In those highly customizable and rich applications usually you have no need to display all the data - only a single category. \nTo store custom records you can create table field_types(id, name, datatype) and a table custom_fields(user_id, field_type_id, value), and then select smth like this: \nSELECT * FROM custom_fields WHERE user_id=XXX AND field_type_id IN (X,Y,Z).\nso now you can retrieve data in 1 fast query, split fields to categories and parse their values by their respective datatypes with your code without performance issues.\n",
"I'm not sure about the specifics of postgresql, but if you want highly customisable data structures in a DB that you don't really want to search on, the serializing the data to a LOB is an option.\nIn fact this is the way ASP.NET works by default with Personalization, which is per user settings.\nI don't recommend this approach if you wish to search the fields for any reason.\n"
] |
[
4,
1,
0,
0,
0
] |
[] |
[] |
[
"database_design",
"performance",
"php",
"postgresql",
"python"
] |
stackoverflow_0000951387_database_design_performance_php_postgresql_python.txt
|
Q:
How do you use FCKEditor's image upload and browser with mod-wsgi?
I am using FCKEditor within a Django app served by Apache/mod-wsgi. I don't want to install PHP just for FCKEditor, and I see FCKEditor offers image uploading and image browsing through Python. I just haven't found good instructions on how to set this all up.
So currently Django is running through a wsgi interface using this setup:
import os, sys
DIRNAME = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-3])
sys.path.append(DIRNAME)
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
In fckeditor in the editor->filemanager->connectors->py directory there is a file called wsgi.py:
from connector import FCKeditorConnector
from upload import FCKeditorQuickUpload
import cgitb
from cStringIO import StringIO
# Running from a WSGI-capable server (recommended)
def App(environ, start_response):
"WSGI entry point. Run the connector"
if environ['SCRIPT_NAME'].endswith("connector.py"):
conn = FCKeditorConnector(environ)
elif environ['SCRIPT_NAME'].endswith("upload.py"):
conn = FCKeditorQuickUpload(environ)
else:
start_response ("200 Ok", [('Content-Type','text/html')])
yield "Unknown page requested: "
yield environ['SCRIPT_NAME']
return
try:
# run the connector
data = conn.doResponse()
# Start WSGI response:
start_response ("200 Ok", conn.headers)
# Send response text
yield data
except:
start_response("500 Internal Server Error",[("Content-type","text/html")])
file = StringIO()
cgitb.Hook(file = file).handle()
yield file.getvalue()
I need these two things to work together, either by modifying my Django wsgi file to serve the FCKEditor parts correctly, or by making Apache serve both Django and FCKEditor correctly on a single domain.
A:
This describes how to embed the FCK editor and enable image uploading.
First you need to edit fckconfig.js to change the image upload
URL to point to some URL inside your server.
FCKConfig.ImageUploadURL = "/myapp/root/imageUploader";
This will point to the server relative URL to receive the upload.
FCK will send the uploaded file to that handler using the CGI variable
name "NewFile" encoded using multipart/form-data. Unfortunately you
will have to implement /myapp/root/imageUploader, because I don't think
the FCK distribution stuff can be easily adapted to other frameworks.
The imageUploader should extract the NewFile and store it
somewhere on the server.
The response generated by /myapp/root/imageUploader should emulate
the HTML constructed in /editor/.../fckoutput.py.
Something like this (whiff template format)
{{env
whiff.content_type: "text/html",
whiff.headers: [
["Expires","Mon, 26 Jul 1997 05:00:00 GMT"],
["Cache-Control","no-store, no-cache, must-revalidate"],
["Cache-Control","post-check=0, pre-check=0"],
["Pragma","no-cache"]
]
/}}
<script>
//alert("!! RESPONSE RECIEVED");
errorNumber = 0;
fileUrl = "fileurl.png";
fileName = "filename.png";
customMsg = "";
window.parent.OnUploadCompleted(errorNumber, fileUrl, fileName, customMsg);
</script>
The {{env ...}} stuff at the top indicate the content type and
recommended HTTP headers to send. The fileUrl should be the Url to
use to find the image on the server.
Here are the basic steps to get the html fragment which
generates the FCK editor widget. The only tricky part is you have to put the
right client identification into os.environ -- it's ugly
but that's the way the FCK library works right now (I filed a bug
report).
import fckeditor # you must have the fck editor python support installed to use this module
import os
inputName = "myInputName" # the name to use for the input element in the form
basePath = "/server/relative/path/to/fck/installation/" # the location of FCK static files
if basePath[-1:]!="/":
basePath+="/" # basepath must end in slash
oFCKeditor = fckeditor.FCKeditor(inputName)
oFCKeditor.BasePath = basePath
oFCKeditor.Height = 300 # the height in pixels of the editor
oFCKeditor.Value = "<h1>initial html to be editted</h1>"
os.environ["HTTP_USER_AGENT"] = "Mozilla/5.0 (Macintosh; U;..." # or whatever
# there must be some way to figure out the user agent in Django right?
htmlOut = oFCKeditor.Create()
# insert htmlOut into your page where you want the editor to appear
return htmlOut
The above is untested, but it's based on the below which is tested.
Here is how to use FCK editor using mod-wsgi:
Technically it uses a couple features of WHIFF (see
WHIFF.sourceforge.net),
-- in fact it is part of the WHIFF distribution --
but
the WHIFF features are easily removed.
I don't know how to install it in Django, but if
Django allows wsgi apps to be installed easily, you
should be able to do it.
NOTE: FCK allows the client to inject pretty much anything
into HTML pages -- you will want to filter the returned value for evil
attacks.
(eg: see whiff.middleware.TestSafeHTML middleware for
an example of how to do this).
"""
Introduce an FCK editor input element. (requires FCKeditor http://www.fckeditor.net/).
Note: this implementation can generate values containing code injection attacks if you
don't filter the output generated for evil tags and values.
"""
import fckeditor # you must have the fck editor python support installed to use this module
from whiff.middleware import misc
import os
class FCKInput(misc.utility):
def __init__(self,
inputName, # name for input element
basePath, # server relative URL root for FCK HTTP install
value = ""): # initial value for input
self.inputName = inputName
self.basePath = basePath
self.value = value
def __call__(self, env, start_response):
inputName = self.param_value(self.inputName, env).strip()
basePath = self.param_value(self.basePath, env).strip()
if basePath[-1:]!="/":
basePath+="/"
value = self.param_value(self.value, env)
oFCKeditor = fckeditor.FCKeditor(inputName)
oFCKeditor.BasePath = basePath
oFCKeditor.Height = 300 # this should be a require!
oFCKeditor.Value = value
# hack around a bug in fck python library: need to put the user agent in os.environ
# XXX this hack is not safe for multi threaded servers (theoretically)... need to lock on os.env
os_environ = os.environ
new_os_env = os_environ.copy()
new_os_env.update(env)
try:
os.environ = new_os_env
htmlOut = oFCKeditor.Create()
finally:
# restore the old os.environ
os.environ = os_environ
start_response("200 OK", [('Content-Type', 'text/html')])
return [htmlOut]
__middleware__ = FCKInput
def test():
env = {
"HTTP_USER_AGENT":
"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14"
}
f = FCKInput("INPUTNAME", "/MY/BASE/PATH", "THE HTML VALUE TO START WITH")
r = f(env, misc.ignore)
print "test result"
print "".join(list(r))
if __name__=="__main__":
test()
See this working, for example, at
http://aaron.oirt.rutgers.edu/myapp/docs/W1500.whyIsWhiffCool.
btw: thanks. I needed to look into this anyway.
A:
Edit: Ultimately I was unhappy with this solution as well, so I made a Django app that takes care of the file uploads and browsing.
This is the solution I finally hacked together after reading the fckeditor code:
import os, sys
def fck_handler(environ, start_response):
path = environ['PATH_INFO']
if path.endswith(('upload.py', 'connector.py')):
sys.path.append('/#correct_path_to#/fckeditor/editor/filemanager/connectors/py/')
if path.endswith('upload.py'):
from upload import FCKeditorQuickUpload
conn = FCKeditorQuickUpload(environ)
else:
from connector import FCKeditorConnector
conn = FCKeditorConnector(environ)
try:
data = conn.doResponse()
start_response('200 Ok', conn.headers)
return data
except:
start_response("500 Internal Server Error",[("Content-type","text/html")])
return "There was an error"
else:
sys.path.append('/path_to_your_django_site/')
os.environ['DJANGO_SETTINGS_MODULE'] = 'your_django_site.settings'
import django.core.handlers.wsgi
handler = django.core.handlers.wsgi.WSGIHandler()
return handler(environ, start_response)
application = fck_handler
|
How do you use FCKEditor's image upload and browser with mod-wsgi?
|
I am using FCKEditor within a Django app served by Apache/mod-wsgi. I don't want to install PHP just for FCKEditor, and I see FCKEditor offers image uploading and image browsing through Python. I just haven't found good instructions on how to set this all up.
So currently Django is running through a wsgi interface using this setup:
import os, sys
DIRNAME = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-3])
sys.path.append(DIRNAME)
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
In fckeditor in the editor->filemanager->connectors->py directory there is a file called wsgi.py:
from connector import FCKeditorConnector
from upload import FCKeditorQuickUpload
import cgitb
from cStringIO import StringIO
# Running from a WSGI-capable server (recommended)
def App(environ, start_response):
"WSGI entry point. Run the connector"
if environ['SCRIPT_NAME'].endswith("connector.py"):
conn = FCKeditorConnector(environ)
elif environ['SCRIPT_NAME'].endswith("upload.py"):
conn = FCKeditorQuickUpload(environ)
else:
start_response ("200 Ok", [('Content-Type','text/html')])
yield "Unknown page requested: "
yield environ['SCRIPT_NAME']
return
try:
# run the connector
data = conn.doResponse()
# Start WSGI response:
start_response ("200 Ok", conn.headers)
# Send response text
yield data
except:
start_response("500 Internal Server Error",[("Content-type","text/html")])
file = StringIO()
cgitb.Hook(file = file).handle()
yield file.getvalue()
I need these two things to work together, either by modifying my Django wsgi file to serve the FCKEditor parts correctly, or by making Apache serve both Django and FCKEditor correctly on a single domain.
|
[
"This describes how to embed the FCK editor and enable image uploading.\nFirst you need to edit fckconfig.js to change the image upload\nURL to point to some URL inside your server.\nFCKConfig.ImageUploadURL = \"/myapp/root/imageUploader\";\n\nThis will point to the server relative URL to receive the upload.\nFCK will send the uploaded file to that handler using the CGI variable\nname \"NewFile\" encoded using multipart/form-data. Unfortunately you\nwill have to implement /myapp/root/imageUploader, because I don't think\nthe FCK distribution stuff can be easily adapted to other frameworks.\nThe imageUploader should extract the NewFile and store it\nsomewhere on the server.\nThe response generated by /myapp/root/imageUploader should emulate\nthe HTML constructed in /editor/.../fckoutput.py.\nSomething like this (whiff template format)\n{{env\n whiff.content_type: \"text/html\",\n whiff.headers: [\n [\"Expires\",\"Mon, 26 Jul 1997 05:00:00 GMT\"],\n [\"Cache-Control\",\"no-store, no-cache, must-revalidate\"],\n [\"Cache-Control\",\"post-check=0, pre-check=0\"],\n [\"Pragma\",\"no-cache\"]\n ]\n/}}\n\n<script>\n//alert(\"!! RESPONSE RECIEVED\");\nerrorNumber = 0;\nfileUrl = \"fileurl.png\";\nfileName = \"filename.png\";\ncustomMsg = \"\";\nwindow.parent.OnUploadCompleted(errorNumber, fileUrl, fileName, customMsg);\n</script>\n\nThe {{env ...}} stuff at the top indicate the content type and\nrecommended HTTP headers to send. The fileUrl should be the Url to\nuse to find the image on the server.\nHere are the basic steps to get the html fragment which\ngenerates the FCK editor widget. The only tricky part is you have to put the \nright client indentification into the os.environ -- it's ugly\nbut that's the way the FCK library works right now (I filed a bug\nreport).\nimport fckeditor # you must have the fck editor python support installed to use this module\nimport os\n\ninputName = \"myInputName\" # the name to use for the input element in the form\nbasePath = \"/server/relative/path/to/fck/installation/\" # the location of FCK static files\nif basePath[-1:]!=\"/\":\n basePath+=\"/\" # basepath must end in slash\noFCKeditor = fckeditor.FCKeditor(inputName)\noFCKeditor.BasePath = basePath\noFCKeditor.Height = 300 # the height in pixels of the editor\noFCKeditor.Value = \"<h1>initial html to be editted</h1>\"\nos.environ[\"HTTP_USER_AGENT\"] = \"Mozilla/5.0 (Macintosh; U;...\" # or whatever\n# there must be some way to figure out the user agent in Django right?\nhtmlOut = oFCKeditor.Create()\n# insert htmlOut into your page where you want the editor to appear\nreturn htmlOut\n\nThe above is untested, but it's based on the below which is tested.\nHere is how to use FCK editor using mod-wsgi:\nTechnically it uses a couple features of WHIFF (see\nWHIFF.sourceforge.net),\n-- in fact it is part of the WHIFF distribution --\n but\nthe WHIFF features are easily removed.\n\nI don't know how to install it in Django, but if\nDjango allows wsgi apps to be installed easily, you\nshould be able to do it.\n\nNOTE: FCK allows the client to inject pretty much anything\ninto HTML pages -- you will want to filter the returned value for evil\nattacks.\n(eg: see whiff.middleware.TestSafeHTML middleware for\nan example of how to do this).\n \n\"\"\"\nIntroduce an FCK editor input element. 
(requires FCKeditor http://www.fckeditor.net/).\n\nNote: this implementation can generate values containing code injection attacks if you\n don't filter the output generated for evil tags and values.\n\"\"\"\n\nimport fckeditor # you must have the fck editor python support installed to use this module\nfrom whiff.middleware import misc\nimport os\n\nclass FCKInput(misc.utility):\n def __init__(self,\n inputName, # name for input element\n basePath, # server relative URL root for FCK HTTP install\n value = \"\"): # initial value for input\n self.inputName = inputName\n self.basePath = basePath\n self.value = value\n def __call__(self, env, start_response):\n inputName = self.param_value(self.inputName, env).strip()\n basePath = self.param_value(self.basePath, env).strip()\n if basePath[-1:]!=\"/\":\n basePath+=\"/\"\n value = self.param_value(self.value, env)\n oFCKeditor = fckeditor.FCKeditor(inputName)\n oFCKeditor.BasePath = basePath\n oFCKeditor.Height = 300 # this should be a require!\n oFCKeditor.Value = value\n # hack around a bug in fck python library: need to put the user agent in os.environ\n # XXX this hack is not safe for multi threaded servers (theoretically)... need to lock on os.env\n os_environ = os.environ\n new_os_env = os_environ.copy()\n new_os_env.update(env)\n try:\n os.environ = new_os_env\n htmlOut = oFCKeditor.Create()\n finally:\n # restore the old os.environ\n os.environ = os_environ\n start_response(\"200 OK\", [('Content-Type', 'text/html')])\n return [htmlOut]\n\n__middleware__ = FCKInput\n\ndef test():\n env = {\n \"HTTP_USER_AGENT\":\n \"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14\"\n }\n f = FCKInput(\"INPUTNAME\", \"/MY/BASE/PATH\", \"THE HTML VALUE TO START WITH\")\n r = f(env, misc.ignore)\n print \"test result\"\n print \"\".join(list(r))\n\nif __name__==\"__main__\":\n test()\n\nSee this working, for example, at\n\nhttp://aaron.oirt.rutgers.edu/myapp/docs/W1500.whyIsWhiffCool.\nbtw: thanks. I needed to look into this anyway.\n",
"Edit: Ultimately I was unhappy with this solution also so I made a Django app that takes care of the file uploads and browsing.\nThis is the solution I finally hacked together after reading the fckeditor code:\nimport os, sys\n\ndef fck_handler(environ, start_response):\n path = environ['PATH_INFO']\n if path.endswith(('upload.py', 'connector.py')):\n sys.path.append('/#correct_path_to#/fckeditor/editor/filemanager/connectors/py/')\n if path.endswith('upload.py'):\n from upload import FCKeditorQuickUpload\n conn = FCKeditorQuickUpload(environ)\n else:\n from connector import FCKeditorConnector\n conn = FCKeditorConnector(environ)\n try:\n data = conn.doResponse()\n start_response('200 Ok', conn.headers)\n return data\n except:\n start_response(\"500 Internal Server Error\",[(\"Content-type\",\"text/html\")])\n return \"There was an error\"\n else:\n sys.path.append('/path_to_your_django_site/')\n os.environ['DJANGO_SETTINGS_MODULE'] = 'your_django_site.settings'\n import django.core.handlers.wsgi\n handler = django.core.handlers.wsgi.WSGIHandler()\n return handler(environ, start_response)\n\napplication = fck_handler\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"fckeditor",
"mod_wsgi",
"python"
] |
stackoverflow_0000803613_django_fckeditor_mod_wsgi_python.txt
|
Q:
sudoku obfuscated python -> perl translation
Anybody care to translate this into obfuscated perl? It's written in Python taken from: here
def r(a):i=a.find('0');~i or exit(a);[m
in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for
j in range(81)]or r(a[:i]+m+a[i+1:])for m in'%d'%5**18]
from sys import*;r(argv[1])
I realize it's just for fun :)
A:
sub r{($a=shift)=~/0/g?my$i=pos:die$a;T:for$m(1..9){($i-$_)%9*(int($i/9)^int($_/9))*(int($i/27)^int($_/27)|int($i%9/3)^int($_%9/3))||$a=~/^.{$_}$m/&&next T,for 0..80;substr($a,$i,1)=$m;r($a)}}r@ARGV
The braindead translation. Longer, since Python 2's / is integer division while Perl's is floating-point.
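A quick illustration of that division point (Python 2 semantics):
# Python 2's / floors when both operands are ints; Perl's / is always floating-point.
print 5 / 2     # 2   (integer division in Python 2)
print 5.0 / 2   # 2.5 (float division)
# Hence the explicit int(...) wrappers around each division in the Perl version.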
A:
There already are a few Sudoku solvers written in Obfuscated Perl, do you really want another (possibly less efficient) one?
If not...
De-obfuscate.
Rewrite in Perl.
Obfuscate.
|
sudoku obfuscated python -> perl translation
|
Anybody care to translate this into obfuscated perl? It's written in Python taken from: here
def r(a):i=a.find('0');~i or exit(a);[m
in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for
j in range(81)]or r(a[:i]+m+a[i+1:])for m in'%d'%5**18]
from sys import*;r(argv[1])
I realize it's just for fun :)
|
[
"sub r{($a=shift)=~/0/g?my$i=pos:die$a;T:for$m(1..9){($i-$_)%9*(int($i/9)^int($_/9))*(int($i/27)^int($_/27)|int($i%9/3)^int($_%9/3))||$a=~/^.{$_}$m/&&next T,for 0..80;substr($a,$i,1)=$m;r($a)}}r@ARGV\n\nThe braindead translation. Longer, since Python 2's / is integer division while Perl's is floating-point.\n",
"There already are a few Sudoku solvers written in Obfuscated Perl, do you really want another (possibly less efficient) one?\nIf not...\n\nDe-obfuscate.\nRewrite in Perl.\nObfuscate.\n\n"
] |
[
3,
2
] |
[] |
[] |
[
"perl",
"python",
"translate"
] |
stackoverflow_0000951666_perl_python_translate.txt
|
Q:
caching issues in MySQL response with MySQLdb in Django
I use MySQL with MySQLdb module in Python, in Django.
I'm running in autocommit mode in this case (and Django's transaction.is_managed() actually returns False).
I have several processes interacting with the database.
One process fetches all Task models with Task.objects.all()
Then another process adds a Task model (I can see it in a database management application).
If I call Task.objects.all() on the first process, I don't see anything. But if I call connection._commit() and then Task.objects.all(), I see the new Task.
My question is: Is there any caching involved at the connection level? And is this normal behaviour (it does not seem normal to me)?
A:
This certainly seems autocommit/table locking - related.
If mysqldb implements the dbapi2 spec it will probably have a connection running as one single continuous transaction. When you say: 'running in autocommit mode': do you mean MySQL itself or the mysqldb module? Or Django?
Not committing intermittently perfectly explains the behaviour you are getting:
i) a connection implemented as one single transaction in mysqldb (by default, probably)
ii) not opening/closing connections only when needed but (re)using one (or more) persistent database connections (my guess, could be Django-architecture-inherited).
iii) your selects ('reads') cause a 'simple read lock' on a table (which means other connections can still 'read' this table but connections wanting to 'write data' can't (immediately) because this lock prevents them from getting an 'exclusive lock' (needed 'for writing') on this table. The writing is thus postponed indefinitely (until it can get a (short) exclusive lock on the table for writing - when you close the connection or manually commit).
I'd do the following in your case:
find out which table locks are on your database during the scenario above
read about Django and transactions here. A quick skim suggests using standard Django functionality implicitly causes commits. This means sending handcrafted SQL may not (insert, update...).
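As a minimal sketch of the workaround the question already stumbled on (Task is the question's model; connection._commit() is the private hook the question used — ending the long-lived implicit transaction lets the next SELECT see rows committed by other processes under InnoDB's default REPEATABLE READ):
from django.db import connection

def fresh_tasks():
    # End the current transaction so the next query takes a fresh snapshot.
    connection._commit()
    return list(Task.objects.all())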
|
caching issues in MySQL response with MySQLdb in Django
|
I use MySQL with MySQLdb module in Python, in Django.
I'm running in autocommit mode in this case (and Django's transaction.is_managed() actually returns False).
I have several processes interacting with the database.
One process fetches all Task models with Task.objects.all()
Then another process adds a Task model (I can see it in a database management application).
If I call Task.objects.all() on the first process, I don't see anything. But if I call connection._commit() and then Task.objects.all(), I see the new Task.
My question is: Is there any caching involved at the connection level? And is this normal behaviour (it does not seem normal to me)?
|
[
"This certainly seems autocommit/table locking - related.\nIf mysqldb implements the dbapi2 spec it will probably have a connection running as one single continuous transaction. When you say: 'running in autocommit mode': do you mean MySQL itself or the mysqldb module? Or Django?\nNot intermittently commiting perfectly explains the behaviour you are getting:\ni) a connection implemented as one single transaction in mysqldb (by default, probably)\nii) not opening/closing connections only when needed but (re)using one (or more) persistent database connections (my guess, could be Django-architecture-inherited).\nii) your selects ('reads') cause a 'simple read lock' on a table (which means other connections can still 'read' this table but connections wanting to 'write data' can't (immediately) because this lock prevents them from getting an 'exclusive lock' (needed 'for writing') on this table. The writing is thus postponed indefinitely (until it can get a (short) exclusive lock on the table for writing - when you close the connection or manually commit).\nI'd do the following in your case:\n\nfind out which table locks are on your database during the scenario above\nread about Django and transactions here. A quick skim suggests using standard Django functionality implicitely causes commits. This means sending handcrafted SQL maybe won't (insert, update...).\n\n"
] |
[
1
] |
[] |
[] |
[
"commit",
"connection",
"django",
"mysql",
"python"
] |
stackoverflow_0000952216_commit_connection_django_mysql_python.txt
|
Q:
Referencing a class' method, not an instance's
I'm writing a function that exponentiates an object, i.e. given a and n, returns a**n. Since a need not be a built-in type, the function accepts, as a keyword argument, a function to perform multiplications. If undefined, it defaults to the object's __mul__ method, i.e. the object itself is expected to have multiplication defined. That part is sort of easy:
def bin_pow(a, n, **kwargs) :
mul = kwargs.pop('mul',None)
if mul is None :
mul = lambda x,y : x*y
The thing is that in the process of calculating a**n there are a lot of intermediate squarings, and there are often more efficient ways to compute them than simply multiplying the object by itself. It is easy to define another function that computes the square and pass it as another keyword argument, something like:
def bin_pow(a, n, **kwargs) :
mul = kwargs.pop('mul',None)
sqr = kwargs.pop('sqr',None)
if mul is None :
mul = lambda x,y : x*y
if sqr is None :
sqr = lambda x : mul(x,x)
The problem here comes if the function to square the object is not a standalone function, but is a method of the object being exponentiated, which would be a very reasonable thing to do. The only way of doing this I can think of is something like this:
import inspect
def bin_pow(a, n, **kwargs) :
mul = kwargs.pop('mul',None)
sqr = kwargs.pop('sqr',None)
if mul is None :
mul = lambda x,y : x*y
if sqr is None :
sqr = lambda x : mul(x,x)
elif inspect.isfunction(sqr) == False : # if not a function, it is a string
sqr = lambda x : eval('x.'+sqr+'()')
It does work, but I find it an extremely inelegant way of doing things... My mastery of OOP is limited, but if there were a way to have sqr point to the class' function, not to an instance's one, then I could get away with something like sqr = lambda x : sqr(x), or maybe sqr = lambda x: x.sqr(). Can this be done? Is there any other more pythonic way?
A:
You can call unbound methods with the instance as the first parameter:
class A(int):
def sqr(self):
return A(self*self)
sqr = A.sqr
a = A(5)
print sqr(a) # Prints 25
So in your case you don't actually need to do anything specific, just the following:
bin_pow(a, n, sqr=A.sqr)
Be aware that this is early binding, so if you have a subclass B that overrides sqr then still A.sqr is called. For late binding you can use a lambda at the callsite:
bin_pow(a, n, sqr=lambda x: x.sqr())
A:
here's how I'd do it:
import operator
def bin_pow(a, n, **kwargs) :
pow_function = kwargs.pop('pow', None)
if pow_function is None:
pow_function = operator.pow
return pow_function(a, n)
That's the fastest way. See also the object.__pow__ and operator module documentation.
Now, to pass an object method you can pass it directly, no need to pass a string with the name. In fact, never use strings for this kind of thing, using the object directly is much better.
If you want the unbound method, you can pass it just as well:
class MyClass(object):
def mymethod(self, other):
return do_something_with_self_and_other
m = MyClass()
n = MyClass()
bin_pow(m, n, pow=MyClass.mymethod)
If you want the class method, so just pass it instead:
class MyClass(object):
@classmethod
def mymethod(cls, x, y):
return do_something_with_x_and_y
m = MyClass()
n = MyClass()
bin_pow(m, n, pow=MyClass.mymethod)
A:
If you want to call the class's method, and not the (possibly overridden) instance's method, you can do
instance.__class__.method(instance)
instead of
instance.method()
I'm not sure though if that's what you want.
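A small runnable sketch of that distinction (class names here are illustrative):
class Base(object):
    def sqr(self):
        return 'Base.sqr'

class Child(Base):
    def sqr(self):
        return 'Child.sqr'

c = Child()
print c.sqr()             # 'Child.sqr' -- normal attribute lookup on the instance
print c.__class__.sqr(c)  # 'Child.sqr' -- the same lookup, spelled explicitly
print Base.sqr(c)         # 'Base.sqr'  -- forcing a specific class's version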
A:
If I understand the design goals of the library function, you want to provide a library "power" function which will raise any object passed to it to the Nth power. But you also want to provide a "shortcut" for efficiency.
The design goals seem a little odd--Python already defines the __mul__ method to allow the designer of a class to multiply it by an arbitrary value, and the __pow__ method to allow the designer of a class to support raising it to a power. If I were building this, I'd expect and require the users to have a __mul__ method, and I'd do something like this:
def bin_or_pow(a, x):
pow_func = getattr(a, '__pow__', None)
if pow_func is None:
def pow_func(n):
v = 1
for i in xrange(n):
v = a * v
return v
return pow_func(x)
That will let you do the following:
class Multable(object):
def __init__(self, x):
self.x = x
def __mul__(self, n):
print 'in mul'
n = getattr(n, 'x', n)
return type(self)(self.x * n)
class Powable(Multable):
def __pow__(self, n):
print 'in pow'
n = getattr(n, 'x', n)
return type(self)(self.x ** n)
print bin_or_pow(5, 3)
print
print bin_or_pow(Multable(5), 5).x
print
print bin_or_pow(Powable(5), 5).x
... and you get ...
125
in mul
in mul
in mul
in mul
in mul
3125
in pow
3125
|
Referencing a class' method, not an instance's
|
I'm writing a function that exponentiates an object, i.e. given a and n, returns a**n. Since a need not be a built-in type, the function accepts, as a keyword argument, a function to perform multiplications. If undefined, it defaults to the object's __mul__ method, i.e. the object itself is expected to have multiplication defined. That part is sort of easy:
def bin_pow(a, n, **kwargs) :
mul = kwargs.pop('mul',None)
if mul is None :
mul = lambda x,y : x*y
The thing is that in the process of calculating a**n there are a lot of intermediate squarings, and there are often more efficient ways to compute them than simply multiplying the object by itself. It is easy to define another function that computes the square and pass it as another keyword argument, something like:
def bin_pow(a, n, **kwargs) :
mul = kwargs.pop('mul',None)
sqr = kwargs.pop('sqr',None)
if mul is None :
mul = lambda x,y : x*y
if sqr is None :
sqr = lambda x : mul(x,x)
The problem here comes if the function to square the object is not a standalone function, but is a method of the object being exponentiated, which would be a very reasonable thing to do. The only way of doing this I can think of is something like this:
import inspect
def bin_pow(a, n, **kwargs) :
mul = kwargs.pop('mul',None)
sqr = kwargs.pop('sqr',None)
if mul is None :
mul = lambda x,y : x*y
if sqr is None :
sqr = lambda x : mul(x,x)
elif inspect.isfunction(sqr) == False : # if not a function, it is a string
sqr = lambda x : eval('x.'+sqr+'()')
It does work, but I find it an extremely inelegant way of doing things... My mastery of OOP is limited, but if there were a way to have sqr point to the class' function, not to an instance's one, then I could get away with something like sqr = lambda x : sqr(x), or maybe sqr = lambda x: x.sqr(). Can this be done? Is there any other more pythonic way?
|
[
"You can call unbound methods with the instance as the first parameter:\nclass A(int):\n def sqr(self):\n return A(self*self)\n\nsqr = A.sqr\na = A(5)\nprint sqr(a) # Prints 25\n\nSo in your case you don't actually need to do anything specific, just the following:\nbin_pow(a, n, sqr=A.sqr)\n\nBe aware that this is early binding, so if you have a subclass B that overrides sqr then still A.sqr is called. For late binding you can use a lambda at the callsite:\nbin_pow(a, n, sqr=lambda x: x.sqr())\n\n",
"here's how I'd do it:\nimport operator\n\ndef bin_pow(a, n, **kwargs) :\n pow_function = kwargs.pop('pow' ,None)\n\n if pow_function is None:\n pow_function = operator.pow\n\n return pow_function(a, n)\n\nThat's the fastest way. See also object.__pow__ and the operator module documentations.\nNow, to pass an object method you can pass it directly, no need to pass a string with the name. In fact, never use strings for this kind of thing, using the object directly is much better.\nIf you want the unbound method, you can pass it just as well:\nclass MyClass(object):\n def mymethod(self, other):\n return do_something_with_self_and_other\n\nm = MyClass()\nn = MyClass()\n\nbin_pow(m, n, pow=MyClass.mymethod)\n\nIf you want the class method, so just pass it instead:\nclass MyClass(object):\n @classmethod\n def mymethod(cls, x, y):\n return do_something_with_x_and_y\n\nm = MyClass()\nn = MyClass()\n\nbin_pow(m, n, pow=MyClass.mymethod)\n\n",
"If you want to call the class's method, and not the (possibly overridden) instance's method, you can do\ninstance.__class__.method(instance)\n\ninstead of\ninstance.method()\n\nI'm not sure though if that's what you want.\n",
"If I understand the design goals of the library function, you want to provide a library \"power\" function which will raise any object passed to it to the Nth power. But you also want to provide a \"shortcut\" for efficiency.\nThe design goals seem a little odd--Python already defines the mul method to allow the designer of a class to multiply it by an arbitrary value, and the pow method to allow the designer of a class to support raising it to a power. If I were building this, I'd expect and require the users to have a mul method, and I'd do something like this:\ndef bin_or_pow(a, x):\n pow_func = getattr(a, '__pow__', None)\n if pow_func is None:\n def pow_func(n):\n v = 1\n for i in xrange(n):\n v = a * v\n\n return v\n\n return pow_func(x)\n\nThat will let you do the following:\nclass Multable(object):\n def __init__(self, x):\n self.x = x\n\n def __mul__(self, n):\n print 'in mul'\n n = getattr(n, 'x', n)\n return type(self)(self.x * n)\n\nclass Powable(Multable):\n def __pow__(self, n):\n print 'in pow'\n n = getattr(n, 'x', n)\n return type(self)(self.x ** n)\n\nprint bin_or_pow(5, 3)\nprint\nprint bin_or_pow(Multable(5), 5).x\nprint\nprint bin_or_pow(Powable(5), 5).x\n\n... and you get ...\n\n125 \nin mul\n in mul\n in mul\n in mul\n in mul\n 3125 \nin pow\n 3125 \n\n"
] |
[
4,
3,
1,
0
] |
[
"I understand it's the sqr-bit at the end you want to fix. If so, I suggest getattr. Example:\nclass SquarableThingy:\n def __init__(self, i):\n self.i = i\n def squarify(self):\n return self.i**2\n\nclass MultipliableThingy:\n def __init__(self, i):\n self.i = i\n def __mul__(self, other):\n return self.i * other.i\n\nx = SquarableThingy(3)\ny = MultipliableThingy(4)\nz = 5\nsqr = 'squarify'\nsqrFunX = getattr(x, sqr, lambda: x*x)\nsqrFunY = getattr(y, sqr, lambda: y*y)\nsqrFunZ = getattr(z, sqr, lambda: z*z)\nassert sqrFunX() == 9\nassert sqrFunY() == 16\nassert sqrFunZ() == 25\n\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0000950053_python.txt
|
Q:
How to sort based on dependencies?
I have a class that has a list of "dependencies" pointing to other classes of the same base type.
class Foo(Base):
dependencies = []
class Bar(Base):
dependencies = [Foo]
class Baz(Base):
dependencies = [Bar]
I'd like to sort the instances these classes generate based on their dependencies. In my example I'd expect instances of Foo to come first, then Bar, then Baz.
What's the best way to sort this?
A:
It's called a topological sort.
def sort_deps(objs):
    # Kahn-style topological sort: emit an object once its dependencies are done.
    done = set()
    queue = [obj for obj in objs if not obj.dependencies]
    while queue:
        obj = queue.pop()
        done.add(obj)
        yield obj
        for other in objs:
            if other in done or other in queue:
                continue
            if all(dep in done for dep in other.dependencies):
                queue.append(other)
    if len(done) != len(objs):
        raise ValueError('unsatisfiable or circular dependencies')
A:
I had a similar question just last week - wish I'd known about Stack Overflow then! I hunted around a bit until I realized that I had a DAG (Directed Acyclic Graph, since my dependencies couldn't be recursive or circular). Then I found a few references for algorithms to sort them. I used a depth-first traversal to get to the leaf nodes and add them to the sorted list first.
Here's a page that I found useful:
Directed Acyclic Graphs
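The depth-first approach that answer describes, as a minimal sketch (assumes each object's dependencies are themselves members of objs and the graph is acyclic):
def dfs_sort(objs):
    seen, order = set(), []
    def visit(obj):
        if obj in seen:
            return
        seen.add(obj)
        for dep in obj.dependencies:
            visit(dep)
        order.append(obj)   # leaf-most objects land first
    for obj in objs:
        visit(obj)
    return order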
|
How to sort based on dependencies?
|
I have a class that has a list of "dependencies" pointing to other classes of the same base type.
class Foo(Base):
dependencies = []
class Bar(Base):
dependencies = [Foo]
class Baz(Base):
dependencies = [Bar]
I'd like to sort the instances these classes generate based on their dependencies. In my example I'd expect instances of Foo to come first, then Bar, then Baz.
What's the best way to sort this?
|
[
"It's called a topological sort.\ndef sort_deps(objs):\n queue = [objs with no dependencies]\n while queue:\n obj = queue.pop()\n yield obj\n for obj in objs:\n if dependencies are now satisfied:\n queue.append(obj)\n if not all dependencies are satisfied:\n error\n return result\n\n",
"I had a similar question just last week - wish I'd know about Stack Overflow then! I hunted around a bit until I realized that I had a DAG (Directed Acyclic Graph since my dependencies couldn't be recursive or circular). Then I found a few references for algorithms to sort them. I used a depth-first traversal to get to the leaf nodes and add them to sorted list first.\nHere's a page that I found useful:\nDirected Acyclic Graphs\n"
] |
[
20,
5
] |
[] |
[] |
[
"dependencies",
"python",
"sorting"
] |
stackoverflow_0000952302_dependencies_python_sorting.txt
|
Q:
Why won't python allow me to delete files?
I've created a python script that gets a list of files from a text file and deletes them if they're empty. It correctly detects empty files but it doesn't want to delete them. It gives me:
(32, 'The process cannot access the file because it is being used by another process')
I've used two different tools to check whether the files are locked or not and I'm certain that they are not. I used sysinternals process explorer and LockHunter. Furthermore, I'm able to just manually delete the files myself. I obviously don't want to do that for all of them as there are hundreds in various locations.
The script:
import os.path
import sys
def DeleteFilesFromListIfBlank(PathToListOfFiles):
ListOfFiles = open(PathToListOfFiles)
FilesToCheck = [];
for line in ListOfFiles.readlines():
if(len(line) > 1):
line = line.rstrip();
FilesToCheck.append(line)
print "Found %s files to check. Starting check." % len(FilesToCheck)
FilesToRemove = [];
for line in FilesToCheck:
#print "Opening %s" % line
try:
ActiveFile = open(line);
Length = len(ActiveFile.read())
if(Length < 691 and ActiveFile.read() == ""):
print "Deleting %s" % line
os.unlink(line);
else:
print "Keeping %s" % line
except IOError,message:
print "Could not open file: $s" % message
except Exception as inst:
print inst.args
DeleteFilesFromListIfBlank("C:\\ListOfResx.txt")
I've tried using both os.unlink and os.remove. I'm running Python 2.6 on Vista64
Thanks
A:
You need to call .close() on the file object before you try and delete it.
Edit: And really you shouldn't be opening the file at all. os.stat() will tell you the size of a file (and 9 other values) without ever opening the file.
This (I think) does the same thing but is a little cleaner (IMHO):
import os
_MAX_SIZE = 691
def delete_if_blank(listFile):
# Make a list of files to check.
with open(listFile) as listFile:
filesToCheck = filter(None, (line.rstrip() for line in listFile.readlines()))
# listFile is automatically closed now because we're out of the 'with' statement.
print "Found %u files to check. Starting check." % len(filesToCheck)
# Remove each file.
for filename in filesToCheck:
if os.stat(filename).st_size < _MAX_SIZE:
print "Deleting %s" % filename
os.remove(filename)
else:
print "Keeping %s" % filename
A:
Try ActiveFile.close() before doing the unlink.
Also, reading the whole file isn't necessary, you can use os.path.getsize(filename) == 0.
A:
It's you that has the file open - you need to close it before trying to delete it:
ActiveFile = open(line);
Length = len(ActiveFile.read())
ActiveFile.close() # Insert this line!
or just get the filesize without opening the file:
Length = os.path.getsize(line)
A:
Are you opening each file and then trying to delete it? If so, try closing it first.
A:
You probably need to close the filehandle ActiveFile before you try to delete it.
|
Why won't python allow me to delete files?
|
I've created a python script that gets a list of files from a text file and deletes them if they're empty. It correctly detects empty files but it doesn't want to delete them. It gives me:
(32, 'The process cannot access the file because it is being used by another process')
I've used two different tools to check whether the files are locked or not and I'm certain that they are not. I used sysinternals process explorer and LockHunter. Furthermore, I'm able to just manually delete the files myself. I obviously don't want to do that for all of them as there are hundreds in various locations.
The script:
import os.path
import sys
def DeleteFilesFromListIfBlank(PathToListOfFiles):
ListOfFiles = open(PathToListOfFiles)
FilesToCheck = [];
for line in ListOfFiles.readlines():
if(len(line) > 1):
line = line.rstrip();
FilesToCheck.append(line)
print "Found %s files to check. Starting check." % len(FilesToCheck)
FilesToRemove = [];
for line in FilesToCheck:
#print "Opening %s" % line
try:
ActiveFile = open(line);
Length = len(ActiveFile.read())
if(Length < 691 and ActiveFile.read() == ""):
print "Deleting %s" % line
os.unlink(line);
else:
print "Keeping %s" % line
except IOError,message:
print "Could not open file: $s" % message
except Exception as inst:
print inst.args
DeleteFilesFromListIfBlank("C:\\ListOfResx.txt")
I've tried using both os.unlink and os.remove. I'm running Python 2.6 on Vista64
Thanks
|
[
"You need to call .close() on the file object before you try and delete it.\nEdit: And really you shouldn't be opening the file at all. os.stat() will tell you the size of a file (and 9 other values) without ever opening the file.\nThis (I think) does the same thing but is a little cleaner (IMHO):\nimport os\n\n_MAX_SIZE = 691\n\ndef delete_if_blank(listFile):\n # Make a list of files to check.\n with open(listFile) as listFile:\n filesToCheck = filter(None, (line.rstrip() for line in listFile.readlines()))\n\n # listFile is automatically closed now because we're out of the 'with' statement.\n\n print \"Found %u files to check. Starting check.\" % len(filesToCheck)\n\n # Remove each file.\n for filename in filesToCheck:\n if os.stat(filename).st_size < _MAX_SIZE:\n print \"Deleting %s\" % filename\n os.remove(filename)\n else:\n print \"Keeping %s\" % filename\n\n",
"Try ActiveFile.close() before doing the unlink.\nAlso, reading the whole file isn't necessary, you can use os.path.getsize(filename) == 0.\n",
"It's you that has the file open - you need to close it before trying to delete it:\nActiveFile = open(line);\nLength = len(ActiveFile.read())\nActiveFile.close() # Insert this line!\n\nor just get the filesize without opening the file:\nLength = os.path.getsize(line)\n\n",
"Are you opening each file and then trying to delete it? If so, try closing it first.\n",
"You probably need to close the filehandle ActiveFile before you try to delete it.\n"
] |
[
17,
9,
6,
4,
3
] |
[] |
[] |
[
"file_io",
"python"
] |
stackoverflow_0000953040_file_io_python.txt
|
Q:
Explicit access to Python's built in scope
How do you explicitly access a name in Python's built-in scope?
One situation where I ran into this was in a module, say called foo, which happened to have an open function. In another module, foo's open function would be accessible as foo.open, which works well. In foo itself, though, open shadows the built-in open. How can you access the built-in version of a name like open explicitly?
I am aware it is probably a bad idea in practice to shadow any built-in name, but I am still curious to know whether there is a way to explicitly access the built-in scope.
A:
Use __builtin__.
def open():
pass
import __builtin__
print open
print __builtin__.open
... gives you ...
<function open at 0x011E8670>
<built-in function open>
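For what it's worth, the same idea in Python 3 spelling (the module is named builtins there; this snippet is Python 3 only):
def open():
    pass

import builtins

print(open)           # the shadowing function
print(builtins.open)  # the real built-in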
|
Explicit access to Python's built in scope
|
How do you explicitly access a name in Python's built-in scope?
One situation where I ran into this was in a module, say called foo, which happened to have an open function. In another module, foo's open function would be accessible as foo.open, which works well. In foo itself, though, open shadows the built-in open. How can you access the built-in version of a name like open explicitly?
I am aware it is probably a bad idea in practice to shadow any built-in name, but I am still curious to know whether there is a way to explicitly access the built-in scope.
|
[
"Use __builtin__.\ndef open():\n pass\n\nimport __builtin__\n\nprint open\nprint __builtin__.open\n\n... gives you ...\n\n<function open at 0x011E8670>\n<built-in function open> \n\n"
] |
[
12
] |
[
"It's something like\n__builtins__.open()\n\n"
] |
[
-2
] |
[
"python"
] |
stackoverflow_0000953027_python.txt
|
Q:
Measure Path Length in Blender Script?
In Blender (v2.48), how can I determine the length of a path (in Blender units) from a Python script?
The value is available from the GUI: With the path selected, the Editing panel contains a PrintLen button. The length appears to the right when the button is pressed.
How can I obtain this value programmatically from a Python script running in Blender?
Note: I'm not interested in the PathLen value which is in frames, not Blender units.
A:
The best idea I've found is to create a mesh from the path and sum the length of the segments (edges).
import Blender
def get_length(path):
"""
Return the length (in Blender distance units) of the path.
"""
mesh = Blender.Mesh.New()
mesh.getFromObject(path)
return sum(edge.length for edge in mesh.edges)
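Hypothetical usage, assuming the path object in the scene is named 'Curve' (adjust the name to your own object):
# Fetch the object by name via the Blender 2.4x API and measure it.
path_obj = Blender.Object.Get('Curve')
print get_length(path_obj)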
|
Measure Path Length in Blender Script?
|
In Blender (v2.48), how can I determine the length of a path (in Blender units) from a Python script?
The value is available from the GUI: With the path selected, the Editing panel contains a PrintLen button. The length appears to the right when the button is pressed.
How can I obtain this value programmatically from a Python script running in Blender?
Note: I'm not interested in the PathLen value which is in frames, not Blender units.
|
[
"The best idea I've found is to create a mesh from the path and sum the length of the segments (edges).\nimport Blender\n\ndef get_length(path):\n \"\"\"\n Return the length (in Blender distance units) of the path.\n \"\"\"\n mesh = Blender.Mesh.New()\n mesh.getFromObject(path)\n\n return sum(edge.length for edge in mesh.edges)\n\n"
] |
[
2
] |
[] |
[] |
[
"blender",
"python"
] |
stackoverflow_0000848499_blender_python.txt
|
Q:
How do I update an object's members using a dict?
I'm writing a Django app that performs various functions, including inserting, or updating new records into the database via the URL.
So some internal application sends off a request to /import/?a=1&b=2&c=3, for example.
In the view, I want to create a new object, foo = Foo() and have the members of foo set to the data in the request.GET dictionary.
Here is what I'm doing now:
Request sent to /import/?a=1&b=2&c=3
View creates new object: foo = Foo()
Object is updated with data.
Here is what I got thus far:
foo.a = request['a']
foo.b = request['b']
foo.c = request['c']
Obviously this is tedious and error prone. The data in the URL have the exact same names as the object's members, so it is a simple 1-to-1 mapping.
Ideally, I would like to do able to do something like this:
foo = Foo()
foo.update(request.GET)
or something to that effect.
Thanks!
A:
You can use the setattr function to dynamically set attributes:
for key,value in request.GET.items():
setattr(foo, key, value)
A:
If request.GET is a dictionary and class Foo does not use __slots__, then this should also work:
# foo is a Foo instance
foo.__dict__.update(request.GET)
A:
You've almost got it.
foo = Foo(**request.GET)
should do the trick.
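Note that this only works if Foo's initializer actually accepts those keywords, and remember that GET values arrive as strings; a hypothetical sketch:
class Foo(object):
    def __init__(self, a=None, b=None, c=None):
        # Values coming from request.GET will be strings; convert as needed.
        self.a, self.b, self.c = a, b, c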
A:
If you are using this to create a model object that then gets persisted, I'd strongly recommend using a ModelForm. This would do what you described, in the canonical way for Django, with the addition of validation.
To expand -- I didn't mean to use it for form output, just form input. As follows:
class Foo(models.Model):
a = models.CharField(max_length=255),
b = models.PositiveIntegerField()
class FooForm(forms.ModelForm):
class Meta:
model = Foo
def insert_foo(request):
form = FooForm(request.GET)
if not form.is_valid():
# Handle error conditions here
pass
else:
form.save()
return HttpResponse('Your response')
Then, assuming it's bound to /import/, a GET to /import/?a=Test&b=1 would insert a new Foo with a = "Test" and b="1".
|
How do I update an object's members using a dict?
|
I'm writing a Django app that performs various functions, including inserting, or updating new records into the database via the URL.
So some internal application sends off a request to /import/?a=1&b=2&c=3, for example.
In the view, I want to create a new object, foo = Foo() and have the members of foo set to the data in the request.GET dictionary.
Here is what I'm doing now:
Request sent to /import/?a=1&b=2&c=3
View creates new object: foo = Foo()
Object is updated with data.
Here is what I got thus far:
foo.a = request['a']
foo.b = request['b']
foo.c = request['c']
Obviously this is tedious and error prone. The data in the URL have the exact same names as the object's members, so it is a simple 1-to-1 mapping.
Ideally, I would like to do able to do something like this:
foo = Foo()
foo.update(request.GET)
or something to that effect.
Thanks!
|
[
"You can use the setattr function to dynamically set attributes:\nfor key,value in request.GET.items():\n setattr(foo, key, value)\n\n",
"If request.GET is a dictionary and class Foo does not use __slots__, then this should also work:\n# foo is a Foo instance\nfoo.__dict__.update(request.GET)\n\n",
"You've almost got it.\nfoo = Foo(**request.GET)\n\nshould do the trick.\n",
"If you are using this to create a model object that then gets persisted, I'd strongly recommend using a ModelForm. This would do what you described, in the canonical way for Django, with the addition of validation.\nTo expand -- I didn't mean to use it for form output, just form input. As follows:\nclass Foo(models.Model):\n a = models.CharField(max_length=255),\n b = models.PositiveIntegerField()\n\nclass FooForm(forms.ModelForm):\n class Meta:\n model = Foo\n\ndef insert_foo(request):\n form = FooForm(request.GET)\n if not form.is_valid():\n # Handle error conditions here\n pass\n else:\n form.save()\n\n return HttpResponse('Your response')\n\nThen, assuming it's bound to /import/, a GET to /import/?a=Test&b=1 would insert a new Foo with a = \"Test\" and b=\"1\".\n"
] |
[
19,
3,
1,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000940089_django_python.txt
|
Q:
Is there a standalone alternative to activerecord-like database schema migrations?
Is there any standalone alternative to activerecord-like migrations? Something like a script that is able to track the current schema version and apply outstanding migrations. Basically, these migration files could be just plain SQL files, something like:
[timestamp]_create_users.sql
reverse_[timestamp]_create_users.sql
Language of implementation isn't very important — it could be anything that is usually installed/pre-installed on *nix systems.
I tried to find something out — but failed. I can certainly develop my own in an hour or two, but I am just curious — maybe something nice is out there already.
A:
Try http://freshmeat.net/projects/liquibase/
If you are using MySQL specifically, have a look at: http://www.mysqldiff.org/
I used this to synchronize the schema of two databases (so you would have to apply the changes to a "master").
There's also http://phpmyversion.sourceforge.net/
A:
http://code.google.com/p/sqlalchemy-migrate/
A:
Not a linux option, but might answer this question for some people:
SQLYog can do this for MySQL - its a windows GUI tool:
http://www.webyog.com/en/
It can (amongst other things) compare schemas and make one schema look like another, or generate the sql required to do this if you want to do that instead. We use it for making sql patch files that we can use for upgrades. Its easier than manually maintaining a file as you make changes in development.
A:
Check out AutoPatch
A:
The ezComponents library has a database schema component that can compare and apply schema differences between two databases (or a database and a file).
A:
https://sourceforge.net/projects/migrations/
This is a tool for managing structural changes to database schemas based on Active Record migrations from Rails. Features multiple schema interactions, runtime substitution of values, script generations, and more.
Plus it has both a command line interface and a graphical interface. Also is actively developed and our company has been using it for over a year now and it has greatly improved the lives of everyone from development to dbas to operations to streamlined deployments and rollbacks. Management takes db changes for granted now (sometimes not a good thing) because it is so automated now thanks to this tool.
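For reference, the homegrown approach the question describes is genuinely small; a bare-bones sketch (assumes a one-row schema_version table, timestamp-prefixed .sql files containing a single statement each, and any DB-API driver — names here are illustrative):
import glob, os

def migrate(conn, directory):
    cur = conn.cursor()
    cur.execute("SELECT version FROM schema_version")   # assumed tracking table
    current = cur.fetchone()[0]
    for path in sorted(glob.glob(os.path.join(directory, '*.sql'))):
        stamp = os.path.basename(path).split('_', 1)[0]
        if stamp > current:
            cur.execute(open(path).read())              # single statement per file
            cur.execute("UPDATE schema_version SET version = %s", (stamp,))
            conn.commit()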
|
Is there a standalone alternative to activerecord-like database schema migrations?
|
Is there any standalone alternative to activerecord-like migrations? Something like a script that is able to track the current schema version and apply outstanding migrations. Basically, these migration files could be just plain SQL files, something like:
[timestamp]_create_users.sql
reverse_[timestamp]_create_users.sql
Language of implementation isn't very important — it could be anything that is usually installed/pre-installed on *nix systems.
I tried to find something out — but failed. I can certainly develop my own in an hour or two, but I am just curious — maybe something nice is out there already.
|
[
"Try http://freshmeat.net/projects/liquibase/\nIf you are using MySQL specifically, have a look at: http://www.mysqldiff.org/\nI used this to synchronize the schema of two databases (so you would have to apply the changes to a \"master\").\nThere's also http://phpmyversion.sourceforge.net/\n",
"http://code.google.com/p/sqlalchemy-migrate/\n",
"Not a linux option, but might answer this question for some people:\nSQLYog can do this for MySQL - its a windows GUI tool:\nhttp://www.webyog.com/en/\nIt can (amongst other things) compare schemas and make one schema look like another, or generate the sql required to do this if you want to do that instead. We use it for making sql patch files that we can use for upgrades. Its easier than manually maintaining a file as you make changes in development.\n",
"Check out AutoPatch\n",
"The ezComponents library has a database schema component that can compare and apply schema differences between two databases (or a database and a file).\n",
"https://sourceforge.net/projects/migrations/\nThis is a tool for managing structural changes to database schemas based on Active Record migrations from Rails. Features multiple schema interactions, runtime substitution of values, script generations, and more.\nPlus it has both a command line interface and a graphical interface. Also is actively developed and our company has been using it for over a year now and it has greatly improved the lives of everyone from development to dbas to operations to streamlined deployments and rollbacks. Management takes db changes for granted now (sometimes not a good thing) because it is so automated now thanks to this tool.\n"
] |
[
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"mysql",
"php",
"python",
"shell",
"sql"
] |
stackoverflow_0000362334_mysql_php_python_shell_sql.txt
|
Q:
Error running tutorial that came along wxPython2.8 Docs and Demos
I tried the following example code from the tutorial that came along "wxPython2.8 Docs and Demos" package.
import wx
from frame import Frame
class App(wx.App):
"""Application class."""
def OnInit(self):
self.frame = Frame()
self.frame.Show()
self.SetTopWindow(self.frame)
return True
def main():
app = App()
app.MainLoop()
if __name__ == '__main__':
main()
but its giving me the following error
Traceback (most recent call last):
File "C:/Documents and Settings/umair.ahmed/Desktop/wxpy.py", line 3, in <module>
from frame import Frame
ImportError: No module named frame
Kindly help; I am just a newbie with Python.
A:
I think you should skip the "from frame import Frame" and change:
self.frame = Frame()
to:
self.frame = wx.Frame()
A:
Yeah, it's an ancient doc bug, see for example this 5-year-old post :-(. Fix:

delete the line that says from frame import Frame
change the line that says self.frame = Frame() to say instead self.frame = wx.Frame()
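Putting both answers together, a minimal runnable version of the tutorial script might look like this (note that wx.Frame normally wants a parent argument — None for a top-level window; the title here is illustrative):
import wx

class App(wx.App):
    def OnInit(self):
        self.frame = wx.Frame(None, title='Example')  # parent=None: top-level frame
        self.frame.Show()
        self.SetTopWindow(self.frame)
        return True

App().MainLoop()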
|
Error running tutorial that came along wxPython2.8 Docs and Demos
|
I tried the following example code from the tutorial that came along "wxPython2.8 Docs and Demos" package.
import wx
from frame import Frame
class App(wx.App):
"""Application class."""
def OnInit(self):
self.frame = Frame()
self.frame.Show()
self.SetTopWindow(self.frame)
return True
def main():
app = App()
app.MainLoop()
if __name__ == '__main__':
main()
but its giving me the following error
Traceback (most recent call last):
File "C:/Documents and Settings/umair.ahmed/Desktop/wxpy.py", line 3, in <module>
from frame import Frame
ImportError: No module named frame
Kindly help; I am just a newbie with Python.
|
[
"I think you should skip the \"from frame import Frame\" and change:\nself.frame = Frame()\n\nto:\nself.frame = wx.Frame()\n\n",
"Yeah, it's an ancient doc bug, see for example this 5-years-old post:-(. Fix:\n\ndelete the line that says from frame\nimport Frame\nchange the line that says self.frame\n= Frame() to say instead self.frame = wx.Frame()\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"windows",
"wxpython"
] |
stackoverflow_0000954132_python_windows_wxpython.txt
|
Q:
Getting object's parent namespace in python?
In Python it's possible to use '.' in order to access an object's attributes. For example:
class test( object ) :
def __init__( self ) :
self.b = 1
def foo( self ) :
pass
obj = test()
a = obj.foo
From the above example, having the object 'a', is it possible to get from it a reference to 'obj', which is the parent namespace for the assigned 'foo' method? For example, to change obj.b to 2?
A:
On bound methods, you can use three special read-only attributes:
im_func which returns the (unbound) function object
im_self which returns the object the function is bound to (class instance)
im_class which returns the class of im_self
Testing around:
class Test(object):
def foo(self):
pass
instance = Test()
instance.foo # <bound method Test.foo of <__main__.Test object at 0x1>>
instance.foo.im_func # <function foo at 0x2>
instance.foo.im_self # <__main__.Test object at 0x1>
instance.foo.im_class # <__main__.Test class at 0x3>
# A few remarks
instance.foo.im_self.__class__ == instance.foo.im_class # True
instance.foo.__name__ == instance.foo.im_func.__name__ # True
instance.foo.__doc__ == instance.foo.im_func.__doc__ # True
# Now, note this:
Test.foo.im_func != Test.foo # unbound method vs function
Test.foo.im_self is None
# Let's play with classmethods
class Extend(Test):
@classmethod
def bar(cls):
pass
extended = Extend()
# Be careful! Because it's a class method, the class is returned, not the instance
extended.bar.im_self # <__main__.Extend class at ...>
There is an interesting thing to note here, that gives you a hint on how the methods are being called:
class Hint(object):
def foo(self, *args, **kwargs):
pass
@classmethod
def bar(cls, *args, **kwargs):
pass
instance = Hint()
# this will work with both class methods and instance methods:
for name in ['foo', 'bar']:
method = instance.__getattribute__(name)
# call the method
method.im_func(method.im_self, 1, 2, 3, fruit='banana')
Basically, im_self attribute of a bound method changes, to allow using it as the first parameter when calling im_func
A:
Python 2.6+ (including Python 3)
You can use the __self__ property of a bound method to access the instance that the method is bound to.
>> a.__self__
<__main__.test object at 0x782d0>
>> a.__self__.b = 2
>> obj.b
2
Python 2.2+ (Python 2.x only)
You can also use the im_self property, but this is not forward compatible with Python 3.
>> a.im_self
<__main__.test object at 0x782d0>
A:
Since Python 2.6, synonyms for im_self and im_func are __self__ and __func__, respectively. The im_* attributes are completely gone in py3k, so you would need to change it to:
>> a.__self__
<__main__.test object at 0xb7b7d9ac>
>> a.__self__.b = 2
>> obj.b
2
|
Getting object's parent namespace in python?
|
In Python it's possible to use '.' in order to access an object's attributes. For example:
class test( object ) :
def __init__( self ) :
self.b = 1
def foo( self ) :
pass
obj = test()
a = obj.foo
From the above example, having the object 'a', is it possible to get from it a reference to 'obj', which is the parent namespace for the assigned 'foo' method? For example, to change obj.b to 2?
|
[
"On bound methods, you can use three special read-only parameters:\n\nim_func which returns the (unbound) function object\nim_self which returns the object the function is bound to (class instance)\nim_class which returns the class of im_self\n\nTesting around:\nclass Test(object):\n def foo(self):\n pass\n\ninstance = Test()\ninstance.foo # <bound method Test.foo of <__main__.Test object at 0x1>>\ninstance.foo.im_func # <function foo at 0x2>\ninstance.foo.im_self # <__main__.Test object at 0x1>\ninstance.foo.im_class # <__main__.Test class at 0x3>\n\n# A few remarks\ninstance.foo.im_self.__class__ == instance.foo.im_class # True\ninstance.foo.__name__ == instance.foo.im_func.__name__ # True\ninstance.foo.__doc__ == instance.foo.im_func.__doc__ # True\n\n# Now, note this:\nTest.foo.im_func != Test.foo # unbound method vs function\nTest.foo.im_self is None\n\n# Let's play with classmethods\nclass Extend(Test):\n @classmethod\n def bar(cls): \n pass\n\nextended = Extend()\n\n# Be careful! Because it's a class method, the class is returned, not the instance\nextended.bar.im_self # <__main__.Extend class at ...>\n\nThere is an interesting thing to note here, that gives you a hint on how the methods are being called:\nclass Hint(object):\n def foo(self, *args, **kwargs):\n pass\n\n @classmethod\n def bar(cls, *args, **kwargs):\n pass\n\ninstance = Hint()\n\n# this will work with both class methods and instance methods:\nfor name in ['foo', 'bar']:\n method = instance.__getattribute__(name)\n # call the method\n method.im_func(method.im_self, 1, 2, 3, fruit='banana')\n\nBasically, im_self attribute of a bound method changes, to allow using it as the first parameter when calling im_func\n",
"Python 2.6+ (including Python 3)\nYou can use the __self__ property of a bound method to access the instance that the method is bound to.\n>> a.__self__\n<__main__.test object at 0x782d0>\n>> a.__self__.b = 2\n>> obj.b\n2\n\nPython 2.2+ (Python 2.x only)\nYou can also use the im_self property, but this is not forward compatible with Python 3.\n>> a.im_self\n<__main__.test object at 0x782d0>\n\n",
"since python2.6 synonyms for im_self and im_func are __self__ and __func__, respectively. im* attributes are completely gone in py3k. so you would need to change it to:\n>> a.__self__\n<__main__.test object at 0xb7b7d9ac>\n>> a.__self__.b = 2\n>> obj.b\n2\n\n"
] |
[
17,
14,
8
] |
[] |
[] |
[
"python",
"python_datamodel"
] |
stackoverflow_0000954340_python_python_datamodel.txt
|