content
stringlengths 85
101k
| title
stringlengths 0
150
| question
stringlengths 15
48k
| answers
list | answers_scores
list | non_answers
list | non_answers_scores
list | tags
list | name
stringlengths 35
137
|
---|---|---|---|---|---|---|---|---|
Q:
Django Formset without instance
In this Django Doc explain how to create a formset that allows you to edit books belonging to a particular author.
What I want to do is: Create a formset that allows you to ADD new book belonging to a NEW author... Add the Book and their Authors in the same formset.
Can you gime a light? thanks.
A:
When you're instantiating the form and formset for the initial display, you don't need to provide an instance - so you will just get blank forms.
When you pass in the data on POST, you can do the form first, save it, and get an instance. Then you can pass that instance into the formset, so that it correctly saves the related objects.
A:
This depends on whether you're doing it yourself, or using the built-in admin.
If you're using the admin, you can use inlines.
If you're doing this in your own application, then it's up to you. Create a single form which has fields for a new author and book. When the user submits the form, it's your job to create the new records.
|
Django Formset without instance
|
In this Django Doc explain how to create a formset that allows you to edit books belonging to a particular author.
What I want to do is: Create a formset that allows you to ADD new book belonging to a NEW author... Add the Book and their Authors in the same formset.
Can you gime a light? thanks.
|
[
"When you're instantiating the form and formset for the initial display, you don't need to provide an instance - so you will just get blank forms.\nWhen you pass in the data on POST, you can do the form first, save it, and get an instance. Then you can pass that instance into the formset, so that it correctly saves the related objects.\n",
"This depends on whether you're doing it yourself, or using the built-in admin.\nIf you're using the admin, you can use inlines.\nIf you're doing this in your own application, then it's up to you. Create a single form which has fields for a new author and book. When the user submits the form, it's your job to create the new records.\n"
] |
[
13,
0
] |
[] |
[] |
[
"django",
"forms",
"formset",
"inline_formset",
"python"
] |
stackoverflow_0001267810_django_forms_formset_inline_formset_python.txt
|
Q:
Python http proxy library based on libevent or comparable technology?
I'm looking to build an intelligent reverse http proxy capable of routing, header examination and enrichment (eg. examine and build cookies and http headers), and various other fanciness. For a general idea of what I'm looking to build see Ruby Proxies for Scale and Monitoring - except in Python.
I realize that Twisted is an exceedingly good answer for this, and that eventmachine was inspired by Twisted, but I'm looking for something other than Twisted.
Ideally a library or package that includes http proxying capabilities I could modify with my own little plugins.
I remember seeing something based on eventlib that had http server capabilities built in, but I can't seem to find it.
I'm also taking a deep look at perlbal; that looks almost like the perfect solution, except it's in Perl.
Any recommendations?
A:
Not sure if meets all your needs, but proxylet is a reverse proxy based on Linden Lab's eventlet.
|
Python http proxy library based on libevent or comparable technology?
|
I'm looking to build an intelligent reverse http proxy capable of routing, header examination and enrichment (eg. examine and build cookies and http headers), and various other fanciness. For a general idea of what I'm looking to build see Ruby Proxies for Scale and Monitoring - except in Python.
I realize that Twisted is an exceedingly good answer for this, and that eventmachine was inspired by Twisted, but I'm looking for something other than Twisted.
Ideally a library or package that includes http proxying capabilities I could modify with my own little plugins.
I remember seeing something based on eventlib that had http server capabilities built in, but I can't seem to find it.
I'm also taking a deep look at perlbal; that looks almost like the perfect solution, except it's in Perl.
Any recommendations?
|
[
"Not sure if meets all your needs, but proxylet is a reverse proxy based on Linden Lab's eventlet.\n"
] |
[
3
] |
[] |
[] |
[
"http",
"libevent",
"proxy",
"python"
] |
stackoverflow_0001268984_http_libevent_proxy_python.txt
|
Q:
How can I change the font size in GTK?
Is there an easy way to change the font size of text elements in GTK? Right now the best I can do is do set_markup on a label, with something silly like:
lbl.set_markup("<span font_desc='Tahoma 5.4'>%s</span>" % text)
This 1) requires me to set the font , 2) seems like a lot of overhead (having to parse the markup), and 3) would make it annoying to change the font size of buttons and such. Is there a better way?
A:
If you want to change font overall in your app(s), I'd leave this job to gtkrc (then becomes a google question, and "gtkrc font" query brings us to this ubuntu forums link which has the following snippet of the the gtkrc file):
style "font"
{
font_name = "Corbel 8"
}
widget_class "*" style "font"
gtk-font-name = "Corbel 8"
(replace the font with the one you/user need)
Then the user will get consistent experience and will be able to change the settings easily without need for them to poke in the code and without you needing to handle the overhead of maintaining your personal configuration-related code. I understand you can make this setting more specific if you have a more precise definition for the widget_class.
YMMV for different platforms, but AFAIK this file is always present at some location if GTK is being used, and allows to the user to be in charge of presentation details.
A:
In C, you can do:
gtk_widget_modify_font(lbl, pango_font_description_from_string("Tahoma 5.4"));
In PyGTK, I believe it's something like:
pangoFont = pango.FontDescription("Tahoma 5.4")
lbl.modify_font(pangoFont)
|
How can I change the font size in GTK?
|
Is there an easy way to change the font size of text elements in GTK? Right now the best I can do is do set_markup on a label, with something silly like:
lbl.set_markup("<span font_desc='Tahoma 5.4'>%s</span>" % text)
This 1) requires me to set the font , 2) seems like a lot of overhead (having to parse the markup), and 3) would make it annoying to change the font size of buttons and such. Is there a better way?
|
[
"If you want to change font overall in your app(s), I'd leave this job to gtkrc (then becomes a google question, and \"gtkrc font\" query brings us to this ubuntu forums link which has the following snippet of the the gtkrc file):\nstyle \"font\"\n{\nfont_name = \"Corbel 8\"\n}\nwidget_class \"*\" style \"font\"\ngtk-font-name = \"Corbel 8\"\n\n(replace the font with the one you/user need)\nThen the user will get consistent experience and will be able to change the settings easily without need for them to poke in the code and without you needing to handle the overhead of maintaining your personal configuration-related code. I understand you can make this setting more specific if you have a more precise definition for the widget_class.\nYMMV for different platforms, but AFAIK this file is always present at some location if GTK is being used, and allows to the user to be in charge of presentation details.\n",
"In C, you can do:\ngtk_widget_modify_font(lbl, pango_font_description_from_string(\"Tahoma 5.4\"));\n\nIn PyGTK, I believe it's something like:\npangoFont = pango.FontDescription(\"Tahoma 5.4\")\nlbl.modify_font(pangoFont)\n\n"
] |
[
9,
3
] |
[] |
[] |
[
"fonts",
"gtk",
"gtk2",
"pygtk",
"python"
] |
stackoverflow_0001269326_fonts_gtk_gtk2_pygtk_python.txt
|
Q:
Python threading question - returning control to parent
Basically, I have a python program which listens for DeviceAdded DBus events (e.g. when someone plugs in a USB drive), and when an event occurs, I want to create a thread which collects metadata on that newly connected device. However, I want to do this asynchronously - that is, allow one thread to keep collecting metadata on the device while returning control to the parent which can keep listening for these events. At the moment, my thread blocks until the collection is finished. Here is a sample of my code:
class DeviceAddedListener:
def __init__(self):
self.bus = dbus.SystemBus()
self.hal_manager_obj = self.bus.get_object("org.freedesktop.Hal", "/org$
self.hal_manager = dbus.Interface(self.hal_manager_obj, "org.freedeskto$
self.hal_manager.connect_to_signal("DeviceAdded", self._filter)
def _filter(self, udi):
device_obj = self.bus.get_object ("org.freedesktop.Hal", udi)
device = dbus.Interface(device_obj, "org.freedesktop.Hal.Device")
if device.QueryCapability("volume"):
return self.capture(device)
def capture(self,volume):
self.device_file = volume.GetProperty("block.device")
self.label = volume.GetProperty("volume.label")
self.fstype = volume.GetProperty("volume.fstype")
self.mounted = volume.GetProperty("volume.is_mounted")
self.mount_point = volume.GetProperty("volume.mount_point")
try:
self.size = volume.GetProperty("volume.size")
except:
self.size = 0
print "New storage device detected:"
print " device_file: %s" % self.device_file
print " label: %s" % self.label
print " fstype: %s" % self.fstype
if self.mounted:
print " mount_point: %s" % self.mount_point
response = raw_input("\nWould you like to acquire %s [y/N]? " % self.device_file)
if (response == "y"):
self.get_meta()
thread.start_new_thread(DoSomething(self.device_file))
else:
print "Returning to idle"
if __name__ == '__main__':
from dbus.mainloop.glib import DBusGMainLoop
DBusGMainLoop(set_as_default=True)
loop = gobject.MainLoop()
DeviceAddedListener()
loop.run()
Any thoughts would be greatly appreciated :) I have excluded the import list to save space
A:
Try spawning a thread just for the capture stuff, by changing the following lines in your _filter() function to this:
if device.QueryCapability("volume"):
threading.start_new_thread(self.capture, (device))
This is assuming that the bulk of the work is happening in the capture() function. If not, then just spawn the thread a little earlier, possibly on the whole _filter() function.
This should then spawn a new thread for every filtered device detected. Bear in mind that I haven't done any dbus stuff and can't really test this, but it's an idea.
Also, you're trying to get user input from the capture function which, using the app as you've defined it, isn't really a nice thing to do in threads. What if a second device is connected while the first prompt is still on screen? Might not play nicely.
The design of this thing might be exactly the way you want it for specific reasons, but I can't help feeling like it could be a lot slicker. It's not really designed with threads in mind from what I can tell.
|
Python threading question - returning control to parent
|
Basically, I have a python program which listens for DeviceAdded DBus events (e.g. when someone plugs in a USB drive), and when an event occurs, I want to create a thread which collects metadata on that newly connected device. However, I want to do this asynchronously - that is, allow one thread to keep collecting metadata on the device while returning control to the parent which can keep listening for these events. At the moment, my thread blocks until the collection is finished. Here is a sample of my code:
class DeviceAddedListener:
def __init__(self):
self.bus = dbus.SystemBus()
self.hal_manager_obj = self.bus.get_object("org.freedesktop.Hal", "/org$
self.hal_manager = dbus.Interface(self.hal_manager_obj, "org.freedeskto$
self.hal_manager.connect_to_signal("DeviceAdded", self._filter)
def _filter(self, udi):
device_obj = self.bus.get_object ("org.freedesktop.Hal", udi)
device = dbus.Interface(device_obj, "org.freedesktop.Hal.Device")
if device.QueryCapability("volume"):
return self.capture(device)
def capture(self,volume):
self.device_file = volume.GetProperty("block.device")
self.label = volume.GetProperty("volume.label")
self.fstype = volume.GetProperty("volume.fstype")
self.mounted = volume.GetProperty("volume.is_mounted")
self.mount_point = volume.GetProperty("volume.mount_point")
try:
self.size = volume.GetProperty("volume.size")
except:
self.size = 0
print "New storage device detected:"
print " device_file: %s" % self.device_file
print " label: %s" % self.label
print " fstype: %s" % self.fstype
if self.mounted:
print " mount_point: %s" % self.mount_point
response = raw_input("\nWould you like to acquire %s [y/N]? " % self.device_file)
if (response == "y"):
self.get_meta()
thread.start_new_thread(DoSomething(self.device_file))
else:
print "Returning to idle"
if __name__ == '__main__':
from dbus.mainloop.glib import DBusGMainLoop
DBusGMainLoop(set_as_default=True)
loop = gobject.MainLoop()
DeviceAddedListener()
loop.run()
Any thoughts would be greatly appreciated :) I have excluded the import list to save space
|
[
"Try spawning a thread just for the capture stuff, by changing the following lines in your _filter() function to this:\nif device.QueryCapability(\"volume\"):\n threading.start_new_thread(self.capture, (device))\n\nThis is assuming that the bulk of the work is happening in the capture() function. If not, then just spawn the thread a little earlier, possibly on the whole _filter() function.\nThis should then spawn a new thread for every filtered device detected. Bear in mind that I haven't done any dbus stuff and can't really test this, but it's an idea.\nAlso, you're trying to get user input from the capture function which, using the app as you've defined it, isn't really a nice thing to do in threads. What if a second device is connected while the first prompt is still on screen? Might not play nicely.\nThe design of this thing might be exactly the way you want it for specific reasons, but I can't help feeling like it could be a lot slicker. It's not really designed with threads in mind from what I can tell.\n"
] |
[
2
] |
[] |
[] |
[
"dbus",
"multithreading",
"python"
] |
stackoverflow_0001269466_dbus_multithreading_python.txt
|
Q:
Get information from related object in generic list view
So, I've been noodling about with Django's generic views, specifically the object_list view. I have this in my urls.py:
from django.conf.urls.defaults import *
from django.views.generic import list_detail
from diplomacy.engine.models import Game
game_info = {
"queryset": Game.objects.filter(state__in=('A', 'P')),
"template_object_name": "game",
}
urlpatterns = patterns('',
(r'^$', list_detail.object_list, game_info),
)
and this fairly rough template that it is going to:
{% block content %}
<table>
<tr>
<th>Name</th>
<th>Turn</th>
<th>Last Generated</th>
</tr>
{% for game in game_list %}
<tr>
<td>{{ game.name }}</td>
</tr>
{% endfor %}
</table>
{% endblock %}
What I'm looking for is the best idiomatic way of including in this view the unicode representation and generated field (a DateTimeField) from the most recent Turn that points to the current Game in the loop, based on the value of generated. Turn.game is the field that points to the Game the turn belongs to (a ForeignKey).
Update:
My Turn model is as follows:
SEASON_CHOICES = (
('S', 'Spring'),
('SR', 'Spring Retreat'),
('F', 'Fall'),
('FR', 'Fall Retreat'),
('FB', 'Fall Build')
)
class Turn(models.Model):
game = models.ForeignKey(Game)
year = models.PositiveIntegerField()
season = models.CharField(max_length=2, choices=SEASON_CHOICES)
generated = models.DateTimeField(auto_now_add=True)
def __unicode__(self):
return "%s %s" % (self.season, self.year)
The Game model has not appreciably changed from the way I specified it in this other question.
A:
If Turn.game points to the associated Game object, then {{game.turn_set.all}} should return the set of Turn objects for that game.
You may need to add a Meta class to the Turn model to order from newest to oldest.
Class Meta:
ordering = ['-generated']
Then, {{game.turn_set.all.0}} should return the unicode representation for the newest turn for that game, and {{game.turn_set.all.0.generated}} will return the associated datetime object.
Note: This is untested code.
|
Get information from related object in generic list view
|
So, I've been noodling about with Django's generic views, specifically the object_list view. I have this in my urls.py:
from django.conf.urls.defaults import *
from django.views.generic import list_detail
from diplomacy.engine.models import Game
game_info = {
"queryset": Game.objects.filter(state__in=('A', 'P')),
"template_object_name": "game",
}
urlpatterns = patterns('',
(r'^$', list_detail.object_list, game_info),
)
and this fairly rough template that it is going to:
{% block content %}
<table>
<tr>
<th>Name</th>
<th>Turn</th>
<th>Last Generated</th>
</tr>
{% for game in game_list %}
<tr>
<td>{{ game.name }}</td>
</tr>
{% endfor %}
</table>
{% endblock %}
What I'm looking for is the best idiomatic way of including in this view the unicode representation and generated field (a DateTimeField) from the most recent Turn that points to the current Game in the loop, based on the value of generated. Turn.game is the field that points to the Game the turn belongs to (a ForeignKey).
Update:
My Turn model is as follows:
SEASON_CHOICES = (
('S', 'Spring'),
('SR', 'Spring Retreat'),
('F', 'Fall'),
('FR', 'Fall Retreat'),
('FB', 'Fall Build')
)
class Turn(models.Model):
game = models.ForeignKey(Game)
year = models.PositiveIntegerField()
season = models.CharField(max_length=2, choices=SEASON_CHOICES)
generated = models.DateTimeField(auto_now_add=True)
def __unicode__(self):
return "%s %s" % (self.season, self.year)
The Game model has not appreciably changed from the way I specified it in this other question.
|
[
"If Turn.game points to the associated Game object, then {{game.turn_set.all}} should return the set of Turn objects for that game. \nYou may need to add a Meta class to the Turn model to order from newest to oldest.\nClass Meta:\n ordering = ['-generated']\n\nThen, {{game.turn_set.all.0}} should return the unicode representation for the newest turn for that game, and {{game.turn_set.all.0.generated}} will return the associated datetime object.\nNote: This is untested code.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_templates",
"django_views",
"python"
] |
stackoverflow_0001269625_django_django_templates_django_views_python.txt
|
Q:
How to print Python installation directory to the output?
Let's say Python is installed in the location
C:\TOOLS\COMMON\python\python252
I want to print this location in the output of my program. Please let me know can I do this.
A:
you can use
import sys, os
os.path.dirname(sys.executable)
but remember than in Unix systems the "installation" of a program is usually distributed along the following folders:
/usr/bin (this is what you'll probably get)
/usr/lib
/usr/share
etc.
A:
Maybe either of these will satisfy you:
>>> import sys
>>> print(sys.prefix)
/usr
>>> print(sys.path)
['', '/usr/lib/python25.zip', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2',
'/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload',
'/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages',
'/usr/lib/python2.5/site-packages/Numeric', '/usr/lib/python2.5/site-packages/gst-0.10',
'/var/lib/python-support/python2.5', '/usr/lib/python2.5/site-packages/gtk-2.0',
'/var/lib/python-support/python2.5/gtk-2.0']
A:
Try:
>>> import sys
>>> print sys.prefix
See the documentation for the sys module for more details.
|
How to print Python installation directory to the output?
|
Let's say Python is installed in the location
C:\TOOLS\COMMON\python\python252
I want to print this location in the output of my program. Please let me know can I do this.
|
[
"you can use\nimport sys, os\nos.path.dirname(sys.executable)\n\nbut remember than in Unix systems the \"installation\" of a program is usually distributed along the following folders:\n\n/usr/bin (this is what you'll probably get)\n/usr/lib\n/usr/share\netc.\n\n",
"Maybe either of these will satisfy you:\n>>> import sys\n>>> print(sys.prefix)\n/usr\n>>> print(sys.path)\n['', '/usr/lib/python25.zip', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2',\n'/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload', \n'/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages', \n'/usr/lib/python2.5/site-packages/Numeric', '/usr/lib/python2.5/site-packages/gst-0.10',\n'/var/lib/python-support/python2.5', '/usr/lib/python2.5/site-packages/gtk-2.0', \n'/var/lib/python-support/python2.5/gtk-2.0']\n\n",
"Try:\n>>> import sys\n>>> print sys.prefix\n\nSee the documentation for the sys module for more details.\n"
] |
[
36,
6,
4
] |
[] |
[] |
[
"path",
"python"
] |
stackoverflow_0001270537_path_python.txt
|
Q:
About Python's Mixed Numeric Data Types converting results up to the most complicated operand
A little background: I'm in the process of learning Python through O'Reilly's, "Learning Python" book, I've had some experience in Java.
Anyway, upon reading Chapter 5 (I'm still in the middle of it, actually) I have come across a question with the way Python treats results of Mixed Numeric expressions. In the book, they use an example of mixing an integer and a floating-point number (40 + 3.14) and proceed to explain that the result of this expression would be a floating-point number because Python converts operands up to the type of the most complicated operand.
My question is this: Instead of programmers having to remember which Numeric operand is the highest and remember that the that the result will be "upped" to that format, wouldn't it be simpler to create a special Numeric Literal for result types?
My logic is this: If you have a decimal place in your expression, you know it's going to be a floating point number, if you have something like 3+4j, you know it's going to be a complex number. Why should you have to remember the hierarchy of Numeric Literals just to know what your result is going to be treated as? In my opinion, it seems like it would be a much simpler process to assign results to a single, uninformed Literal to know that regardless of whether or not the expression has Mixed Numerics, it will be treated as a specific Data Type.
Follow up question: Is there a language where this kind of thing is currently being preformed?
Again, my knowledge of Python is limited, so this may be a silly question, but I would like to know why programmers have to put themselves through this process. The only reason why I could imagine that there isn't a system of some kind in place already is that perhaps the specific Numeric Type of a result isn't as important as it is in some other languages (Java).
A:
"If you have a decimal place in your expression, you know it's going to be a floating point number, if you have something like 3+4j, you know it's going to be a complex number."
That is the "hierarchy of Numeric Literals". I'm not really sure what more you want. Furthermore, the result will always be a subclass of numbers.Number, so you actually do have some guarantee about what type the resulting object will be.
A:
Suppose you had a unified numeric type and you typed the following statements:
a = 42
b = 42.24
c = 4 + 2j
How would this be any different from what you get today?
This is already valid Python. The only difference is that type(a), type(b), type(c) return int, float and complex, and I guess you want them all to return something like number. But you never really deal with that unless you want/have to.
There are reasons for having something like a scheme-like numerical tower. You may take advantage of the hardware for integer calculations if you know you're only dealing with itnegers. Or you may derive from the int type when you know you want to restrict some kind of user input to an integer.
I'm sure you can find a language with a type system that has some kind of unified number. But I'm not sure I've understood your argument. Perhaps I've missed something in your question?
A:
Don't think i really understood question:
Do you mean that operation result should be explictly specified?
you can do it with explict cast
float(1+0.5)
int(1+0.5)
complex(1+0.5)
Do you mean that operators should accept only same type operands?
1+2
1+0.5 -> raises exception
1+int(0.5)
float(1)+0.5
while having sense, it would introduce too much verbosity and int->float casts are always successful and don't lead to precision loss (except really large numbers)
Separate operands for different return types:
1 + 0.5 -> int
1 `float_plus` 2 -> float
Duplicates explict cast functionality and is plain sick
|
About Python's Mixed Numeric Data Types converting results up to the most complicated operand
|
A little background: I'm in the process of learning Python through O'Reilly's, "Learning Python" book, I've had some experience in Java.
Anyway, upon reading Chapter 5 (I'm still in the middle of it, actually) I have come across a question with the way Python treats results of Mixed Numeric expressions. In the book, they use an example of mixing an integer and a floating-point number (40 + 3.14) and proceed to explain that the result of this expression would be a floating-point number because Python converts operands up to the type of the most complicated operand.
My question is this: Instead of programmers having to remember which Numeric operand is the highest and remember that the that the result will be "upped" to that format, wouldn't it be simpler to create a special Numeric Literal for result types?
My logic is this: If you have a decimal place in your expression, you know it's going to be a floating point number, if you have something like 3+4j, you know it's going to be a complex number. Why should you have to remember the hierarchy of Numeric Literals just to know what your result is going to be treated as? In my opinion, it seems like it would be a much simpler process to assign results to a single, uninformed Literal to know that regardless of whether or not the expression has Mixed Numerics, it will be treated as a specific Data Type.
Follow up question: Is there a language where this kind of thing is currently being preformed?
Again, my knowledge of Python is limited, so this may be a silly question, but I would like to know why programmers have to put themselves through this process. The only reason why I could imagine that there isn't a system of some kind in place already is that perhaps the specific Numeric Type of a result isn't as important as it is in some other languages (Java).
|
[
"\n\"If you have a decimal place in your expression, you know it's going to be a floating point number, if you have something like 3+4j, you know it's going to be a complex number.\"\n\nThat is the \"hierarchy of Numeric Literals\". I'm not really sure what more you want. Furthermore, the result will always be a subclass of numbers.Number, so you actually do have some guarantee about what type the resulting object will be.\n",
"Suppose you had a unified numeric type and you typed the following statements:\na = 42\nb = 42.24\nc = 4 + 2j\n\nHow would this be any different from what you get today?\nThis is already valid Python. The only difference is that type(a), type(b), type(c) return int, float and complex, and I guess you want them all to return something like number. But you never really deal with that unless you want/have to. \nThere are reasons for having something like a scheme-like numerical tower. You may take advantage of the hardware for integer calculations if you know you're only dealing with itnegers. Or you may derive from the int type when you know you want to restrict some kind of user input to an integer. \nI'm sure you can find a language with a type system that has some kind of unified number. But I'm not sure I've understood your argument. Perhaps I've missed something in your question?\n",
"Don't think i really understood question:\n\nDo you mean that operation result should be explictly specified?\nyou can do it with explict cast\nfloat(1+0.5)\nint(1+0.5)\ncomplex(1+0.5)\nDo you mean that operators should accept only same type operands?\n1+2\n1+0.5 -> raises exception\n1+int(0.5)\nfloat(1)+0.5\nwhile having sense, it would introduce too much verbosity and int->float casts are always successful and don't lead to precision loss (except really large numbers)\nSeparate operands for different return types:\n1 + 0.5 -> int\n1 `float_plus` 2 -> float\nDuplicates explict cast functionality and is plain sick\n\n"
] |
[
5,
2,
0
] |
[] |
[] |
[
"numeric",
"python"
] |
stackoverflow_0001270403_numeric_python.txt
|
Q:
Print out the line with the longest length, the line with the highest sum of ASCII values, or the line with the greatest number of words
I need some help to print out the line with the longest length, the line with the highest sum of ASCII values, or the line with the greatest number of words from a text file. This is my first time programming and I'm really struggling with python and don't know how to calculate want is required for my lab this week. I 've try to work it out but have had no luck so far. CAN ANYONE PLEASE HELP ME?
A:
First work out how to open the file and read a line of text from the file to a string.
Read one line inside a loop and each time you loop work out the length of the string (easy), the number of words (split the string by the ' ' (space) character and count how many words you get) and the sum of the ASCII values (loop through each character in the string keeping a running total of the ascii value of each character).
Once you have your 3 values for the line you can see if they are bigger than any previously found values. You can do that by declaring some variables before the loop to hold the maximum value found so far, and then updating those variables whenever you find a bigger value. You will also need 3 variables to hold the strings which you have found to have the highest of those values.
When your loop finishes you will have read the whole file and found the 3 strings. Print them out.
|
Print out the line with the longest length, the line with the highest sum of ASCII values, or the line with the greatest number of words
|
I need some help to print out the line with the longest length, the line with the highest sum of ASCII values, or the line with the greatest number of words from a text file. This is my first time programming and I'm really struggling with python and don't know how to calculate want is required for my lab this week. I 've try to work it out but have had no luck so far. CAN ANYONE PLEASE HELP ME?
|
[
"First work out how to open the file and read a line of text from the file to a string.\nRead one line inside a loop and each time you loop work out the length of the string (easy), the number of words (split the string by the ' ' (space) character and count how many words you get) and the sum of the ASCII values (loop through each character in the string keeping a running total of the ascii value of each character).\nOnce you have your 3 values for the line you can see if they are bigger than any previously found values. You can do that by declaring some variables before the loop to hold the maximum value found so far, and then updating those variables whenever you find a bigger value. You will also need 3 variables to hold the strings which you have found to have the highest of those values.\nWhen your loop finishes you will have read the whole file and found the 3 strings. Print them out.\n"
] |
[
5
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001270652_python.txt
|
Q:
Why does import of ctypes raise ImportError?
Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\ctypes\__init__.py", line 17, in <module>
from struct import calcsize as _calcsize
ImportError: cannot import name calcsize
>>> from ctypes import *
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\ctypes\__init__.py", line 17, in <module>
from struct import calcsize as _calcsize
ImportError: cannot import name calcsize
>>>
A:
It seems you have another struct.py in your path somewhere.
Try this to see where python finds your struct module:
>>> import inspect
>>> import struct
>>> inspect.getabsfile(struct)
'c:\\python26\\lib\\struct.py'
|
Why does import of ctypes raise ImportError?
|
Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\ctypes\__init__.py", line 17, in <module>
from struct import calcsize as _calcsize
ImportError: cannot import name calcsize
>>> from ctypes import *
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\ctypes\__init__.py", line 17, in <module>
from struct import calcsize as _calcsize
ImportError: cannot import name calcsize
>>>
|
[
"It seems you have another struct.py in your path somewhere.\nTry this to see where python finds your struct module:\n>>> import inspect\n>>> import struct\n>>> inspect.getabsfile(struct)\n'c:\\\\python26\\\\lib\\\\struct.py'\n\n"
] |
[
11
] |
[] |
[] |
[
"ctypes",
"importerror",
"python",
"python_2.6"
] |
stackoverflow_0001270738_ctypes_importerror_python_python_2.6.txt
|
Q:
Most concise way to check whether a list is empty or contains only None?
Most concise way to check whether a list is empty or contains only None?
I understand that I can test:
if MyList:
pass
and:
if not MyList:
pass
but what if the list has an item (or multiple items), but those item/s are None:
MyList = [None, None, None]
if ???:
pass
A:
One way is to use all and a list comprehension:
if all(e is None for e in myList):
print('all empty or None')
This works for empty lists as well. More generally, to test whether the list only contains things that evaluate to False, you can use any:
if not any(myList):
print('all empty or evaluating to False')
A:
You can use the all() function to test is all elements are None:
a = []
b = [None, None, None]
all(e is None for e in a) # True
all(e is None for e in b) # True
A:
You can directly compare lists with ==:
if x == [None,None,None]:
if x == [1,2,3]
A:
If you are concerned with elements in the list which evaluate as true:
if mylist and filter(None, mylist):
print "List is not empty and contains some true values"
else:
print "Either list is empty, or it contains no true values"
If you want to strictly check for None, use filter(lambda x: x is not None, mylist) instead of filter(None, mylist) in the if statement above.
|
Most concise way to check whether a list is empty or contains only None?
|
Most concise way to check whether a list is empty or contains only None?
I understand that I can test:
if MyList:
pass
and:
if not MyList:
pass
but what if the list has an item (or multiple items), but those item/s are None:
MyList = [None, None, None]
if ???:
pass
|
[
"One way is to use all and a list comprehension:\nif all(e is None for e in myList):\n print('all empty or None')\n\nThis works for empty lists as well. More generally, to test whether the list only contains things that evaluate to False, you can use any:\nif not any(myList):\n print('all empty or evaluating to False')\n\n",
"You can use the all() function to test is all elements are None:\na = []\nb = [None, None, None]\nall(e is None for e in a) # True\nall(e is None for e in b) # True\n\n",
"You can directly compare lists with ==:\nif x == [None,None,None]:\n\nif x == [1,2,3]\n\n",
"If you are concerned with elements in the list which evaluate as true:\nif mylist and filter(None, mylist):\n print \"List is not empty and contains some true values\"\nelse:\n print \"Either list is empty, or it contains no true values\"\n\nIf you want to strictly check for None, use filter(lambda x: x is not None, mylist) instead of filter(None, mylist) in the if statement above.\n"
] |
[
15,
9,
4,
2
] |
[] |
[] |
[
"list",
"python",
"types"
] |
stackoverflow_0001270920_list_python_types.txt
|
Q:
How to change the "Event" portlet in Plone 3
I am trying to customize the "Event" portlet in Plone 3 that shows the upcoming events. The "view" link in the footer of that portlet goes to the /events URL. But my site is multi-lingual so that URL is not always correct. For example, the correct URL for Dutch events should be /evenementen.
In my setup I use one folder per language. /en holds all English content, /nl holds all Dutch content, etcetera. The plone root has no portlets so I add the "Event" portlet to both the /nl and /en folder separately. I was looking in the ZMI at the events.pt template and it seems that it takes the URL from a property, but where is that property defines and how can I change it? I can't find the portlet configurations in the ZMI. Here is the snippet from plone.app.portlets.portlets/events.pt:
<dd class="portletFooter">
<a href=""
class="tile"
tal:attributes="href view/all_events_link"
i18n:translate="box_upcoming_events">
Upcoming events…
</a>
<span class="portletBottomLeft"></span>
<span class="portletBottomRight"></span>
</dd>
So, can I somewhere change that all_events_link property in the ZMI? If so, where?
As an alternative I have also tried to add a "Collection" portlet with a collection that lists all events. But the problem is that the collection portlet doesn't want to show the start and end dates for the events.
A:
The events portlet uses a view to provide it with data, and the expression 'view/all_events_link' calls a method on that view to provide it with a link. You have 2 options to replace that link:
Register your own event portlet that subclasses the old one, and replaces the all_events_link method. This in the heavy customization option, and requires Python coding. See this mail thread on some general pointers on how to achieve this.
Replace just the template with a portlet renderer. Martin Aspeli has documented this method on Plone.org; this only requires some ZCML configuration to get working. You can then copy the events.pt template and replace the portlet footer with one that links to the right location.
|
How to change the "Event" portlet in Plone 3
|
I am trying to customize the "Event" portlet in Plone 3 that shows the upcoming events. The "view" link in the footer of that portlet goes to the /events URL. But my site is multi-lingual so that URL is not always correct. For example, the correct URL for Dutch events should be /evenementen.
In my setup I use one folder per language. /en holds all English content, /nl holds all Dutch content, etcetera. The plone root has no portlets so I add the "Event" portlet to both the /nl and /en folder separately. I was looking in the ZMI at the events.pt template and it seems that it takes the URL from a property, but where is that property defines and how can I change it? I can't find the portlet configurations in the ZMI. Here is the snippet from plone.app.portlets.portlets/events.pt:
<dd class="portletFooter">
<a href=""
class="tile"
tal:attributes="href view/all_events_link"
i18n:translate="box_upcoming_events">
Upcoming events…
</a>
<span class="portletBottomLeft"></span>
<span class="portletBottomRight"></span>
</dd>
So, can I somewhere change that all_events_link property in the ZMI? If so, where?
As an alternative I have also tried to add a "Collection" portlet with a collection that lists all events. But the problem is that the collection portlet doesn't want to show the start and end dates for the events.
|
[
"The events portlet uses a view to provide it with data, and the expression 'view/all_events_link' calls a method on that view to provide it with a link. You have 2 options to replace that link:\n\nRegister your own event portlet that subclasses the old one, and replaces the all_events_link method. This in the heavy customization option, and requires Python coding. See this mail thread on some general pointers on how to achieve this.\nReplace just the template with a portlet renderer. Martin Aspeli has documented this method on Plone.org; this only requires some ZCML configuration to get working. You can then copy the events.pt template and replace the portlet footer with one that links to the right location.\n\n"
] |
[
1
] |
[] |
[] |
[
"plone",
"portlet",
"python",
"zope"
] |
stackoverflow_0001271057_plone_portlet_python_zope.txt
|
Q:
python db insert
I am in facing a performance problem in my code.I am making db connection a making a select query and then inserting in a table.Around 500 rows in one select query ids populated .Before inserting i am running select query around 8-9 times first and then inserting then all using cursor.executemany.But it is taking 2 miuntes to insert which is not qood .Any idea
def insert1(id,state,cursor):
cursor.execute("select * from qwert where asd_id =%s",[id])
if sometcondition:
adding.append(rd[i])
cursor.executemany(indata, adding)
where rd[i] is a aray for records making and indata is a insert statement
#prog start here
cursor.execute("select * from assd")
for rows in cursor.fetchall()
if rows[1]=='aq':
insert1(row[1],row[2],cursor)
if rows[1]=='qw':
insert2(row[1],row[2],cursor)
A:
I don't really understand why you're doing this.
It seems that you want to insert a subset of rows from "assd" into one table, and another subset into another table?
Why not just do it with two SQL statements, structured like this:
insert into tab1 select * from assd where asd_id = 42 and cond1 = 'set';
insert into tab2 select * from assd where asd_id = 42 and cond2 = 'set';
That'd dramatically reduce your number of roundtrips to the database and your client-server traffic. It'd also be an order of magnitude faster.
Of course, I'd also strongly recommend that you specify your column names in both the insert and select parts of the code.
|
python db insert
|
I am in facing a performance problem in my code.I am making db connection a making a select query and then inserting in a table.Around 500 rows in one select query ids populated .Before inserting i am running select query around 8-9 times first and then inserting then all using cursor.executemany.But it is taking 2 miuntes to insert which is not qood .Any idea
def insert1(id,state,cursor):
cursor.execute("select * from qwert where asd_id =%s",[id])
if sometcondition:
adding.append(rd[i])
cursor.executemany(indata, adding)
where rd[i] is a aray for records making and indata is a insert statement
#prog start here
cursor.execute("select * from assd")
for rows in cursor.fetchall()
if rows[1]=='aq':
insert1(row[1],row[2],cursor)
if rows[1]=='qw':
insert2(row[1],row[2],cursor)
|
[
"I don't really understand why you're doing this.\nIt seems that you want to insert a subset of rows from \"assd\" into one table, and another subset into another table?\nWhy not just do it with two SQL statements, structured like this:\ninsert into tab1 select * from assd where asd_id = 42 and cond1 = 'set';\ninsert into tab2 select * from assd where asd_id = 42 and cond2 = 'set';\n\nThat'd dramatically reduce your number of roundtrips to the database and your client-server traffic. It'd also be an order of magnitude faster.\nOf course, I'd also strongly recommend that you specify your column names in both the insert and select parts of the code.\n"
] |
[
3
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001271502_python.txt
|
Q:
accessing remote url's in Google App Engine
I don't know the correct terminology to do a google search.
What I want is to make a POST http request to other url, for example
twitter, from within mi app in google app engine.
If the question is unclear please comment.
Thanks!
Manuel
A:
Google provides urlfetch for this.
|
accessing remote url's in Google App Engine
|
I don't know the correct terminology to do a google search.
What I want is to make a POST http request to other url, for example
twitter, from within mi app in google app engine.
If the question is unclear please comment.
Thanks!
Manuel
|
[
"Google provides urlfetch for this.\n"
] |
[
3
] |
[] |
[] |
[
"google_app_engine",
"httprequest",
"python",
"twitter"
] |
stackoverflow_0001271908_google_app_engine_httprequest_python_twitter.txt
|
Q:
parsing CSV files backwards
I have csv files with the following format:
CSV FILE
"a" , "b" , "c" , "d"
hello, world , 1 , 2 , 3
1,2,3,4,5,6,7 , 2 , 456 , 87
h,1231232,3 , 3 , 45 , 44
The problem is that the first field has commas "," in it. I have no control over file generation, as that's the format I receive them in. Is there a way to read a CSV file backwards, from the end of line to the beginning?
I don't mind writing a little python script to do so, if I’m guided in the right direction.
A:
The rsplit string method splits a string starting from the right instead of the left, and so it's probably what you're looking for (it takes an argument specifying the max number of times to split):
line = "hello, world , 1 , 2 , 3"
parts = line.rsplit(",", 3)
print parts # prints ['hello, world ', ' 1 ', ' 2 ', ' 3']
If you want to strip the whitespace from the beginning and end of each item in your splitted list, then you can just use the strip method with a list comprehension
parts = [s.strip() for s in parts]
print parts # prints ['hello, world', '1', '2', '3']
A:
I don't fully understand why you want to read each line in reverse, but you could do this:
import csv
file = open("mycsvfile.csv")
reversedLines = [line[::-1] for line in file]
file.close()
reader = csv.reader(reversedLines)
for backwardRow in reader:
lastField = backwardRow[0][::-1]
secondField = backwardRow[1][::-1]
A:
You could always do something with regex's, like (perl regex)
#!/usr/bin/perl
use IO::File;
if (my $file = new IO::File("test.csv"))
{
foreach my $line (<$file>) {
$line =~ m/^(.*),(.*?),(.*?),(.*?)$/;
print "[$1][$2][$3][$4]\n";
}
} else {
print "Unable to open test.csv\n";
}
(The first is a greedy search, the last 3 are not)
Edit: posted full code instead of just the regex
A:
Reverse the string first and then process it.
tmp = tmp[::-1]
A:
From the sample You have provided, it looks like "columns" are fixed size. First (the one with commas) is 16 characters long, so why don't You try reading the file line by line, then for each line reading the first 16 characters (as a value of first column), and the rest accordingly? After You have each value, You can go and parse it further (trim whitespaces, and so on...).
A:
That's not then a CSV file, comma separated means just that.
How can you be certain that is not:
CSV FILE
"a" , "b" , "c" , "d"
hello , world , 1 , 2 , 3
1 , 2 , 3 , 4 , 5,6,7,2,456,87
h , 1231232 , 3 , 3 , 45,44
If the file is as you indicate then the first group should be surrounded by quotes, looks as though the field names are so odd that fields containing commas are not.
I'm not a fan of fixing errors away from their source, I'd push back to the data generator to deliver proper CSV if that's what they are claiming it is.
A:
If you always expect the same number of columns, and only the first column can contain commas, just read anything and concatenate excess columns at the beginning.
The problem is that the interface is ambiguous, and you can try to circumvent this, but the better solution is to try to get the interface fixed (which is often harder than creating several patches...).
A:
I agree with mr beer. That is a badly formed csv file. Your best bet is to find other delimiters or stop overloading the commas or quote/escape the non field separating commas
|
parsing CSV files backwards
|
I have csv files with the following format:
CSV FILE
"a" , "b" , "c" , "d"
hello, world , 1 , 2 , 3
1,2,3,4,5,6,7 , 2 , 456 , 87
h,1231232,3 , 3 , 45 , 44
The problem is that the first field has commas "," in it. I have no control over file generation, as that's the format I receive them in. Is there a way to read a CSV file backwards, from the end of line to the beginning?
I don't mind writing a little python script to do so, if I’m guided in the right direction.
|
[
"The rsplit string method splits a string starting from the right instead of the left, and so it's probably what you're looking for (it takes an argument specifying the max number of times to split):\nline = \"hello, world , 1 , 2 , 3\"\nparts = line.rsplit(\",\", 3)\nprint parts # prints ['hello, world ', ' 1 ', ' 2 ', ' 3']\n\nIf you want to strip the whitespace from the beginning and end of each item in your splitted list, then you can just use the strip method with a list comprehension\nparts = [s.strip() for s in parts]\nprint parts # prints ['hello, world', '1', '2', '3']\n\n",
"I don't fully understand why you want to read each line in reverse, but you could do this:\nimport csv\nfile = open(\"mycsvfile.csv\")\nreversedLines = [line[::-1] for line in file]\nfile.close()\nreader = csv.reader(reversedLines)\nfor backwardRow in reader:\n lastField = backwardRow[0][::-1]\n secondField = backwardRow[1][::-1]\n\n",
"You could always do something with regex's, like (perl regex)\n#!/usr/bin/perl\n\nuse IO::File;\n\nif (my $file = new IO::File(\"test.csv\"))\n{\n foreach my $line (<$file>) {\n $line =~ m/^(.*),(.*?),(.*?),(.*?)$/;\n print \"[$1][$2][$3][$4]\\n\";\n }\n} else {\n print \"Unable to open test.csv\\n\";\n}\n\n(The first is a greedy search, the last 3 are not)\nEdit: posted full code instead of just the regex\n",
"Reverse the string first and then process it. \ntmp = tmp[::-1]\n",
"From the sample You have provided, it looks like \"columns\" are fixed size. First (the one with commas) is 16 characters long, so why don't You try reading the file line by line, then for each line reading the first 16 characters (as a value of first column), and the rest accordingly? After You have each value, You can go and parse it further (trim whitespaces, and so on...).\n",
"That's not then a CSV file, comma separated means just that.\nHow can you be certain that is not:\nCSV FILE\n\"a\" , \"b\" , \"c\" , \"d\"\nhello , world , 1 , 2 , 3\n1 , 2 , 3 , 4 , 5,6,7,2,456,87\nh , 1231232 , 3 , 3 , 45,44\n\nIf the file is as you indicate then the first group should be surrounded by quotes, looks as though the field names are so odd that fields containing commas are not.\nI'm not a fan of fixing errors away from their source, I'd push back to the data generator to deliver proper CSV if that's what they are claiming it is.\n",
"If you always expect the same number of columns, and only the first column can contain commas, just read anything and concatenate excess columns at the beginning.\nThe problem is that the interface is ambiguous, and you can try to circumvent this, but the better solution is to try to get the interface fixed (which is often harder than creating several patches...).\n",
"I agree with mr beer. That is a badly formed csv file. Your best bet is to find other delimiters or stop overloading the commas or quote/escape the non field separating commas\n"
] |
[
16,
4,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"csv",
"parsing",
"python",
"readline"
] |
stackoverflow_0001272315_csv_parsing_python_readline.txt
|
Q:
Alternatives to mod_python's CGI handler
I'm looking for the simplest way of using python and SQLAlchemy to produce some XML for a jQuery based HTTP client. Right now I'm using mod_python's CGI handler but I'm unhappy with the fact that I can't persist stuff like the SQLAlchemy session.
The mod_python publisher handler that is apparently capable of persisting stuff does not allow requests with XML content type (as used by jQuery's ajax stuff) so I can't use it.
What other options are there?
A:
You could always write your own handler, which is the way mod_python is normally intended to be used. You would have to set some HTTP headers (and you could have a look at the publisher handler's source code for inspiration on that), but otherwise I don't think it's much more complicated than what you've been trying to do.
Though as long as you're at it, I would suggest trying mod_wsgi instead of mod_python, which is probably eventually going to supersede mod_python. WSGI is a Python standard for writing web applications.
|
Alternatives to mod_python's CGI handler
|
I'm looking for the simplest way of using python and SQLAlchemy to produce some XML for a jQuery based HTTP client. Right now I'm using mod_python's CGI handler but I'm unhappy with the fact that I can't persist stuff like the SQLAlchemy session.
The mod_python publisher handler that is apparently capable of persisting stuff does not allow requests with XML content type (as used by jQuery's ajax stuff) so I can't use it.
What other options are there?
|
[
"You could always write your own handler, which is the way mod_python is normally intended to be used. You would have to set some HTTP headers (and you could have a look at the publisher handler's source code for inspiration on that), but otherwise I don't think it's much more complicated than what you've been trying to do.\nThough as long as you're at it, I would suggest trying mod_wsgi instead of mod_python, which is probably eventually going to supersede mod_python. WSGI is a Python standard for writing web applications.\n"
] |
[
2
] |
[] |
[] |
[
"cgi",
"mod_python",
"python"
] |
stackoverflow_0001272325_cgi_mod_python_python.txt
|
Q:
wxPython: Changing the color scheme of a wx.stc.StyledTextCtrl
I have a PyShell, which is supposed to be derived from wx.stc.StyledTextCtrl. How do I change the color scheme that it currently uses?
A:
You can use
styledTextCtrl.StyleSetSpec(wx.stc.STC_STYLE_INDENTGUIDE, "fore:#CDCDCD")
(Bunch of .StyleSetSpec properties)
...
...
...
styCtrl.SetCaretForeground("BLUE")
styCtrl.SetSelBackground(True, wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHT))
styCtrl.SetSelForeground(True, wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHTTEXT))
...
(Bunch of Set*() commands)
Don't know if there is a way to load a pre-defined color scheme.
You could define it in YAML and load it up via the commands above and more.
|
wxPython: Changing the color scheme of a wx.stc.StyledTextCtrl
|
I have a PyShell, which is supposed to be derived from wx.stc.StyledTextCtrl. How do I change the color scheme that it currently uses?
|
[
"You can use \nstyledTextCtrl.StyleSetSpec(wx.stc.STC_STYLE_INDENTGUIDE, \"fore:#CDCDCD\")\n(Bunch of .StyleSetSpec properties)\n...\n...\n...\nstyCtrl.SetCaretForeground(\"BLUE\")\nstyCtrl.SetSelBackground(True, wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHT))\nstyCtrl.SetSelForeground(True, wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHTTEXT))\n...\n(Bunch of Set*() commands)\nDon't know if there is a way to load a pre-defined color scheme.\nYou could define it in YAML and load it up via the commands above and more.\n"
] |
[
0
] |
[] |
[] |
[
"color_scheme",
"python",
"user_interface",
"wxpython"
] |
stackoverflow_0001211380_color_scheme_python_user_interface_wxpython.txt
|
Q:
Using Python (Bash?) to get OS-level system information (CPU Speed)
I want to repeat this question using python. Reason is I have access to 10 nodes in a cluster and each node is not identical. They range in performance and I want to find which is the best computer to use remotely based on memory and cpu-speed/cores available.
EDIT: Heck, even just a command line interface would be useful. Any quick and dirty solutions?
A:
Take a look at the SIGAR library which has an extensive API for collecting system data cross-platform. It also has libraries available in many languages (Python, Java, Erlang, Ruby, etc).
|
Using Python (Bash?) to get OS-level system information (CPU Speed)
|
I want to repeat this question using python. Reason is I have access to 10 nodes in a cluster and each node is not identical. They range in performance and I want to find which is the best computer to use remotely based on memory and cpu-speed/cores available.
EDIT: Heck, even just a command line interface would be useful. Any quick and dirty solutions?
|
[
"Take a look at the SIGAR library which has an extensive API for collecting system data cross-platform. It also has libraries available in many languages (Python, Java, Erlang, Ruby, etc).\n"
] |
[
1
] |
[] |
[] |
[
"cpu_speed",
"performance",
"python"
] |
stackoverflow_0001272903_cpu_speed_performance_python.txt
|
Q:
django problem uploading and saving documents
I am working on a django app. One part would involve uploading files (e.g. spreadsheet or whatever). I am getting this error:
IOError at /fileupload/
[Errno 13] Permission denied: 'fyi.xml'
Where 'fileupload' was the django app name and 'fyi.xml' was the test document I was uploading.
So, I used chmod and chown to make the [project directory]/static/documents/ folder writable to apache. Actually I even tried just making it chmod 777, still no luck.
So, in my settings.py I just changed where my MEDIA_ROOT was:
MEDIA_ROOT = '/var/www/static/'
Then, in case it was an SELinux thing, I created the new documents directory in /var/www/static'...
drwxr-xr-x 2 apache root 4096 Aug 13 11:20 documents
Then I did these commands to try to change the context so apache would be allowed to write here. I'm not too familiar with this distro, it's the flavor of Red Hat we're given, so I've never had to go beyond chmod and/or chown to fix a permissions problem.
sudo chcon -h system_u:object_r:httpd_sys_content_t /var/www/static
sudo chcon -R -h root:object_r:httpd_sys_content_t /var/www/static
sudo chcon -R -h root:object_r:httpd_sys_content_t /var/www/static/*
None of this made any difference. To be honest, I'm not positive that I even have SELinux here but since normal unix permissions didn't seem to work I thought I'd try it.
So, does anyone have an idea on what to look at next? Not sure how much code I should post here, but in case it would be helpful here's what's in my views.py:
views.py
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response
from forms import UploadFileForm
from fyi.models import Materials
def handle_uploaded_file(f):
destination = open('fyi.xml', 'wb+')
for chunk in f.chunks():
destination.write(chunk)
destination.close()
def upload_file(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
handle_uploaded_file(request.FILES['document'])
form.save()
template = 'upload_success.html'
else:
form = UploadFileForm()
template = 'fileupload.html'
return render_to_response( template, {'form': form})
...any help would be appreciated.
A:
Maybe try changing:
destination = open('fyi.xml', 'wb+')
to something like:
upload_dir = settings.MEDIA_ROOT # or wherever
destination = open(os.path.join(upload_dir, 'fyi.xml'), 'wb+')
If it is an SELinux issue, perhaps this page would help:
http://blog.chrisramsay.co.uk/2009/05/22/writing-files-with-django-under-selinux/
|
django problem uploading and saving documents
|
I am working on a django app. One part would involve uploading files (e.g. spreadsheet or whatever). I am getting this error:
IOError at /fileupload/
[Errno 13] Permission denied: 'fyi.xml'
Where 'fileupload' was the django app name and 'fyi.xml' was the test document I was uploading.
So, I used chmod and chown to make the [project directory]/static/documents/ folder writable to apache. Actually I even tried just making it chmod 777, still no luck.
So, in my settings.py I just changed where my MEDIA_ROOT was:
MEDIA_ROOT = '/var/www/static/'
Then, in case it was an SELinux thing, I created the new documents directory in /var/www/static'...
drwxr-xr-x 2 apache root 4096 Aug 13 11:20 documents
Then I did these commands to try to change the context so apache would be allowed to write here. I'm not too familiar with this distro, it's the flavor of Red Hat we're given, so I've never had to go beyond chmod and/or chown to fix a permissions problem.
sudo chcon -h system_u:object_r:httpd_sys_content_t /var/www/static
sudo chcon -R -h root:object_r:httpd_sys_content_t /var/www/static
sudo chcon -R -h root:object_r:httpd_sys_content_t /var/www/static/*
None of this made any difference. To be honest, I'm not positive that I even have SELinux here but since normal unix permissions didn't seem to work I thought I'd try it.
So, does anyone have an idea on what to look at next? Not sure how much code I should post here, but in case it would be helpful here's what's in my views.py:
views.py
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response
from forms import UploadFileForm
from fyi.models import Materials
def handle_uploaded_file(f):
destination = open('fyi.xml', 'wb+')
for chunk in f.chunks():
destination.write(chunk)
destination.close()
def upload_file(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
handle_uploaded_file(request.FILES['document'])
form.save()
template = 'upload_success.html'
else:
form = UploadFileForm()
template = 'fileupload.html'
return render_to_response( template, {'form': form})
...any help would be appreciated.
|
[
"Maybe try changing:\ndestination = open('fyi.xml', 'wb+')\n\nto something like:\nupload_dir = settings.MEDIA_ROOT # or wherever\ndestination = open(os.path.join(upload_dir, 'fyi.xml'), 'wb+')\n\nIf it is an SELinux issue, perhaps this page would help:\n\nhttp://blog.chrisramsay.co.uk/2009/05/22/writing-files-with-django-under-selinux/\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"file",
"permissions",
"python"
] |
stackoverflow_0001273285_django_file_permissions_python.txt
|
Q:
What is the importance of an IDE when programming in Python?
I'm a beginning Python programmer, just getting my feet wet in the language and its tools and native practices. In the past, I've used languages that were tightly integrated into IDEs, and indeed I had never before considered that it was even possible to program outside of such a tool.
However, much of the documentation and tutorials for Python eschew any sort of IDE, relying instead on powerful editors and interactive interpreters for writing and teaching the language.
How important is an IDE to normal Python development?
Are there good IDEs available for the language?
If you do use an IDE for Python, how do you use it effectively?
A:
IDEs arent very useful in Python; powerful editors such as Emacs and Vim seem very popular among Python programmers.
This may confuse e.g. Java programmers, because in Java each file generally requires boilerplate code, such as a package statement, getters and setters.
Python is much more lightweight in comparison.
If you're looking for an equivalent to Visual Studio or Eclipse, there is... Eclipse, with Pydev.
Emacs and Vim are very powerful and general, but have a steep learning curve.
If you want to use Emacs, I highly recommend python mode; it's much better than the default Python mode.
A:
A matter of habit and personal preferences. Me, I use vim (I have to admit emacs is at least as powerful, but my fingers are deeply trained by over 30 years of vi, and any other editor gives me the jitters, especially when it tries to imitate vi and never really manages to get it 100% right;-), occasionally an interactive environment (python itself, sometimes ipython), and on even rarer occasions a debugger (pdb). A good editor gives me all I need in term of word completion, lookup, &c.
I've tried Eclipse, its plugins, eric, and Kommodo, but I just don't like them -- Wing, I think I could get used to, and I have to admit its debugger is absolutely out of this world... but, I very rarely use (or need!) advanced debugging functionality, so after every rare occasion I'd forget, and have to learn it all over again a few months later when the need arose again... nah!-)
A:
How important is an IDE to normal Python development?
Not very, IMHO. It's a lightweight language with much less boilerplate and simpler idioms than in some other languages, so there's less need for an IDE for that part.
The standard interactive interpreter provides help and introspection functionality and a reasonable debugger (pdb). When I want a graphical look at my class hierarchies, I use epydoc to generate it.
The only IDE-like functionality I sometimes wish I had is something that would help automate refactoring.
Are there good IDEs available for the language?
So I hear. Some of my coworkers use Wing.
If you do use an IDE for Python, how do you use it effectively?
N/A. I tried using Wing a few times but found that it interfered with my normal development process rather than supporting it.
A:
The IDE you use is a personal and subjective thing, but it definitely matters. Personally, for writing short scripts or working with python interactively, I use PyDee available at http://pydee.googlecode.com/ . It is well done, fairly lightweight, but with good introspection capabilities.
For larger projects involving multiple components, I prefer Eclipse with appropriate plugins. It has very sophisticated management and introspection capabilities. You can download it separately or get it as part of Python (X,Y) at http://www.pythonxy.com/ .
A:
In contrast to the other answers i think that IDE's are very important especially for script languages. Almost all code is bad documentated and an IDE with a good debugger gives you much insides about what is really going on what datatypes are assigned to this values. Is this a hash of lists of hashes or a list of hashs of hashs.
And the easy documentation lookup will save you time.
But this is only important for people who need to count there time, this normally excludes beginners or hobbyists.
A:
(1) IDEs are less important than for other languages, but if you find one that is useful, it still makes things easier. Without IDEs -- what are doing? Always running Python from command line?
(2-3) On my Mac there's included IDLE which I keep always open for its Python shell (it's colored unlike the one in Terminal) and I use free Komodo Edit which I consider to be well-suited for Python as it doesn't go into the language deeply but rather focuses on coloring, tab management, parsing Python output, running frequent commands etc.
|
What is the importance of an IDE when programming in Python?
|
I'm a beginning Python programmer, just getting my feet wet in the language and its tools and native practices. In the past, I've used languages that were tightly integrated into IDEs, and indeed I had never before considered that it was even possible to program outside of such a tool.
However, much of the documentation and tutorials for Python eschew any sort of IDE, relying instead on powerful editors and interactive interpreters for writing and teaching the language.
How important is an IDE to normal Python development?
Are there good IDEs available for the language?
If you do use an IDE for Python, how do you use it effectively?
|
[
"IDEs arent very useful in Python; powerful editors such as Emacs and Vim seem very popular among Python programmers.\nThis may confuse e.g. Java programmers, because in Java each file generally requires boilerplate code, such as a package statement, getters and setters.\nPython is much more lightweight in comparison.\nIf you're looking for an equivalent to Visual Studio or Eclipse, there is... Eclipse, with Pydev.\nEmacs and Vim are very powerful and general, but have a steep learning curve.\nIf you want to use Emacs, I highly recommend python mode; it's much better than the default Python mode.\n",
"A matter of habit and personal preferences. Me, I use vim (I have to admit emacs is at least as powerful, but my fingers are deeply trained by over 30 years of vi, and any other editor gives me the jitters, especially when it tries to imitate vi and never really manages to get it 100% right;-), occasionally an interactive environment (python itself, sometimes ipython), and on even rarer occasions a debugger (pdb). A good editor gives me all I need in term of word completion, lookup, &c.\nI've tried Eclipse, its plugins, eric, and Kommodo, but I just don't like them -- Wing, I think I could get used to, and I have to admit its debugger is absolutely out of this world... but, I very rarely use (or need!) advanced debugging functionality, so after every rare occasion I'd forget, and have to learn it all over again a few months later when the need arose again... nah!-)\n",
"\nHow important is an IDE to normal Python development?\n\nNot very, IMHO. It's a lightweight language with much less boilerplate and simpler idioms than in some other languages, so there's less need for an IDE for that part. \nThe standard interactive interpreter provides help and introspection functionality and a reasonable debugger (pdb). When I want a graphical look at my class hierarchies, I use epydoc to generate it. \nThe only IDE-like functionality I sometimes wish I had is something that would help automate refactoring. \n\nAre there good IDEs available for the language?\n\nSo I hear. Some of my coworkers use Wing.\n\nIf you do use an IDE for Python, how do you use it effectively?\n\nN/A. I tried using Wing a few times but found that it interfered with my normal development process rather than supporting it.\n",
"The IDE you use is a personal and subjective thing, but it definitely matters. Personally, for writing short scripts or working with python interactively, I use PyDee available at http://pydee.googlecode.com/ . It is well done, fairly lightweight, but with good introspection capabilities. \nFor larger projects involving multiple components, I prefer Eclipse with appropriate plugins. It has very sophisticated management and introspection capabilities. You can download it separately or get it as part of Python (X,Y) at http://www.pythonxy.com/ .\n",
"In contrast to the other answers i think that IDE's are very important especially for script languages. Almost all code is bad documentated and an IDE with a good debugger gives you much insides about what is really going on what datatypes are assigned to this values. Is this a hash of lists of hashes or a list of hashs of hashs. \nAnd the easy documentation lookup will save you time.\nBut this is only important for people who need to count there time, this normally excludes beginners or hobbyists.\n",
"(1) IDEs are less important than for other languages, but if you find one that is useful, it still makes things easier. Without IDEs -- what are doing? Always running Python from command line?\n(2-3) On my Mac there's included IDLE which I keep always open for its Python shell (it's colored unlike the one in Terminal) and I use free Komodo Edit which I consider to be well-suited for Python as it doesn't go into the language deeply but rather focuses on coloring, tab management, parsing Python output, running frequent commands etc.\n"
] |
[
9,
4,
3,
1,
1,
0
] |
[] |
[] |
[
"ide",
"python"
] |
stackoverflow_0001250295_ide_python.txt
|
Q:
Porting a Python app that uses Psyco to Mac
I'm trying to port my Python app from Windows to Mac. My app uses Psyco. How exactly do I install Psyco on Mac?
Keep in mind I'm a Mac newbie.
A:
First, you need Apple's XCode installed (well, specifically you only need the gcc compiler that comes with it, but installing the whole thing is simpler;-). If you want the latest and greatest, sign up for ADC at the lowest (free!-) level and download from there; otherwise it should be in your OSX DVD (or, depending on OSX level and how you installed the OS, the installer might already be on your hard disk).
To verify XCode's properly installed, at a Terminal.app enter gcc and you should see a message such as i686-apple-darwin9-gcc-4.0.1: no input files.
Once that works, download psyco's sources from here, unpack them (you probably can get it done during the download, worst case use tar xzf psyco-1.6-src.tar.gz in Terminal.app after cd'ing to the directory you've downloaded that tar.gz to), cd into the new psyco-1.6 directory.
Then do python setup.py install at the Terminal.app shell prompt. Depending on how exactly you installed things, you may need to use sudo python setup.py install and give your password to enable writing into the system directories.
|
Porting a Python app that uses Psyco to Mac
|
I'm trying to port my Python app from Windows to Mac. My app uses Psyco. How exactly do I install Psyco on Mac?
Keep in mind I'm a Mac newbie.
|
[
"First, you need Apple's XCode installed (well, specifically you only need the gcc compiler that comes with it, but installing the whole thing is simpler;-). If you want the latest and greatest, sign up for ADC at the lowest (free!-) level and download from there; otherwise it should be in your OSX DVD (or, depending on OSX level and how you installed the OS, the installer might already be on your hard disk).\nTo verify XCode's properly installed, at a Terminal.app enter gcc and you should see a message such as i686-apple-darwin9-gcc-4.0.1: no input files.\nOnce that works, download psyco's sources from here, unpack them (you probably can get it done during the download, worst case use tar xzf psyco-1.6-src.tar.gz in Terminal.app after cd'ing to the directory you've downloaded that tar.gz to), cd into the new psyco-1.6 directory.\nThen do python setup.py install at the Terminal.app shell prompt. Depending on how exactly you installed things, you may need to use sudo python setup.py install and give your password to enable writing into the system directories.\n"
] |
[
2
] |
[
"The News page shows that a Mac port is still being written. Learn how to install from source code using Make. Apply that patch, compile, and install.\n"
] |
[
-1
] |
[
"macos",
"psyco",
"python"
] |
stackoverflow_0001273546_macos_psyco_python.txt
|
Q:
What is the correct way to generate a json from file in GoogleAppEngine?
I'm quite new to python and GAE, can anyone please provide some help/sample code for doing the following simple task? I managed to read a simple file and output it as a webpage but I need some slightly more complicated logic. Here is the pseudo code:
open file;
for each line in file {
store first line as album title;
for each song read {
store first line as song title;
store second line as song URL;
}
}
Output the read in data as a json;
The file format will be something like this
Album title1 song1 title song1 url song2 title song2 url Album title2 song1 title song1 url song2 title song2 url ..
A:
Here's a generator-based solution with a few nice features:
Tolerates multiple blank lines between albums in text file
Tolerates leading/trailing blank lines in text file
Uses only an album's worth of memory at a time
Demonstrates a lot of neato things you can do with Python :)
albums.txt
Album title1
song1 title
song1 url
song2 title
song2 url
Album title2
song1 title
song1 url
song2 title
song2 url
Code
from django.utils import simplejson
def gen_groups(lines):
""" Returns contiguous groups of lines in a file """
group = []
for line in lines:
line = line.strip()
if not line and group:
yield group
group = []
elif line:
group.append(line)
def gen_albums(groups):
""" Given groups of lines in an album file, returns albums """
for group in groups:
title = group.pop(0)
songinfo = zip(*[iter(group)]*2)
songs = [dict(title=title,url=url) for title,url in songinfo]
album = dict(title=title, songs=songs)
yield album
input = open('albums.txt')
groups = gen_groups(input)
albums = gen_albums(groups)
print simplejson.dumps(list(albums))
Output
[{"songs": [{"url": "song1 url", "title": "song1 title"}, {"url": "song2 url", "title": "song2 title"}], "title": "song2
title"},
{"songs": [{"url": "song1 url", "title": "song1 title"}, {"url": "song2 url", "title": "song2 title"}], "title": "song2
title"}]
Album information could then be accessed in Javascript like so:
var url = albums[1].songs[0].url;
Lastly, here's a note about that tricky zip line.
A:
from django.utils import simplejson
def albums(f):
"" yields lists of strings which are the
stripped lines for an album (blocks of
nonblank lines separated by blocks of
blank ones.
"""
while True:
# skip leading blank lines if any
for line in f:
if not line: return
line = line.strip()
if line: break
result = [line]
# read up to next blank line or EOF
for line in f:
if not line:
yield result
return
line = line.strip()
if not line: break
result.append(line)
yield result
def songs(album):
""" yields lists of 2 lines, one list per song.
"""
for i in xrange(1, len(album), 2):
yield (album[i:i+2] + ['??'])[:2]
result = dict()
f = open('thefile.txt')
for albumlines in albums(f):
current = result[albumlines[0]] = []
for songlines in songs(albumlines):
current.append( {
'songtitle': songlines[0],
'songurl': songlines[1]
} )
response.out.write(simplejson.dumps(result))
|
What is the correct way to generate a json from file in GoogleAppEngine?
|
I'm quite new to python and GAE, can anyone please provide some help/sample code for doing the following simple task? I managed to read a simple file and output it as a webpage but I need some slightly more complicated logic. Here is the pseudo code:
open file;
for each line in file {
store first line as album title;
for each song read {
store first line as song title;
store second line as song URL;
}
}
Output the read in data as a json;
The file format will be something like this
Album title1 song1 title song1 url song2 title song2 url Album title2 song1 title song1 url song2 title song2 url ..
|
[
"Here's a generator-based solution with a few nice features:\n\nTolerates multiple blank lines between albums in text file \nTolerates leading/trailing blank lines in text file \nUses only an album's worth of memory at a time \nDemonstrates a lot of neato things you can do with Python :) \n\nalbums.txt\nAlbum title1\nsong1 title\nsong1 url\nsong2 title\nsong2 url\n\nAlbum title2\nsong1 title\nsong1 url\nsong2 title\nsong2 url\n\nCode\nfrom django.utils import simplejson\n\ndef gen_groups(lines):\n \"\"\" Returns contiguous groups of lines in a file \"\"\"\n\n group = []\n\n for line in lines:\n line = line.strip()\n if not line and group:\n yield group\n group = []\n elif line:\n group.append(line)\n\n\ndef gen_albums(groups):\n \"\"\" Given groups of lines in an album file, returns albums \"\"\"\n\n for group in groups:\n title = group.pop(0)\n songinfo = zip(*[iter(group)]*2)\n songs = [dict(title=title,url=url) for title,url in songinfo]\n album = dict(title=title, songs=songs)\n\n yield album\n\n\ninput = open('albums.txt')\ngroups = gen_groups(input)\nalbums = gen_albums(groups)\n\nprint simplejson.dumps(list(albums))\n\nOutput\n[{\"songs\": [{\"url\": \"song1 url\", \"title\": \"song1 title\"}, {\"url\": \"song2 url\", \"title\": \"song2 title\"}], \"title\": \"song2\ntitle\"},\n{\"songs\": [{\"url\": \"song1 url\", \"title\": \"song1 title\"}, {\"url\": \"song2 url\", \"title\": \"song2 title\"}], \"title\": \"song2\ntitle\"}]\n\nAlbum information could then be accessed in Javascript like so:\nvar url = albums[1].songs[0].url;\n\nLastly, here's a note about that tricky zip line.\n",
"from django.utils import simplejson\n\ndef albums(f):\n \"\" yields lists of strings which are the\n stripped lines for an album (blocks of\n nonblank lines separated by blocks of\n blank ones.\n \"\"\"\n while True:\n # skip leading blank lines if any\n for line in f:\n if not line: return\n line = line.strip()\n if line: break\n result = [line]\n # read up to next blank line or EOF\n for line in f:\n if not line:\n yield result\n return\n line = line.strip()\n if not line: break\n result.append(line)\n yield result\n\ndef songs(album):\n \"\"\" yields lists of 2 lines, one list per song.\n \"\"\"\n for i in xrange(1, len(album), 2):\n yield (album[i:i+2] + ['??'])[:2]\n\nresult = dict()\nf = open('thefile.txt')\nfor albumlines in albums(f):\n current = result[albumlines[0]] = []\n for songlines in songs(albumlines):\n current.append( {\n 'songtitle': songlines[0],\n 'songurl': songlines[1]\n } )\n\nresponse.out.write(simplejson.dumps(result))\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"file_io",
"google_app_engine",
"python"
] |
stackoverflow_0001274035_file_io_google_app_engine_python.txt
|
Q:
How to create new folder?
I want to put output information of my program to a folder. if given folder does not exist, then the program should create a new folder with folder name as given in the program. Is this possible? If yes, please let me know how.
Suppose I have given folder path like "C:\Program Files\alex" and alex folder doesn't exist then program should create alex folder and should put output information in the alex folder.
A:
You can create a folder with os.makedirs()
and use os.path.exists() to see if it already exists:
newpath = r'C:\Program Files\arbitrary'
if not os.path.exists(newpath):
os.makedirs(newpath)
If you're trying to make an installer: Windows Installer does a lot of work for you.
A:
Have you tried os.mkdir?
You might also try this little code snippet:
mypath = ...
if not os.path.isdir(mypath):
os.makedirs(mypath)
makedirs creates multiple levels of directories, if needed.
A:
You probably want os.makedirs as it will create intermediate directories as well, if needed.
import os
#dir is not keyword
def makemydir(whatever):
try:
os.makedirs(whatever)
except OSError:
pass
# let exception propagate if we just can't
# cd into the specified directory
os.chdir(whatever)
|
How to create new folder?
|
I want to put output information of my program to a folder. if given folder does not exist, then the program should create a new folder with folder name as given in the program. Is this possible? If yes, please let me know how.
Suppose I have given folder path like "C:\Program Files\alex" and alex folder doesn't exist then program should create alex folder and should put output information in the alex folder.
|
[
"You can create a folder with os.makedirs()\nand use os.path.exists() to see if it already exists:\nnewpath = r'C:\\Program Files\\arbitrary' \nif not os.path.exists(newpath):\n os.makedirs(newpath)\n\nIf you're trying to make an installer: Windows Installer does a lot of work for you.\n",
"Have you tried os.mkdir?\nYou might also try this little code snippet:\nmypath = ...\nif not os.path.isdir(mypath):\n os.makedirs(mypath)\n\nmakedirs creates multiple levels of directories, if needed.\n",
"You probably want os.makedirs as it will create intermediate directories as well, if needed.\nimport os\n\n#dir is not keyword\ndef makemydir(whatever):\n try:\n os.makedirs(whatever)\n except OSError:\n pass\n # let exception propagate if we just can't\n # cd into the specified directory\n os.chdir(whatever)\n\n"
] |
[
432,
57,
40
] |
[] |
[] |
[
"mkdir",
"python"
] |
stackoverflow_0001274405_mkdir_python.txt
|
Q:
How can I create a list of files in the current directory and its subdirectories with a given extension?
I'm trying to generate a text file that has a list of all files in the current directory and all of its sub-directories with the extension ".asp". What would be the best way to do this?
A:
You'll want to use os.walk which will make that trivial.
import os
asps = []
for root, dirs, files in os.walk(r'C:\web'):
for file in files:
if file.endswith('.asp'):
asps.append(file)
A:
walk the tree with os.walk and filter content with glob:
import os
import glob
asps = []
for root, dirs, files in os.walk('/path/to/dir'):
asps += glob.glob(os.path.join(root, '*.asp'))
or with fnmatch.filter:
import fnmatch
for root, dirs, files in os.walk('/path/to/dir'):
asps += fnmatch.filter(files, '*.asp')
|
How can I create a list of files in the current directory and its subdirectories with a given extension?
|
I'm trying to generate a text file that has a list of all files in the current directory and all of its sub-directories with the extension ".asp". What would be the best way to do this?
|
[
"You'll want to use os.walk which will make that trivial.\nimport os\n\nasps = []\nfor root, dirs, files in os.walk(r'C:\\web'):\n for file in files:\n if file.endswith('.asp'):\n asps.append(file)\n\n",
"walk the tree with os.walk and filter content with glob:\nimport os\nimport glob\n\nasps = []\nfor root, dirs, files in os.walk('/path/to/dir'):\n asps += glob.glob(os.path.join(root, '*.asp'))\n\nor with fnmatch.filter:\nimport fnmatch\nfor root, dirs, files in os.walk('/path/to/dir'):\n asps += fnmatch.filter(files, '*.asp')\n\n"
] |
[
20,
4
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001274506_python.txt
|
Q:
How can I write 'n <<= 1' (Python) in PHP?
I have the Python expression n <<= 1
How do you express this in PHP?
A:
That statement is short for
n = n << 1;
the << operator is means a bitwise shift left, by n positions. Its counterpart is >>, which means shift right by n. To visualize, say you have the value 5, and you want to shift it left by 2 positions. In binary:
0000 0101 -> 5
shift left by 2:
0001 0100 -> 20
Basically, you shift all bits in the given direction, and pad with zeroes. More or less equivalent, if you don't have a bitwise shift operator (which is common in most, if not all languages), is multiplying by 2^n for shift left, and dividing by 2^n for shift right.
In the example, you can see that: 5 * 2^2 = 5 * 4 = 20.
A:
It's the same operator in php. $n <<= 1;
A:
$n <<= 1; is valid php
|
How can I write 'n <<= 1' (Python) in PHP?
|
I have the Python expression n <<= 1
How do you express this in PHP?
|
[
"That statement is short for\nn = n << 1;\n\nthe << operator is means a bitwise shift left, by n positions. Its counterpart is >>, which means shift right by n. To visualize, say you have the value 5, and you want to shift it left by 2 positions. In binary:\n0000 0101 -> 5\nshift left by 2:\n0001 0100 -> 20\n\nBasically, you shift all bits in the given direction, and pad with zeroes. More or less equivalent, if you don't have a bitwise shift operator (which is common in most, if not all languages), is multiplying by 2^n for shift left, and dividing by 2^n for shift right.\nIn the example, you can see that: 5 * 2^2 = 5 * 4 = 20.\n",
"It's the same operator in php. $n <<= 1;\n",
"$n <<= 1; is valid php\n"
] |
[
6,
5,
2
] |
[] |
[] |
[
"bitwise_operators",
"operators",
"php",
"python"
] |
stackoverflow_0001274493_bitwise_operators_operators_php_python.txt
|
Q:
Python: remove lots of items from a list
I am in the final stretch of a project I have been working on. Everything is running smoothly but I have a bottleneck that I am having trouble working around.
I have a list of tuples. The list ranges in length from say 40,000 - 1,000,000 records. Now I have a dictionary where each and every (value, key) is a tuple in the list.
So, I might have
myList = [(20000, 11), (16000, 4), (14000, 9)...]
myDict = {11:20000, 9:14000, ...}
I want to remove each (v, k) tuple from the list.
Currently I am doing:
for k, v in myDict.iteritems():
myList.remove((v, k))
Removing 838 tuples from the list containing 20,000 tuples takes anywhere from 3 - 4 seconds. I will most likely be removing more like 10,000 tuples from a list of 1,000,000 so I need this to be faster.
Is there a better way to do this?
I can provide code used to test, plus pickled data from the actual application if needed.
A:
You'll have to measure, but I can imagine this to be more performant:
myList = filter(lambda x: myDict.get(x[1], None) != x[0], myList)
because the lookup happens in the dict, which is more suited for this kind of thing. Note, though, that this will create a new list before removing the old one; so there's a memory tradeoff. If that's an issue, rethinking your container type as jkp suggest might be in order.
Edit: Be careful, though, if None is actually in your list -- you'd have to use a different "placeholder."
A:
To remove about 10,000 tuples from a list of about 1,000,000, if the values are hashable, the fastest approach should be:
totoss = set((v,k) for (k,v) in myDict.iteritems())
myList[:] = [x for x in myList if x not in totoss]
The preparation of the set is a small one-time cost, wich saves doing tuple unpacking and repacking, or tuple indexing, a lot of times. Assignign to myList[:] instead of assigning to myList is also semantically important (in case there are any other references to myList around, it's not enough to rebind just the name -- you really want to rebind the contents!-).
I don't have your test-data around to do the time measurement myself, alas!, but, let me know how it plays our on your test data!
If the values are not hashable (e.g. they're sub-lists, for example), fastest is probably:
sentinel = object()
myList[:] = [x for x in myList if myDict.get(x[0], sentinel) != x[1]]
or maybe (shouldn't make a big difference either way, but I suspect the previous one is better -- indexing is cheaper than unpacking and repacking):
sentinel = object()
myList[:] = [(a,b) for (a,b) in myList if myDict.get(a, sentinel) != b]
In these two variants the sentinel idiom is used to ward against values of None (which is not a problem for the preferred set-based approach -- if values are hashable!) as it's going to be way cheaper than if a not in myDict or myDict[a] != b (which requires two indexings into myDict).
A:
Every time you call myList.remove, Python has to scan over the entire list to search for that item and remove it. In the worst case scenario, every item you look for would be at the end of the list each time.
Have you tried doing the "inverse" operation of:
newMyList = [(v,k) for (v,k) in myList if not k in myDict]
But I'm really not sure how well that would scale, either, since you would be making a copy of the original list -- could potentially be a lot of memory usage there.
Probably the best alternative here is to wait for Alex Martelli to post some mind-blowingly intuitive, simple, and efficient approach.
A:
The problem looks to me to be the fact you are using a list as the container you are trying to remove from, and it is a totally unordered type. So to find each item in the list is a linear operation (O(n)), it has to iterate over the whole list until it finds a match.
If you could swap the list for some other container (set?) which uses a hash() of each item to order them, then each match could be performed much quicker.
The following code shows how you could do this using a combination of ideas offered by myself and Nick on this thread:
list_set = set(original_list)
dict_set = set(zip(original_dict.values(), original_dict.keys()))
difference_set = list(list_set - dict_set)
final_list = []
for item in original_list:
if item in difference_set:
final_list.append(item)
A:
[(i, j) for i, j in myList if myDict.get(j) != i]
A:
Try something like this:
myListSet = set(myList)
myDictSet = set(zip(myDict.values(), myDict.keys()))
myList = list(myListSet - myDictSet)
This will convert myList to a set, will swap the keys/values in myDict and put them into a set, and will then find the difference, turn it back into a list, and assign it back to myList. :)
A:
[i for i in myList if i not in list(zip(myDict.values(), myDict.keys()))]
A:
A list containing a million 2-tuples is not large on most machines running Python. However if you absolutely must do the removal in situ, here is a clean way of doing it properly:
def filter_by_dict(my_list, my_dict):
sentinel = object()
for i in xrange(len(my_list) - 1, -1, -1):
key = my_list[i][1]
if my_dict.get(key, sentinel) is not sentinel:
del my_list[i]
Update Actually each del costs O(n) shuffling the list pointers down using C's memmove(), so if there are d dels, it's O(n*d) not O(n**2). Note that (1) the OP suggests that d approx == 0.01 * n and (2) the O(n*d) effort is copying one pointer to somewhere else in memory ... so this method could in fact be somewhat faster than a quick glance would indicate. Benchmarks, anyone?
What are you going to do with the list after you have removed the items that are in the dict? Is it possible to piggy-back the dict-filtering onto the next step?
|
Python: remove lots of items from a list
|
I am in the final stretch of a project I have been working on. Everything is running smoothly but I have a bottleneck that I am having trouble working around.
I have a list of tuples. The list ranges in length from say 40,000 - 1,000,000 records. Now I have a dictionary where each and every (value, key) is a tuple in the list.
So, I might have
myList = [(20000, 11), (16000, 4), (14000, 9)...]
myDict = {11:20000, 9:14000, ...}
I want to remove each (v, k) tuple from the list.
Currently I am doing:
for k, v in myDict.iteritems():
myList.remove((v, k))
Removing 838 tuples from the list containing 20,000 tuples takes anywhere from 3 - 4 seconds. I will most likely be removing more like 10,000 tuples from a list of 1,000,000 so I need this to be faster.
Is there a better way to do this?
I can provide code used to test, plus pickled data from the actual application if needed.
|
[
"You'll have to measure, but I can imagine this to be more performant:\nmyList = filter(lambda x: myDict.get(x[1], None) != x[0], myList)\n\nbecause the lookup happens in the dict, which is more suited for this kind of thing. Note, though, that this will create a new list before removing the old one; so there's a memory tradeoff. If that's an issue, rethinking your container type as jkp suggest might be in order.\nEdit: Be careful, though, if None is actually in your list -- you'd have to use a different \"placeholder.\"\n",
"To remove about 10,000 tuples from a list of about 1,000,000, if the values are hashable, the fastest approach should be:\ntotoss = set((v,k) for (k,v) in myDict.iteritems())\nmyList[:] = [x for x in myList if x not in totoss]\n\nThe preparation of the set is a small one-time cost, wich saves doing tuple unpacking and repacking, or tuple indexing, a lot of times. Assignign to myList[:] instead of assigning to myList is also semantically important (in case there are any other references to myList around, it's not enough to rebind just the name -- you really want to rebind the contents!-).\nI don't have your test-data around to do the time measurement myself, alas!, but, let me know how it plays our on your test data!\nIf the values are not hashable (e.g. they're sub-lists, for example), fastest is probably:\nsentinel = object()\nmyList[:] = [x for x in myList if myDict.get(x[0], sentinel) != x[1]]\n\nor maybe (shouldn't make a big difference either way, but I suspect the previous one is better -- indexing is cheaper than unpacking and repacking):\nsentinel = object()\nmyList[:] = [(a,b) for (a,b) in myList if myDict.get(a, sentinel) != b]\n\nIn these two variants the sentinel idiom is used to ward against values of None (which is not a problem for the preferred set-based approach -- if values are hashable!) as it's going to be way cheaper than if a not in myDict or myDict[a] != b (which requires two indexings into myDict).\n",
"Every time you call myList.remove, Python has to scan over the entire list to search for that item and remove it. In the worst case scenario, every item you look for would be at the end of the list each time.\nHave you tried doing the \"inverse\" operation of:\nnewMyList = [(v,k) for (v,k) in myList if not k in myDict]\n\nBut I'm really not sure how well that would scale, either, since you would be making a copy of the original list -- could potentially be a lot of memory usage there.\nProbably the best alternative here is to wait for Alex Martelli to post some mind-blowingly intuitive, simple, and efficient approach.\n",
"The problem looks to me to be the fact you are using a list as the container you are trying to remove from, and it is a totally unordered type. So to find each item in the list is a linear operation (O(n)), it has to iterate over the whole list until it finds a match.\nIf you could swap the list for some other container (set?) which uses a hash() of each item to order them, then each match could be performed much quicker.\nThe following code shows how you could do this using a combination of ideas offered by myself and Nick on this thread:\nlist_set = set(original_list)\ndict_set = set(zip(original_dict.values(), original_dict.keys()))\ndifference_set = list(list_set - dict_set)\nfinal_list = []\nfor item in original_list:\n if item in difference_set:\n final_list.append(item)\n\n",
"[(i, j) for i, j in myList if myDict.get(j) != i]\n\n",
"Try something like this:\nmyListSet = set(myList)\nmyDictSet = set(zip(myDict.values(), myDict.keys()))\nmyList = list(myListSet - myDictSet)\n\nThis will convert myList to a set, will swap the keys/values in myDict and put them into a set, and will then find the difference, turn it back into a list, and assign it back to myList. :)\n",
"[i for i in myList if i not in list(zip(myDict.values(), myDict.keys()))]\n\n",
"A list containing a million 2-tuples is not large on most machines running Python. However if you absolutely must do the removal in situ, here is a clean way of doing it properly:\ndef filter_by_dict(my_list, my_dict):\n sentinel = object()\n for i in xrange(len(my_list) - 1, -1, -1):\n key = my_list[i][1]\n if my_dict.get(key, sentinel) is not sentinel:\n del my_list[i]\n\nUpdate Actually each del costs O(n) shuffling the list pointers down using C's memmove(), so if there are d dels, it's O(n*d) not O(n**2). Note that (1) the OP suggests that d approx == 0.01 * n and (2) the O(n*d) effort is copying one pointer to somewhere else in memory ... so this method could in fact be somewhat faster than a quick glance would indicate. Benchmarks, anyone?\nWhat are you going to do with the list after you have removed the items that are in the dict? Is it possible to piggy-back the dict-filtering onto the next step?\n"
] |
[
20,
9,
5,
2,
2,
2,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001267260_python.txt
|
Q:
Would Like Open Source RSS / News Reader Code or Widget Python or Javascript
I would like to be able to massage certain categories of news feeds to make their entries more consistent. For example, when a job seeker subscribes to two different job sites the feeds s/he gets will differ markedly. One would like to be able to perform lookups and other work in the news reader, process the incoming feed on the basis of any extra information found and then present the massaged job information to the user.
Have you seen any open source plug-ins, widgets or codes for news readers that invite modification?
Thanks for any suggestions.
A:
You might have a look at the Planet Venus software, which has a filter system that might be useful for what you want.
A:
I don't know if this is quite what you want, but you could look into Yahoo Pipes. You could also parse the feeds with PyRSS2Gen.
A:
I'd still be interested in any responses that people might have to my question. However, in case anyone sees the question and is also looking for something like this, let me mention the existence of rawdog.
A claim made at that link is that, "rawdog is an RSS Aggregator Without Delusions Of Grandeur."
|
Would Like Open Source RSS / News Reader Code or Widget Python or Javascript
|
I would like to be able to massage certain categories of news feeds to make their entries more consistent. For example, when a job seeker subscribes to two different job sites the feeds s/he gets will differ markedly. One would like to be able to perform lookups and other work in the news reader, process the incoming feed on the basis of any extra information found and then present the massaged job information to the user.
Have you seen any open source plug-ins, widgets or codes for news readers that invite modification?
Thanks for any suggestions.
|
[
"You might have a look at the Planet Venus software, which has a filter system that might be useful for what you want.\n",
"I don't know if this is quite what you want, but you could look into Yahoo Pipes. You could also parse the feeds with PyRSS2Gen.\n",
"I'd still be interested in any responses that people might have to my question. However, in case anyone sees the question and is also looking for something like this, let me mention the existence of rawdog.\nA claim made at that link is that, \"rawdog is an RSS Aggregator Without Delusions Of Grandeur.\"\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"javascript",
"open_source",
"python",
"rss"
] |
stackoverflow_0001080149_javascript_open_source_python_rss.txt
|
Q:
Python looping to read and parse all in a directory
class __init__:
path = "articles/"
files = os.listdir(path)
files.reverse()
def iterate(Files, Path):
def handleXml(content):
months = ['', 'January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
parse = re.compile('<(.*?)>(.*?)<(.*?)>').findall(content)
day = parse[1][1]
month = months[int(parse[2][1])]
dayN = parse[3][1]
year = parse[4][1]
hour = parse[5][1]
min = parse[6][1]
amPM = parse[7][1]
title = parse[9][1]
author = parse[10][1]
article = parse[11][1]
category = parse[12][1]
if len(Files) > 5:
del Files[5:]
for file in Files:
file = "%s%s" % (Path, file)
f = open(file, 'r')
handleXml(f.read())
f.close()
iterate(files, path)
It runs on start, and if I check the files array it contains all the file names.
But when I loop through them they just do not work, only displays the first one.
If I return file I only get the first two, and if I return parse even on duplicate files it is not identical.
None of this makes any sense.
I am trying to make a simple blog using Python, and because my server has a very old version of Python I cannot use modules like glob, everything needs to be as basic as possible.
The files array contains all the files in the directory, which is good enough for me. I do not need to go through other directories inside the articles directory.
But when I try to output parse, even on duplicate files I get different results.
Thanks,
Tom
A:
Could it be because of:
del Files[5:]
It deletes the last 5 entries from the original list as well. Instead of using del, you can try:
for file in Files[:5]:
#...
A:
As stated in the comments, the actual recursion is missing.
Even if it is there in some other place of the code, the recursion call is the typical place where the things are wrong, and for this reason I would suggest you to double check it.
However, why don't you use os.walk? It iterates through all the path, without the need of reinventing the (recursive) wheel. It has been introduced in 2.3, though, and I do not know how old your python is.
|
Python looping to read and parse all in a directory
|
class __init__:
path = "articles/"
files = os.listdir(path)
files.reverse()
def iterate(Files, Path):
def handleXml(content):
months = ['', 'January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
parse = re.compile('<(.*?)>(.*?)<(.*?)>').findall(content)
day = parse[1][1]
month = months[int(parse[2][1])]
dayN = parse[3][1]
year = parse[4][1]
hour = parse[5][1]
min = parse[6][1]
amPM = parse[7][1]
title = parse[9][1]
author = parse[10][1]
article = parse[11][1]
category = parse[12][1]
if len(Files) > 5:
del Files[5:]
for file in Files:
file = "%s%s" % (Path, file)
f = open(file, 'r')
handleXml(f.read())
f.close()
iterate(files, path)
It runs on start, and if I check the files array it contains all the file names.
But when I loop through them they just do not work, only displays the first one.
If I return file I only get the first two, and if I return parse even on duplicate files it is not identical.
None of this makes any sense.
I am trying to make a simple blog using Python, and because my server has a very old version of Python I cannot use modules like glob, everything needs to be as basic as possible.
The files array contains all the files in the directory, which is good enough for me. I do not need to go through other directories inside the articles directory.
But when I try to output parse, even on duplicate files I get different results.
Thanks,
Tom
|
[
"Could it be because of:\ndel Files[5:]\n\nIt deletes the last 5 entries from the original list as well. Instead of using del, you can try:\nfor file in Files[:5]:\n #...\n\n",
"As stated in the comments, the actual recursion is missing.\nEven if it is there in some other place of the code, the recursion call is the typical place where the things are wrong, and for this reason I would suggest you to double check it.\nHowever, why don't you use os.walk? It iterates through all the path, without the need of reinventing the (recursive) wheel. It has been introduced in 2.3, though, and I do not know how old your python is.\n"
] |
[
1,
0
] |
[] |
[] |
[
"blogs",
"file_io",
"python",
"xml"
] |
stackoverflow_0001272405_blogs_file_io_python_xml.txt
|
Q:
Eliminate part of a file in python
In the below file I have 3 occurrences of '.1'. I want to eliminate the last one and write the rest of file to a new file. Kindly suggest some way to do it in PYTHON and thank you all.
d1dlwa_ a.1.1.1 (A:) Protozoan/bacterial hemoglobin {Ciliate (Paramecium caudatum) [TaxId: 5885]}
slfeqlggqaavqavtaqfyaniqadatvatffngidmpnqtnktaaflcaalggpnawt
A:
If the file's not too horrendously huge, by far the simplest approach is:
f = open('oldfile', 'r')
data = f.read()
f.close()
data = data.replace('.1.1.1', '.1.1')
f = open('newfile', 'w')
f.write(data)
f.close()
If the file IS horrendously huge, you'll need to read it and write it by pieces. For example, if each line ISN'T too horrendously huge:
inf = open('oldfile', 'r')
ouf = open('newfile', 'w')
for line in inf:
line = line.replace('.1.1.1', '.1.1')
ouf.write(line)
ouf.close()
inf.close()
A:
Works with any size file:
open('newfile', 'w').writelines(line.replace('.1.1.1', '.1.1')
for line in open('oldfile'))
A:
You can have something like this :
line = line.split(" ")
line[0] = line[0][0:line[0].rindex(".")]
print " ".join(line)
Not the prettiest code, but from my console tests, it works.
|
Eliminate part of a file in python
|
In the below file I have 3 occurrences of '.1'. I want to eliminate the last one and write the rest of file to a new file. Kindly suggest some way to do it in PYTHON and thank you all.
d1dlwa_ a.1.1.1 (A:) Protozoan/bacterial hemoglobin {Ciliate (Paramecium caudatum) [TaxId: 5885]}
slfeqlggqaavqavtaqfyaniqadatvatffngidmpnqtnktaaflcaalggpnawt
|
[
"If the file's not too horrendously huge, by far the simplest approach is:\nf = open('oldfile', 'r')\ndata = f.read()\nf.close()\n\ndata = data.replace('.1.1.1', '.1.1')\n\nf = open('newfile', 'w')\nf.write(data)\nf.close()\n\nIf the file IS horrendously huge, you'll need to read it and write it by pieces. For example, if each line ISN'T too horrendously huge:\ninf = open('oldfile', 'r')\nouf = open('newfile', 'w')\nfor line in inf:\n line = line.replace('.1.1.1', '.1.1')\n ouf.write(line)\nouf.close()\ninf.close()\n\n",
"Works with any size file:\nopen('newfile', 'w').writelines(line.replace('.1.1.1', '.1.1') \n for line in open('oldfile'))\n\n",
"You can have something like this :\n\nline = line.split(\" \")\nline[0] = line[0][0:line[0].rindex(\".\")]\nprint \" \".join(line)\n\nNot the prettiest code, but from my console tests, it works.\n"
] |
[
7,
4,
0
] |
[] |
[] |
[
"file",
"python"
] |
stackoverflow_0001274941_file_python.txt
|
Q:
technology recommendation for LAN Dashboard
I'm about to start a fairly large project for a mid-sized business
with a lot of integration with other systems (POS, accounting,
website, inventory, purchasing, etc.) The purpose of the system is to
try to reduce current data siloing and give employees role-based
access to the specific data entry and reports they need, as well as to
replace some manual and redundant business processes. The system needs
to be cross-platform (Windows/Linux), open source and is primarily for
LAN use.
My experience is mostly PHP/web/app development, but I have developed
a few LAN apps using Java/Servoy (like Filemaker). I found Servoy to be very rapid and to easily make use of different data providers (DB products), but it's not open source, and any non-standard development is in Java/Swing (which is verbose and takes a lot of time).
I'm interested in learning Python/Django or Ruby/Rails - but I'm not sure if these are the best solutions for building a mission critical data entry/reporting LAN app. Is a web client/server really a good choice for this type of application?
Thanks in advance for any tips/ advice.
A:
If you're comfortable with a LAMP-style stack with PHP, then there's no reason you can't use either Django or Rails. Both are mature, well documented platforms with active, helpful communities.
Based on what you've described, there's no reason that you can't use either technology.
A:
Both of these technologies are certainly mature enough to run Mission Critical applications, you just need to look at the number of big sites already on the internet that are running these technologies, so from that point of view you shouldn't be concerned.
You only need to worry about your learning curve, if you feel confident in learning them well enough to write quality code for your client then go for it. Have a look at each of them, decide which technology you would prefer and get coding.
Hope your application goes well :)
A:
You could also take a look at ExtJS for the frontend. I've made an ExtJS frontend for a company Dashboard, and using a Django backend managing the URL dispatching, the ORM and the data retrieval (communicating with the frontend with JSON webservices) and users love it, because it's almost as interactive as a local application (use something modern like Firefox 3.5, Chrome, Safari 4 or Explorer 8 for better javascript performance) but easy to manage for programmers and administrators (no installations, no local backups, no upgrade problems, etc.)
A:
Thank you everyone for your helpful answers! I think they address most of the issues raised by the question. But I think the key to the "final answer" (IMO) rests on the "multiple database" aspect. Railsninja suggested a piece of software he used for a project to extend rails functionality in this manner - thank you for the link! That could have been a possible solution - but it sounds like it was used for one project, and I worry about the testing since it is not a part of the mainstream Rails build.
Then I found out that multi-db support is just around the corner for a Django core update (eta late August 2009). So I think I am going to dive in to the project with Django.
|
technology recommendation for LAN Dashboard
|
I'm about to start a fairly large project for a mid-sized business
with a lot of integration with other systems (POS, accounting,
website, inventory, purchasing, etc.) The purpose of the system is to
try to reduce current data siloing and give employees role-based
access to the specific data entry and reports they need, as well as to
replace some manual and redundant business processes. The system needs
to be cross-platform (Windows/Linux), open source and is primarily for
LAN use.
My experience is mostly PHP/web/app development, but I have developed
a few LAN apps using Java/Servoy (like Filemaker). I found Servoy to be very rapid and to easily make use of different data providers (DB products), but it's not open source, and any non-standard development is in Java/Swing (which is verbose and takes a lot of time).
I'm interested in learning Python/Django or Ruby/Rails - but I'm not sure if these are the best solutions for building a mission critical data entry/reporting LAN app. Is a web client/server really a good choice for this type of application?
Thanks in advance for any tips/ advice.
|
[
"If you're comfortable with a LAMP-style stack with PHP, then there's no reason you can't use either Django or Rails. Both are mature, well documented platforms with active, helpful communities. \nBased on what you've described, there's no reason that you can't use either technology. \n",
"Both of these technologies are certainly mature enough to run Mission Critical applications, you just need to look at the number of big sites already on the internet that are running these technologies, so from that point of view you shouldn't be concerned.\nYou only need to worry about your learning curve, if you feel confident in learning them well enough to write quality code for your client then go for it. Have a look at each of them, decide which technology you would prefer and get coding.\nHope your application goes well :)\n",
"You could also take a look at ExtJS for the frontend. I've made an ExtJS frontend for a company Dashboard, and using a Django backend managing the URL dispatching, the ORM and the data retrieval (communicating with the frontend with JSON webservices) and users love it, because it's almost as interactive as a local application (use something modern like Firefox 3.5, Chrome, Safari 4 or Explorer 8 for better javascript performance) but easy to manage for programmers and administrators (no installations, no local backups, no upgrade problems, etc.)\n",
"Thank you everyone for your helpful answers! I think they address most of the issues raised by the question. But I think the key to the \"final answer\" (IMO) rests on the \"multiple database\" aspect. Railsninja suggested a piece of software he used for a project to extend rails functionality in this manner - thank you for the link! That could have been a possible solution - but it sounds like it was used for one project, and I worry about the testing since it is not a part of the mainstream Rails build. \nThen I found out that multi-db support is just around the corner for a Django core update (eta late August 2009). So I think I am going to dive in to the project with Django.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"dashboard",
"django",
"filemaker",
"python",
"ruby_on_rails"
] |
stackoverflow_0001263756_dashboard_django_filemaker_python_ruby_on_rails.txt
|
Q:
JQuery "get" failure (using Google App Engine on the back-end)
What I am trying to do is pretty simple: yet something has clearly gone awry.
On the Front-End:
function eval() {
var x = 'Unchanged X'
$.get("/", { entry: document.getElementById('entry').value },
function(data){
x = data;
}
);
$("#result").html(x);
}
On the Back-End:
class MainHandler(webapp.RequestHandler):
def get(self):
path = os.path.join(os.path.dirname(__file__), 'index.html')
if self.request.get('entry') != '':
#self.response.out.write({'evalresult': self.request.get('entry')})
self.response.out.write(request.get('entry'))
else:
self.response.out.write(template.render(path, {'result': 'Welcome!!'}))
def main():
application = webapp.WSGIApplication([('/', MainHandler)],
debug=True)
wsgiref.handlers.CGIHandler().run(application)
Yet, apparently the function is never being called and #result gets set to 'Unchanged X'. What am I missing?
NOTE: The callback is NOT being called. I have verified this by placing an alert("Test") within the callback function. Any ideas anyone?
A:
$("#result").html(x); goes in the get() callback
A:
If the callback is not running you can try changing the $.get into a $.ajax() call, and adding an error callback, to see if the server is returning an error.
Or better yet, check in the "net" panel in firebug to see what the server response is, which might help you track down what the issue is on the back end.
Also once the issue is fixed, you might want to replace the $.get with a simple $().load which would take the data and place it into the div automatically:
$('#result').load('/', { entry: document.getElementById('entry').value });
EDIT: I suppose the following would be a more jQueryish way of writing it:
$('#result').load('/', { entry: $('#entry').val() });
A:
First we have the silly mistake:
<font size="3" face="Trebuchet MS">Speak Your Mind: </font><input type="text"
size="60" id="entry"/> <img valign="bottom" src='/assets/cognifyup.png'
onMouseOver="over()" onMouseOut="out()" onMouseDown="out(); evaluate();"
onMouseUp="over()"><br><br>
Semicolons are required after the calls to over() and out() (roger that? --- sorry couldn't resist)
Secondly (the much more subtle problem):
If we ever need intend to translate the get() into a getJSON() call, (which you might have noted was my original intent from the commented python code that returns a dict), then we need to wrap a str() call around self.request.get('entry'). Hence,
self.response.out.write({'evalresult': self.request.get('entry')})
becomes:
self.response.out.write({'evalresult': str(self.request.get('entry'))})
As strings from an HTML field translate to unicode text in Python, at the back-end, we apparently need to convert it to a Python string (as getJSON() apparently doesn't like Python's representation of a unicode string -- any ideas why this this is the case anyone?).
At any rate, the original problem has been solved. In conclusion: any JSON object with a Python unicode string will not be accepted as a valid JSON object and will fail silently -- a nasty gotcha that I can see biting anyone using JQuery with Python on the server-side.
|
JQuery "get" failure (using Google App Engine on the back-end)
|
What I am trying to do is pretty simple: yet something has clearly gone awry.
On the Front-End:
function eval() {
var x = 'Unchanged X'
$.get("/", { entry: document.getElementById('entry').value },
function(data){
x = data;
}
);
$("#result").html(x);
}
On the Back-End:
class MainHandler(webapp.RequestHandler):
def get(self):
path = os.path.join(os.path.dirname(__file__), 'index.html')
if self.request.get('entry') != '':
#self.response.out.write({'evalresult': self.request.get('entry')})
self.response.out.write(request.get('entry'))
else:
self.response.out.write(template.render(path, {'result': 'Welcome!!'}))
def main():
application = webapp.WSGIApplication([('/', MainHandler)],
debug=True)
wsgiref.handlers.CGIHandler().run(application)
Yet, apparently the function is never being called and #result gets set to 'Unchanged X'. What am I missing?
NOTE: The callback is NOT being called. I have verified this by placing an alert("Test") within the callback function. Any ideas anyone?
|
[
"$(\"#result\").html(x); goes in the get() callback\n",
"If the callback is not running you can try changing the $.get into a $.ajax() call, and adding an error callback, to see if the server is returning an error.\nOr better yet, check in the \"net\" panel in firebug to see what the server response is, which might help you track down what the issue is on the back end.\nAlso once the issue is fixed, you might want to replace the $.get with a simple $().load which would take the data and place it into the div automatically:\n$('#result').load('/', { entry: document.getElementById('entry').value });\n\nEDIT: I suppose the following would be a more jQueryish way of writing it:\n$('#result').load('/', { entry: $('#entry').val() });\n\n",
"First we have the silly mistake:\n<font size=\"3\" face=\"Trebuchet MS\">Speak Your Mind: </font><input type=\"text\" \nsize=\"60\" id=\"entry\"/> <img valign=\"bottom\" src='/assets/cognifyup.png' \nonMouseOver=\"over()\" onMouseOut=\"out()\" onMouseDown=\"out(); evaluate();\" \nonMouseUp=\"over()\"><br><br>\n\nSemicolons are required after the calls to over() and out() (roger that? --- sorry couldn't resist)\nSecondly (the much more subtle problem):\nIf we ever need intend to translate the get() into a getJSON() call, (which you might have noted was my original intent from the commented python code that returns a dict), then we need to wrap a str() call around self.request.get('entry'). Hence, \nself.response.out.write({'evalresult': self.request.get('entry')}) \nbecomes: \nself.response.out.write({'evalresult': str(self.request.get('entry'))}) \nAs strings from an HTML field translate to unicode text in Python, at the back-end, we apparently need to convert it to a Python string (as getJSON() apparently doesn't like Python's representation of a unicode string -- any ideas why this this is the case anyone?).\nAt any rate, the original problem has been solved. In conclusion: any JSON object with a Python unicode string will not be accepted as a valid JSON object and will fail silently -- a nasty gotcha that I can see biting anyone using JQuery with Python on the server-side.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"google_app_engine",
"javascript",
"jquery",
"python"
] |
stackoverflow_0001275708_google_app_engine_javascript_jquery_python.txt
|
Q:
Take screenshots **quickly** from python
A PIL.Image.grab() takes about 0.5 seconds. That's just to get data from the screen to my app, without any processing on my part. FRAPS, on the other hand, can take screenshots up to 30 FPS. Is there any way for me to do the same from a Python program? If not, how about from a C program? (I could interface it w/ the Python program, potentially...)
A:
If you want fast screenshots, you must use a lower level API, like DirectX or GTK. There are Python wrappers for those, like DirectPython and PyGTK. Some samples I've found follow:
PyGTK sample
Windows and DirectX samples
|
Take screenshots **quickly** from python
|
A PIL.Image.grab() takes about 0.5 seconds. That's just to get data from the screen to my app, without any processing on my part. FRAPS, on the other hand, can take screenshots up to 30 FPS. Is there any way for me to do the same from a Python program? If not, how about from a C program? (I could interface it w/ the Python program, potentially...)
|
[
"If you want fast screenshots, you must use a lower level API, like DirectX or GTK. There are Python wrappers for those, like DirectPython and PyGTK. Some samples I've found follow:\n\nPyGTK sample \nWindows and DirectX samples\n\n"
] |
[
7
] |
[] |
[] |
[
"image",
"optimization",
"performance",
"python",
"screen_scraping"
] |
stackoverflow_0001276616_image_optimization_performance_python_screen_scraping.txt
|
Q:
Batch output redirection when using start command for GUI app
This is the scenario:
We have a Python script that starts a Windows batch file and redirects its output to a file. Afterwards it reads the file and then tries to delete it:
os.system(C:\batch.bat >C:\temp.txt 2>&1)
os.remove(C:\temp.txt)
In the batch.bat we start a Windows GUI programm like this:
start c:\the_programm.exe
Thats all in the batch fíle.
Now the os.remove() fails with "Permission denied" because the temp.txt is still locked by the system. It seems this is caused by the still runing the_programm.exe (whos output also seems to be redirected to the temp.txt).
Any idea how to start the_programm.exe without having the temp.txt locked while it is still running? The Python part is hardly changeable as this is a tool (BusyB).
In fact I do not need the output of the_programm.exe, so the essence of the question is: How do I decouple the_programm.exe from locking temp.txt for its output?
Or: How do I use START or another Windows command to start a program without inheriting the batch output redirection?
A:
This is a bit hacky, but you could try it. It uses the AT command to run the_programm.exe up to a minute in the future (which it computes using the %TIME% environment variable and SET arithmetic).
batch.bat:
@echo off
setlocal
:: store the current time so it does not change while parsing
set t=%time%
:: parse hour, minute, second
set h=%t:~0,2%
set m=%t:~3,2%
set s=%t:~6,2%
:: reduce strings to simple integers
if "%h:~0,1%"==" " set h=%h:~1%
if "%m:~0,1%"=="0" set m=%m:~1%
if "%s:~0,1%"=="0" set s=%s:~1%
:: choose number of seconds in the future; granularity for AT is one
:: minute, plus we need a few extra seconds for this script to run
set x=70
:: calculate hour and minute to run the program
set /a x=s + x
set /a s="x %% 60"
set /a x=m + x / 60
set /a m="x %% 60"
set /a h=h + x / 60
set /a h="h %% 24"
:: schedule the program to run
at %h%:%m% c:\the_programm.exe
You can look at the AT /? and SET /? to see what each of these is doing. I left off the /interactive parameter of AT since you commented that "no user interaction is allowed".
Caveats:
It appears that %TIME% is always 24-hour time, regardless of locale settings in the control panel, but I don't have any proof of this.
If your system is loaded down and batch.bat takes more than 10 seconds to run, the AT command will be scheduled to run 1 day later. You can recover this manually, using AT {job} /delete, and increase the x=70 to something more acceptable.
The START command, unfortunately, even when given /i to ignore the current environment, seems to pass along the open file descriptors of the parent cmd.exe process. These file descriptors appear to be handed off to subprocesses, even if the subprocesses are redirected to NUL, and are kept open even if intermediate shell processes terminate. You can see this in Process Explorer if you have a batch file which STARTs another batch file which STARTs another batch file (etc.) which STARTs a GUI Windows app. Once the intermediate batch files have terminated, the GUI app will own the file handles, even if it (and the intermediate batch files) were all redirected to NUL.
A:
I don't think Windows will let you delete an open file. Sounds like you're wanting to throw away the program's output; would redirecting to 'nul' instead do what you need?
A:
As I understand it, this is the issue, and what he wants to do:
Make no changes to the python code.
The Python code is written assuming that "temp.txt" is no longer being used when this function returns:
os.system(C:\batch.bat >C:\temp.txt 2>&1)
This is in fact not the case because "batch.bat" spawns an interactive GUI program using the "start" command.
How about changing your "batch.bat" file to contain:
start c:\the_programm.exe
pause
This will keep the "batch.bat" file running until you hit a key on that window. Once you hit a key, the "os.system" python command will return, and then python will call "os.remove".
A:
Are you closing the file after you're done reading it? The following works at my end:
import os
os.system('runbat.bat > runbat.log 2>&1')
f = open('runbat.log')
print f.read()
f.close()
os.remove('runbat.log')
but fails if I remove the f.close() line.
A:
Why capture to a file if you're just deleting it immediately?
How about this:
os.system(C:\batch.bat >nul 2>&1)
EDIT: Oops, I missed your comment about reading the file, I only noticed the code.
A:
Finally I could find a proper solution:
I am not using a batch file anymore for starting the_programm.exe, but a Python script:
from subprocess import Popen
if __name__ == '__main__':
Popen('C:/the_programm.exe', close_fds=True)
The close_fds parameter decouples the file handles from the .exe process! That's it!
|
Batch output redirection when using start command for GUI app
|
This is the scenario:
We have a Python script that starts a Windows batch file and redirects its output to a file. Afterwards it reads the file and then tries to delete it:
os.system(C:\batch.bat >C:\temp.txt 2>&1)
os.remove(C:\temp.txt)
In the batch.bat we start a Windows GUI programm like this:
start c:\the_programm.exe
Thats all in the batch fíle.
Now the os.remove() fails with "Permission denied" because the temp.txt is still locked by the system. It seems this is caused by the still runing the_programm.exe (whos output also seems to be redirected to the temp.txt).
Any idea how to start the_programm.exe without having the temp.txt locked while it is still running? The Python part is hardly changeable as this is a tool (BusyB).
In fact I do not need the output of the_programm.exe, so the essence of the question is: How do I decouple the_programm.exe from locking temp.txt for its output?
Or: How do I use START or another Windows command to start a program without inheriting the batch output redirection?
|
[
"This is a bit hacky, but you could try it. It uses the AT command to run the_programm.exe up to a minute in the future (which it computes using the %TIME% environment variable and SET arithmetic).\nbatch.bat:\n@echo off\nsetlocal\n:: store the current time so it does not change while parsing\nset t=%time%\n:: parse hour, minute, second\nset h=%t:~0,2%\nset m=%t:~3,2%\nset s=%t:~6,2%\n:: reduce strings to simple integers\nif \"%h:~0,1%\"==\" \" set h=%h:~1%\nif \"%m:~0,1%\"==\"0\" set m=%m:~1%\nif \"%s:~0,1%\"==\"0\" set s=%s:~1%\n:: choose number of seconds in the future; granularity for AT is one\n:: minute, plus we need a few extra seconds for this script to run\nset x=70\n:: calculate hour and minute to run the program\nset /a x=s + x\nset /a s=\"x %% 60\"\nset /a x=m + x / 60\nset /a m=\"x %% 60\"\nset /a h=h + x / 60\nset /a h=\"h %% 24\"\n:: schedule the program to run\nat %h%:%m% c:\\the_programm.exe\n\nYou can look at the AT /? and SET /? to see what each of these is doing. I left off the /interactive parameter of AT since you commented that \"no user interaction is allowed\".\nCaveats:\n\nIt appears that %TIME% is always 24-hour time, regardless of locale settings in the control panel, but I don't have any proof of this.\nIf your system is loaded down and batch.bat takes more than 10 seconds to run, the AT command will be scheduled to run 1 day later. You can recover this manually, using AT {job} /delete, and increase the x=70 to something more acceptable.\n\nThe START command, unfortunately, even when given /i to ignore the current environment, seems to pass along the open file descriptors of the parent cmd.exe process. These file descriptors appear to be handed off to subprocesses, even if the subprocesses are redirected to NUL, and are kept open even if intermediate shell processes terminate. You can see this in Process Explorer if you have a batch file which STARTs another batch file which STARTs another batch file (etc.) which STARTs a GUI Windows app. Once the intermediate batch files have terminated, the GUI app will own the file handles, even if it (and the intermediate batch files) were all redirected to NUL.\n",
"I don't think Windows will let you delete an open file. Sounds like you're wanting to throw away the program's output; would redirecting to 'nul' instead do what you need?\n",
"As I understand it, this is the issue, and what he wants to do:\n\nMake no changes to the python code.\nThe Python code is written assuming that \"temp.txt\" is no longer being used when this function returns:\nos.system(C:\\batch.bat >C:\\temp.txt 2>&1)\nThis is in fact not the case because \"batch.bat\" spawns an interactive GUI program using the \"start\" command.\n\nHow about changing your \"batch.bat\" file to contain:\nstart c:\\the_programm.exe\npause\n\nThis will keep the \"batch.bat\" file running until you hit a key on that window. Once you hit a key, the \"os.system\" python command will return, and then python will call \"os.remove\".\n",
"Are you closing the file after you're done reading it? The following works at my end:\nimport os\n\nos.system('runbat.bat > runbat.log 2>&1')\nf = open('runbat.log')\nprint f.read()\nf.close()\nos.remove('runbat.log')\n\nbut fails if I remove the f.close() line.\n",
"Why capture to a file if you're just deleting it immediately?\nHow about this:\nos.system(C:\\batch.bat >nul 2>&1)\n\nEDIT: Oops, I missed your comment about reading the file, I only noticed the code.\n",
"Finally I could find a proper solution:\nI am not using a batch file anymore for starting the_programm.exe, but a Python script:\nfrom subprocess import Popen\n\n if __name__ == '__main__':\n Popen('C:/the_programm.exe', close_fds=True)\n\nThe close_fds parameter decouples the file handles from the .exe process! That's it!\n"
] |
[
2,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"batch_file",
"output_redirect",
"python",
"windows"
] |
stackoverflow_0001272309_batch_file_output_redirect_python_windows.txt
|
Q:
python subprocess module: looping over stdout of child process
I have some commands which I am running using the subprocess module. I then want to loop over the lines of the output. The documentation says do not do data_stream.stdout.read which I am not but I may be doing something which calls that. I am looping over the output like this:
for line in data_stream.stdout:
#do stuff here
.
.
.
Can this cause deadlocks like reading from data_stream.stdout or are the Popen modules set up for this kind of looping such that it uses the communicate code but handles all the callings of it for you?
A:
You have to worry about deadlocks if you're communicating with your subprocess, i.e. if you're writing to stdin as well as reading from stdout. Because these pipes may be cached, doing this kind of two-way communication is very much a no-no:
data_stream = Popen(mycmd, stdin=PIPE, stdout=PIPE)
data_stream.stdin.write("do something\n")
for line in data_stream:
... # BAD!
However, if you've not set up stdin (or stderr) when constructing data_stream, you should be fine.
data_stream = Popen(mycmd, stdout=PIPE)
for line in data_stream.stdout:
... # Fine
If you need two-way communication, use communicate.
A:
The two answer have caught the gist of the issue pretty well: don't mix writing something to the subprocess, reading something from it, writing again, etc -- the pipe's buffering means you're at risk of a deadlock. If you can, write everything you need to write to the subprocess FIRST, close that pipe, and only THEN read everything the subprocess has to say; communicate is nice for the purpose, IF the amount of data is not too large to fit in memory (if it is, you can still achieve the same effect "manually").
If you need finer-grain interaction, look instead at pexpect or, if you're on Windows, wexpect.
A:
SilentGhost's/chrispy's answers are OK if you have a small to moderate amount of output from your subprocess. Sometimes, though, there may be a lot of output - too much to comfortably buffer in memory. In such a case, the thing to do is start() the process, and spawn a couple of threads - one to read child.stdout and one to read child.stderr where child is the subprocess. You then need to wait() for the subprocess to terminate.
This is actually how communicate() works; the advantage of using your own threads is that you can process the output from the subprocess as it is generated. For example, in my project python-gnupg I use this technique to read status output from the GnuPG executable as it is generated, rather than waiting for all of it by calling communicate(). You are welcome to inspect the source of this project - the relevant stuff is in the module gnupg.py.
A:
data_stream.stdout is a standard output handle. you shouldn't be looping over it. communicate returns tuple of (stdoutdata, stderr). this stdoutdata you should be using to do your stuff.
|
python subprocess module: looping over stdout of child process
|
I have some commands which I am running using the subprocess module. I then want to loop over the lines of the output. The documentation says do not do data_stream.stdout.read which I am not but I may be doing something which calls that. I am looping over the output like this:
for line in data_stream.stdout:
#do stuff here
.
.
.
Can this cause deadlocks like reading from data_stream.stdout or are the Popen modules set up for this kind of looping such that it uses the communicate code but handles all the callings of it for you?
|
[
"You have to worry about deadlocks if you're communicating with your subprocess, i.e. if you're writing to stdin as well as reading from stdout. Because these pipes may be cached, doing this kind of two-way communication is very much a no-no:\ndata_stream = Popen(mycmd, stdin=PIPE, stdout=PIPE)\ndata_stream.stdin.write(\"do something\\n\")\nfor line in data_stream:\n ... # BAD!\n\nHowever, if you've not set up stdin (or stderr) when constructing data_stream, you should be fine.\ndata_stream = Popen(mycmd, stdout=PIPE)\nfor line in data_stream.stdout:\n ... # Fine\n\nIf you need two-way communication, use communicate.\n",
"The two answer have caught the gist of the issue pretty well: don't mix writing something to the subprocess, reading something from it, writing again, etc -- the pipe's buffering means you're at risk of a deadlock. If you can, write everything you need to write to the subprocess FIRST, close that pipe, and only THEN read everything the subprocess has to say; communicate is nice for the purpose, IF the amount of data is not too large to fit in memory (if it is, you can still achieve the same effect \"manually\").\nIf you need finer-grain interaction, look instead at pexpect or, if you're on Windows, wexpect.\n",
"SilentGhost's/chrispy's answers are OK if you have a small to moderate amount of output from your subprocess. Sometimes, though, there may be a lot of output - too much to comfortably buffer in memory. In such a case, the thing to do is start() the process, and spawn a couple of threads - one to read child.stdout and one to read child.stderr where child is the subprocess. You then need to wait() for the subprocess to terminate.\nThis is actually how communicate() works; the advantage of using your own threads is that you can process the output from the subprocess as it is generated. For example, in my project python-gnupg I use this technique to read status output from the GnuPG executable as it is generated, rather than waiting for all of it by calling communicate(). You are welcome to inspect the source of this project - the relevant stuff is in the module gnupg.py.\n",
"data_stream.stdout is a standard output handle. you shouldn't be looping over it. communicate returns tuple of (stdoutdata, stderr). this stdoutdata you should be using to do your stuff.\n"
] |
[
9,
6,
4,
0
] |
[] |
[] |
[
"python",
"subprocess"
] |
stackoverflow_0001277866_python_subprocess.txt
|
Q:
How do I detect missing fields in a CSV file in a Pythonic way?
I'm trying to parse a CSV file using Python's csv module (specifically, the DictReader class). Is there a Pythonic way to detect empty or missing fields and throw an error?
Here's a sample file using the following headers: NAME, LABEL, VALUE
foo,bar,baz
yes,no
x,y,z
When parsing, I'd like the second line to throw an error since it's missing the VALUE field.
Here's a code snippet which shows how I'm approaching this (disregard the hard-coded strings...they're only present for brevity):
import csv
HEADERS = ["name", "label", "value" ]
fileH = open('configFile')
reader = csv.DictReader(fileH, HEADERS)
for row in reader:
if row["name"] is None or row["name"] == "":
# raise Error
if row["label"] is None or row["label"] == "":
# raise Error
...
fileH.close()
Is there a cleaner way of checking for fields in the CSV file w/out having a bunch of if statements? If I need to add more fields, I'll also need more conditionals, which I would like to avoid if possible.
A:
if any(row[key] in (None, "") for key in row):
# raise error
Edit: Even better:
if any(val in (None, "") for val in row.itervalues()):
# raise error
A:
Since None and empty strings both evaluate to False, you should consider this:
for row in reader:
for header in HEADERS:
if not row[header]:
# raise error
Note that, unlike some other answers, you will still have the option of raising an informative, header-specific error.
A:
Something like this?
...
for row in reader:
for column, value in row.items():
if value is None or value == "":
# raise Error, using value of column to say which field is missing
You may be able to use 'if not value:' as your test instead of the more explicit test you gave.
A:
This code will provide, for each row, a list of field names which are not present (or are empty) for that row. You could then provide a more detailed exception, such as "Missing fields: foo, baz".
def missing(row):
return [h for h in HEADERS if not row.get(h)]
for row in reader:
m = missing(row)
if missing:
# raise exception with list of missing field names
A:
If you use matplotlib.mlab.csv2rec, it already saves the content of the file into an array and raise an error if one of the values is missing.
>>> from matplotlib.mlab import csv2rec
>>> content_array = csv2rec('file.txt')
IndexError: list index out of range
The problem is that there is not a simple way to customize this behaviour, or to supply a default value in case of missing rows. Moreover, the error message is not very explainatory (could be useful to post a bug report here).
p.s. since csv2rec saves the content of the file into a numpy record, it will be easier to get the values equal to None.
|
How do I detect missing fields in a CSV file in a Pythonic way?
|
I'm trying to parse a CSV file using Python's csv module (specifically, the DictReader class). Is there a Pythonic way to detect empty or missing fields and throw an error?
Here's a sample file using the following headers: NAME, LABEL, VALUE
foo,bar,baz
yes,no
x,y,z
When parsing, I'd like the second line to throw an error since it's missing the VALUE field.
Here's a code snippet which shows how I'm approaching this (disregard the hard-coded strings...they're only present for brevity):
import csv
HEADERS = ["name", "label", "value" ]
fileH = open('configFile')
reader = csv.DictReader(fileH, HEADERS)
for row in reader:
if row["name"] is None or row["name"] == "":
# raise Error
if row["label"] is None or row["label"] == "":
# raise Error
...
fileH.close()
Is there a cleaner way of checking for fields in the CSV file w/out having a bunch of if statements? If I need to add more fields, I'll also need more conditionals, which I would like to avoid if possible.
|
[
"if any(row[key] in (None, \"\") for key in row):\n # raise error\n\nEdit: Even better:\nif any(val in (None, \"\") for val in row.itervalues()):\n # raise error\n\n",
"Since None and empty strings both evaluate to False, you should consider this:\nfor row in reader:\n for header in HEADERS:\n if not row[header]:\n # raise error\n\nNote that, unlike some other answers, you will still have the option of raising an informative, header-specific error.\n",
"Something like this?\n...\nfor row in reader:\n for column, value in row.items():\n if value is None or value == \"\":\n # raise Error, using value of column to say which field is missing\n\nYou may be able to use 'if not value:' as your test instead of the more explicit test you gave.\n",
"This code will provide, for each row, a list of field names which are not present (or are empty) for that row. You could then provide a more detailed exception, such as \"Missing fields: foo, baz\".\ndef missing(row):\n return [h for h in HEADERS if not row.get(h)]\n\nfor row in reader:\n m = missing(row)\n if missing:\n # raise exception with list of missing field names\n\n",
"If you use matplotlib.mlab.csv2rec, it already saves the content of the file into an array and raise an error if one of the values is missing.\n>>> from matplotlib.mlab import csv2rec\n>>> content_array = csv2rec('file.txt')\nIndexError: list index out of range\n\nThe problem is that there is not a simple way to customize this behaviour, or to supply a default value in case of missing rows. Moreover, the error message is not very explainatory (could be useful to post a bug report here).\np.s. since csv2rec saves the content of the file into a numpy record, it will be easier to get the values equal to None.\n"
] |
[
21,
2,
1,
1,
0
] |
[] |
[] |
[
"csv",
"error_handling",
"python"
] |
stackoverflow_0001278749_csv_error_handling_python.txt
|
Q:
How to eliminate last digit from each of the top lines
Sequence 1.1.1 ATGCGCGCGATAAGGCGCTA
ATATTATAGCGCGCGCGCGGATATATATATATATATATATT
Sequence 1.2.2 ATATGCGCGCGCGCGCGGCG
ACCCCGCGCGCGCGCGGCGCGATATATATATATATATATATT
Sequence 2.1.1 ATTCGCGCGAGTATAGCGGCG
NOW,I would like to remove the last digit from each of the line that starts with '>'. For example, in this first line, i would like to remove '.1' (rightmost) and in second instance i would like to remove '.2' and then write the rest of the file to a new file. Thanks,
A:
import fileinput
import re
for line in fileinput.input(inplace=True, backup='.bak'):
line = line.rstrip()
if line.startswith('>'):
line = re.sub(r'\.\d$', '', line)
print line
many details can be changed depending on details of the processing you want, which you have not clearly communicated, but this is the general idea.
A:
if line.startswith('>Sequence'):
line = line[:-2] # trim 2 characters from the end of the string
or if there could be more than one digit after the period:
if line.startswith('>Sequence'):
dot_pos = line.rfind('.') # find position of rightmost period
line = line[:dot_pos] # truncate upto but not including the dot
Edit for if the sequence occurs on the same line as >Sequence
If we know that there will always be only 1 digit to remove we can cut out the period and the digit with:
line = line[:13] + line[15:]
This is using a feature of Python called slices. The indexes are zero-based and exclusive for the end of the range so line[0:13] will give us the first 13 characters of line. Except that if we want to start at the beginning the 0 is optional so line[:13] does the same thing. Similarly line[15:] gives us the substring starting at character 15 to the end of the string.
A:
import re
trimmedtext = re.sub(r'(\d+\.\d+)\.\d', '$1', text)
Should do it. Somewhat simpler than searching for start characters (and it won't effect your DNA chains)
A:
map "".join(line.split('.')[:-1]) to each line of the file.
A:
Here's a short script. Run it like: script [filename to clean]. Lots of error handling omitted.
It operates using generators, so it should work fine on huge files as well.
import sys
import os
def clean_line(line):
if line.startswith(">"):
return line.rstrip()[:-2]
else:
return line.rstrip()
def clean(input):
for line in input:
yield clean_line(line)
if __name__ == "__main__":
filename = sys.argv[1]
print "Cleaning %s; output to %s.." % (filename, filename + ".clean")
input = None
output = None
try:
input = open(filename, "r")
output = open(filename + ".clean", "w")
for line in clean(input):
output.write(line + os.linesep)
print ": " + line
except:
input.close()
if output != None:
output.close()
A:
import re
input_file = open('in')
output_file = open('out', 'w')
for line in input_file:
line = re.sub(r'(\d+[.]\d+)[.]\d+', r'\1', line)
output_file.write(line)
|
How to eliminate last digit from each of the top lines
|
Sequence 1.1.1 ATGCGCGCGATAAGGCGCTA
ATATTATAGCGCGCGCGCGGATATATATATATATATATATT
Sequence 1.2.2 ATATGCGCGCGCGCGCGGCG
ACCCCGCGCGCGCGCGGCGCGATATATATATATATATATATT
Sequence 2.1.1 ATTCGCGCGAGTATAGCGGCG
NOW,I would like to remove the last digit from each of the line that starts with '>'. For example, in this first line, i would like to remove '.1' (rightmost) and in second instance i would like to remove '.2' and then write the rest of the file to a new file. Thanks,
|
[
"import fileinput\nimport re\n\nfor line in fileinput.input(inplace=True, backup='.bak'):\n line = line.rstrip()\n if line.startswith('>'):\n line = re.sub(r'\\.\\d$', '', line)\n print line\n\nmany details can be changed depending on details of the processing you want, which you have not clearly communicated, but this is the general idea.\n",
"if line.startswith('>Sequence'):\n line = line[:-2] # trim 2 characters from the end of the string\n\nor if there could be more than one digit after the period:\nif line.startswith('>Sequence'):\n dot_pos = line.rfind('.') # find position of rightmost period\n line = line[:dot_pos] # truncate upto but not including the dot\n\nEdit for if the sequence occurs on the same line as >Sequence\nIf we know that there will always be only 1 digit to remove we can cut out the period and the digit with:\nline = line[:13] + line[15:]\n\nThis is using a feature of Python called slices. The indexes are zero-based and exclusive for the end of the range so line[0:13] will give us the first 13 characters of line. Except that if we want to start at the beginning the 0 is optional so line[:13] does the same thing. Similarly line[15:] gives us the substring starting at character 15 to the end of the string.\n",
"import re\ntrimmedtext = re.sub(r'(\\d+\\.\\d+)\\.\\d', '$1', text)\n\nShould do it. Somewhat simpler than searching for start characters (and it won't effect your DNA chains)\n",
"map \"\".join(line.split('.')[:-1]) to each line of the file.\n",
"Here's a short script. Run it like: script [filename to clean]. Lots of error handling omitted.\nIt operates using generators, so it should work fine on huge files as well.\nimport sys\nimport os\n\ndef clean_line(line):\n if line.startswith(\">\"):\n return line.rstrip()[:-2]\n else:\n return line.rstrip()\n\ndef clean(input):\n for line in input:\n yield clean_line(line)\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n\n print \"Cleaning %s; output to %s..\" % (filename, filename + \".clean\")\n\n input = None\n output = None\n try:\n input = open(filename, \"r\")\n output = open(filename + \".clean\", \"w\")\n for line in clean(input):\n output.write(line + os.linesep)\n print \": \" + line\n except:\n input.close()\n if output != None:\n output.close()\n\n",
"import re\n\ninput_file = open('in')\noutput_file = open('out', 'w')\n\nfor line in input_file:\n line = re.sub(r'(\\d+[.]\\d+)[.]\\d+', r'\\1', line)\n output_file.write(line)\n\n"
] |
[
7,
4,
4,
2,
1,
0
] |
[] |
[] |
[
"file",
"python"
] |
stackoverflow_0001278664_file_python.txt
|
Q:
example for using streamhtmlparser
Can anyone give me an example on how to use http://code.google.com/p/streamhtmlparser to parse out all the A tag href's from an html document? (either C++ code or python code is ok, but I would prefer an example using the python bindings)
I can see how it works in the python tests, but they expect special tokens already in the html at which points it checks state values. I don't see how to get the proper callbacks during state changes when feeding the parser plain html.
I can get some of the information I am looking for with the following code, but I need to feed it blocks of html not just characters at a time, and i need to know when it's finished with a tag,attribute, etc not just if it's in a tag, attribute, or value.
import py_streamhtmlparser
parser = py_streamhtmlparser.HtmlParser()
html = """<html><body><a href='http://google.com'>link</a></body></html>"""
for index, character in enumerate(html):
parser.Parse(character)
print index, character, parser.Tag(), parser.Attribute(), parser.Value(), parser.ValueIndex()
you can see a sample run of this code here
A:
import py_streamhtmlparser
parser = py_streamhtmlparser.HtmlParser()
html = """<html><body><a href='http://google.com' id=100>
link</a><p><a href=heise.de/></body></html>"""
cur_attr = cur_value = None
for index, character in enumerate(html):
parser.Parse(character)
if parser.State() == py_streamhtmlparser.HTML_STATE_VALUE:
# we are in an attribute value. Record what we got so far
cur_tag = parser.Tag()
cur_attr = parser.Attribute()
cur_value = parser.Value()
continue
if cur_value:
# we are not in the value anymore, but have seen one just before
print "%r %r %r" % (cur_tag, cur_attr, cur_value)
cur_value = None
gives
'a' 'href' 'http://google.com'
'a' 'id' '100'
'a' 'href' 'heise.de/'
If you only want the href attributes, check for cur_attr at the point of the print as well.
Edit: The Python bindings currently don't support any kind of event callbacks. So the only output available is the state at the end of processing the respective input. To change that, htmlparser.c:exit_attr (etc.) could be augmented with a callback function. However, this is really not the purpose of streamhtmlparser - it is meant as a templating engine, where you have markers in the source, and you process the input character by character.
|
example for using streamhtmlparser
|
Can anyone give me an example on how to use http://code.google.com/p/streamhtmlparser to parse out all the A tag href's from an html document? (either C++ code or python code is ok, but I would prefer an example using the python bindings)
I can see how it works in the python tests, but they expect special tokens already in the html at which points it checks state values. I don't see how to get the proper callbacks during state changes when feeding the parser plain html.
I can get some of the information I am looking for with the following code, but I need to feed it blocks of html not just characters at a time, and i need to know when it's finished with a tag,attribute, etc not just if it's in a tag, attribute, or value.
import py_streamhtmlparser
parser = py_streamhtmlparser.HtmlParser()
html = """<html><body><a href='http://google.com'>link</a></body></html>"""
for index, character in enumerate(html):
parser.Parse(character)
print index, character, parser.Tag(), parser.Attribute(), parser.Value(), parser.ValueIndex()
you can see a sample run of this code here
|
[
"import py_streamhtmlparser\nparser = py_streamhtmlparser.HtmlParser()\nhtml = \"\"\"<html><body><a href='http://google.com' id=100>\n link</a><p><a href=heise.de/></body></html>\"\"\"\ncur_attr = cur_value = None\nfor index, character in enumerate(html):\n parser.Parse(character)\n if parser.State() == py_streamhtmlparser.HTML_STATE_VALUE:\n # we are in an attribute value. Record what we got so far\n cur_tag = parser.Tag()\n cur_attr = parser.Attribute()\n cur_value = parser.Value()\n continue\n if cur_value:\n # we are not in the value anymore, but have seen one just before\n print \"%r %r %r\" % (cur_tag, cur_attr, cur_value)\n cur_value = None\n\ngives\n'a' 'href' 'http://google.com'\n'a' 'id' '100'\n'a' 'href' 'heise.de/'\n\nIf you only want the href attributes, check for cur_attr at the point of the print as well.\nEdit: The Python bindings currently don't support any kind of event callbacks. So the only output available is the state at the end of processing the respective input. To change that, htmlparser.c:exit_attr (etc.) could be augmented with a callback function. However, this is really not the purpose of streamhtmlparser - it is meant as a templating engine, where you have markers in the source, and you process the input character by character.\n"
] |
[
1
] |
[] |
[] |
[
"c++",
"html",
"parsing",
"python"
] |
stackoverflow_0001261264_c++_html_parsing_python.txt
|
Q:
Python and regex
i have to parse need string.
Here is command I execute in Linux console:
amixer get Master |grep Mono:
And get, for example,
Mono: Playback 61 [95%] [-3.00dB] [on]
Then i test it from python-console:
import re,os
print re.search( ur"(?<=\[)[0-9]{1,3}", u" Mono: Playback 61 [95%] [-3.00dB] [on]" ).group()[0]
And get result: 95. It's that, what i need. But if I'll change my script to this:
print re.search( ur"(?<=\[)[0-9]{1,3}", str(os.system("amixer get Master |grep Mono:")) ).group()[0]
It'll returns None-object. Why?
A:
os.system() returns the exit code from the application, not the text output of the application.
You should read up on the subprocess Python module; it will do what you need.
A:
Instead of using os.system(), use the subprocess module:
from subprocess import Popen, PIPE
p = Popen("amixer get Master | grep Mono:", shell = True, stdout = PIPE)
stdout = p.stdout.read()
print re.search( ur"(?<=\[)[0-9]{1,3}", stdout).group()
A:
How to run a process and get the output:
http://docs.python.org/library/popen2.html
|
Python and regex
|
i have to parse need string.
Here is command I execute in Linux console:
amixer get Master |grep Mono:
And get, for example,
Mono: Playback 61 [95%] [-3.00dB] [on]
Then i test it from python-console:
import re,os
print re.search( ur"(?<=\[)[0-9]{1,3}", u" Mono: Playback 61 [95%] [-3.00dB] [on]" ).group()[0]
And get result: 95. It's that, what i need. But if I'll change my script to this:
print re.search( ur"(?<=\[)[0-9]{1,3}", str(os.system("amixer get Master |grep Mono:")) ).group()[0]
It'll returns None-object. Why?
|
[
"os.system() returns the exit code from the application, not the text output of the application.\nYou should read up on the subprocess Python module; it will do what you need.\n",
"Instead of using os.system(), use the subprocess module:\nfrom subprocess import Popen, PIPE\np = Popen(\"amixer get Master | grep Mono:\", shell = True, stdout = PIPE)\nstdout = p.stdout.read()\nprint re.search( ur\"(?<=\\[)[0-9]{1,3}\", stdout).group()\n\n",
"How to run a process and get the output:\nhttp://docs.python.org/library/popen2.html\n"
] |
[
7,
1,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001279087_python_regex.txt
|
Q:
BytesIO with python v2.5
Question:
How do I get a byte stream that works like StringIO for Python 2.5?
Application:
I'm converting a PDF to text, but don't want to save a file to the hard disk.
Other Thoughts:
I figured I could use StringIO, but there's no mode parameter (I guess "String" implies text mode).
Apparently the io.BytesIO class is new in v2.6, so that doesn't work for me either.
I've got a solution with the tempfile module, but I'd like to avoid any reads/writes to/from the hard disk.
A:
In Python 2.x, "string" means "bytes", and "unicode" means "string". You should use the StringIO or cStringIO modules. The mode will depend on which kind of data you pass in as the buffer parameter.
A:
If you're working with PDF, then StringIO should be fine as long as you pay heed to the docs:
The StringIO object can accept either Unicode or 8-bit strings, but mixing the two may take some care. If both are used, 8-bit strings that cannot be interpreted as 7-bit ASCII (that use the 8th bit) will cause a UnicodeError to be raised when getvalue() is called.
Note this is not true for cStringIO:
Unlike the memory files implemented by the StringIO module, those provided by this module are not able to accept Unicode strings that cannot be encoded as plain ASCII strings.
See the full documentation at:
http://docs.python.org/library/stringio.html
|
BytesIO with python v2.5
|
Question:
How do I get a byte stream that works like StringIO for Python 2.5?
Application:
I'm converting a PDF to text, but don't want to save a file to the hard disk.
Other Thoughts:
I figured I could use StringIO, but there's no mode parameter (I guess "String" implies text mode).
Apparently the io.BytesIO class is new in v2.6, so that doesn't work for me either.
I've got a solution with the tempfile module, but I'd like to avoid any reads/writes to/from the hard disk.
|
[
"In Python 2.x, \"string\" means \"bytes\", and \"unicode\" means \"string\". You should use the StringIO or cStringIO modules. The mode will depend on which kind of data you pass in as the buffer parameter.\n",
"If you're working with PDF, then StringIO should be fine as long as you pay heed to the docs:\n\nThe StringIO object can accept either Unicode or 8-bit strings, but mixing the two may take some care. If both are used, 8-bit strings that cannot be interpreted as 7-bit ASCII (that use the 8th bit) will cause a UnicodeError to be raised when getvalue() is called.\n\nNote this is not true for cStringIO:\n\nUnlike the memory files implemented by the StringIO module, those provided by this module are not able to accept Unicode strings that cannot be encoded as plain ASCII strings.\n\nSee the full documentation at:\n\nhttp://docs.python.org/library/stringio.html\n\n"
] |
[
4,
2
] |
[] |
[] |
[
"bytesio",
"python",
"stringio"
] |
stackoverflow_0001279244_bytesio_python_stringio.txt
|
Q:
Multiple Datacenters
I am finding a lack of information regarding handling multiple datacenters. What tools and techniques are available for taking advantage of multiple datacenters? A requirement is that the databases become consistent very quickly.
A:
http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster.html
|
Multiple Datacenters
|
I am finding a lack of information regarding handling multiple datacenters. What tools and techniques are available for taking advantage of multiple datacenters? A requirement is that the databases become consistent very quickly.
|
[
"http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster.html\n\n"
] |
[
1
] |
[] |
[] |
[
"mysql",
"python",
"replication",
"scaling"
] |
stackoverflow_0001279358_mysql_python_replication_scaling.txt
|
Q:
sqlalchemy - grouping items and iterating over the sub-lists
Consider a table like this:
| Name | Version | Other |
| ---------------------|-------|
| Foo | 1 | 'a' |
| Foo | 2 | 'b' |
| Bar | 5 | 'c' |
| Baz | 3 | 'd' |
| Baz | 4 | 'e' |
| Baz | 5 | 'f' |
--------------------------------
I would like to write a sqlalchemy query statement to list all items (as mapper objects, not just the Name column) with max version: Foo-2-b, Bar-5-c, Baz-5-f. I understand that I would have to use the group_by method, but beyond that I am puzzled as to how to retrieve the sub-lists (and then find the max element). SQLAlchemy documentation is apparently not very clear on this.
In the real scenario, there are many other columns (like 'Other') - which is why I need the actual row object (mapper class) to be returned rather than just the 'Name' value.
A:
If you need full objects you'll need to select maximum versions by name in a subquery and join to that:
max_versions = session.query(Cls.name, func.max(Cls.version).label('max_version'))\
.group_by(Cls.name).subquery()
objs = session.query(Cls).join((max_versions,
and_(Cls.name == max_versions.c.name,
Cls.version == max_versions.c.max_version)
)).all()
This will result in something like this:
SELECT tbl.id AS tbl_id, tbl.name AS tbl_name, tbl.version AS tbl_version
FROM tbl JOIN (SELECT tbl.name AS name, max(tbl.version) AS max_version
FROM tbl GROUP BY tbl.name) AS anon_1 ON tbl.name = anon_1.name AND tbl.version = anon_1.max_version
Be aware that you'll get multiple rows with the same name if there are multiple rows with the max version.
|
sqlalchemy - grouping items and iterating over the sub-lists
|
Consider a table like this:
| Name | Version | Other |
| ---------------------|-------|
| Foo | 1 | 'a' |
| Foo | 2 | 'b' |
| Bar | 5 | 'c' |
| Baz | 3 | 'd' |
| Baz | 4 | 'e' |
| Baz | 5 | 'f' |
--------------------------------
I would like to write a sqlalchemy query statement to list all items (as mapper objects, not just the Name column) with max version: Foo-2-b, Bar-5-c, Baz-5-f. I understand that I would have to use the group_by method, but beyond that I am puzzled as to how to retrieve the sub-lists (and then find the max element). SQLAlchemy documentation is apparently not very clear on this.
In the real scenario, there are many other columns (like 'Other') - which is why I need the actual row object (mapper class) to be returned rather than just the 'Name' value.
|
[
"If you need full objects you'll need to select maximum versions by name in a subquery and join to that:\nmax_versions = session.query(Cls.name, func.max(Cls.version).label('max_version'))\\\n .group_by(Cls.name).subquery()\nobjs = session.query(Cls).join((max_versions,\n and_(Cls.name == max_versions.c.name,\n Cls.version == max_versions.c.max_version)\n )).all()\n\nThis will result in something like this:\nSELECT tbl.id AS tbl_id, tbl.name AS tbl_name, tbl.version AS tbl_version\nFROM tbl JOIN (SELECT tbl.name AS name, max(tbl.version) AS max_version\nFROM tbl GROUP BY tbl.name) AS anon_1 ON tbl.name = anon_1.name AND tbl.version = anon_1.max_version\n\nBe aware that you'll get multiple rows with the same name if there are multiple rows with the max version.\n"
] |
[
6
] |
[
"Here's the SQL you can call with the engine.execute command:\nselect\n t1.*\nfrom\n table t1\n inner join \n (select\n Name,\n max(version) as Version\n from\n table\n group by\n name) s on\n s.name = t1.name\n and s.version = t1.version\n\n"
] |
[
-2
] |
[
"python",
"sql",
"sqlalchemy"
] |
stackoverflow_0001279356_python_sql_sqlalchemy.txt
|
Q:
SQLAlchemy - Trying Eager loading.. Attribute Error
I access a a postgres table using SQLAlchemy. I want a query to have eagerloading.
from sqlalchemy.orm import sessionmaker, scoped_session, eagerload
from settings import DATABASE_USER, DATABASE_PASSWORD, DATABASE_HOST, DATABASE_PORT, DATABASE_NAME
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, Boolean, MetaData, ForeignKey
from sqlalchemy.orm import mapper
from sqlalchemy.ext.declarative import declarative_base
def create_session():
engine = create_engine('postgres://%s:%s@%s:%s/%s' % (DATABASE_USER, DATABASE_PASSWORD, DATABASE_HOST, DATABASE_PORT, DATABASE_NAME), echo=True)
Session = scoped_session(sessionmaker(bind=engine))
return Session()
Base = declarative_base()
class Zipcode(Base):
__tablename__ = 'zipcode'
zipcode = Column(String(6), primary_key = True, nullable=False)
city = Column(String(30), nullable=False)
state = Column(String(30), nullable=False)
session = create_session()
query = session.query(Zipcode).options(eagerload('zipcode')).filter(Zipcode.state.in_(['NH', 'ME']))
#query = session.query(Zipcode.zipcode).filter(Zipcode.state.in_(['NH', 'ME']))
print query.count()
This fails with
AttributeError: 'ColumnProperty' object has no attribute 'mapper'
One without eagerloading returns the records correctly.
I am new to SQLAlchemy. I am not sure what the problem is. Any pointers?
A:
You can only eager load on a relation property. Not on the table itself. Eager loading is meant for loading objects from other tables at the same time as getting a particular object. The way you load all the objects for a query will be simply adding all().
query = session.query(Zipcode).options(eagerload('zipcode')).filter(Zipcode.state.in_(['NH', 'ME'])).all()
The query will now be a list of all objects (rows) in the table and len(query) will give you the count.
|
SQLAlchemy - Trying Eager loading.. Attribute Error
|
I access a a postgres table using SQLAlchemy. I want a query to have eagerloading.
from sqlalchemy.orm import sessionmaker, scoped_session, eagerload
from settings import DATABASE_USER, DATABASE_PASSWORD, DATABASE_HOST, DATABASE_PORT, DATABASE_NAME
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, Boolean, MetaData, ForeignKey
from sqlalchemy.orm import mapper
from sqlalchemy.ext.declarative import declarative_base
def create_session():
engine = create_engine('postgres://%s:%s@%s:%s/%s' % (DATABASE_USER, DATABASE_PASSWORD, DATABASE_HOST, DATABASE_PORT, DATABASE_NAME), echo=True)
Session = scoped_session(sessionmaker(bind=engine))
return Session()
Base = declarative_base()
class Zipcode(Base):
__tablename__ = 'zipcode'
zipcode = Column(String(6), primary_key = True, nullable=False)
city = Column(String(30), nullable=False)
state = Column(String(30), nullable=False)
session = create_session()
query = session.query(Zipcode).options(eagerload('zipcode')).filter(Zipcode.state.in_(['NH', 'ME']))
#query = session.query(Zipcode.zipcode).filter(Zipcode.state.in_(['NH', 'ME']))
print query.count()
This fails with
AttributeError: 'ColumnProperty' object has no attribute 'mapper'
One without eagerloading returns the records correctly.
I am new to SQLAlchemy. I am not sure what the problem is. Any pointers?
|
[
"You can only eager load on a relation property. Not on the table itself. Eager loading is meant for loading objects from other tables at the same time as getting a particular object. The way you load all the objects for a query will be simply adding all().\nquery = session.query(Zipcode).options(eagerload('zipcode')).filter(Zipcode.state.in_(['NH', 'ME'])).all()\n\nThe query will now be a list of all objects (rows) in the table and len(query) will give you the count.\n"
] |
[
2
] |
[] |
[] |
[
"postgresql",
"python",
"sqlalchemy"
] |
stackoverflow_0001279583_postgresql_python_sqlalchemy.txt
|
Q:
Default route doesn't work
I'm using the standard routing module with pylons to try and setup a default route for the home page of my website.
I've followed the instructions in the docs and here http://routes.groovie.org/recipes.html but when I try http://127.0.0.1:5000/ I just get the 'Welcome to Pylons' default page.
My config/routing.py file looks like this
from pylons import config
from routes import Mapper
def make_map():
"""Create, configure and return the routes Mapper"""
map = Mapper(directory=config['pylons.paths']['controllers'],
always_scan=config['debug'])
map.minimization = False
map.connect('/error/{action}', controller='error')
map.connect('/error/{action}/{id}', controller='error')
# CUSTOM ROUTES HERE
map.connect( '', controller='main', action='index' )
map.connect('/{controller}/{action}')
map.connect('/{controller}/{action}/{id}')
return map
I've also tried
map.connect( '/', controller='main', action='index' )
and (using http://127.0.0.1:5000/homepage/)
map.connect( 'homepage', controller='main', action='index' )
But nothing works at all. I know its reloading my config file as I used
paster serve --reload development.ini
to start the server
system info
$ paster --version
PasteScript 1.7.3 from /Library/Python/2.5/site-packages/PasteScript-1.7.3-py2.5.egg (python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12))
A:
You have to delete the static page (myapp/public/index.html). Static
files take priority due to the Cascade configuration at the end of
middleware.py.
|
Default route doesn't work
|
I'm using the standard routing module with pylons to try and setup a default route for the home page of my website.
I've followed the instructions in the docs and here http://routes.groovie.org/recipes.html but when I try http://127.0.0.1:5000/ I just get the 'Welcome to Pylons' default page.
My config/routing.py file looks like this
from pylons import config
from routes import Mapper
def make_map():
"""Create, configure and return the routes Mapper"""
map = Mapper(directory=config['pylons.paths']['controllers'],
always_scan=config['debug'])
map.minimization = False
map.connect('/error/{action}', controller='error')
map.connect('/error/{action}/{id}', controller='error')
# CUSTOM ROUTES HERE
map.connect( '', controller='main', action='index' )
map.connect('/{controller}/{action}')
map.connect('/{controller}/{action}/{id}')
return map
I've also tried
map.connect( '/', controller='main', action='index' )
and (using http://127.0.0.1:5000/homepage/)
map.connect( 'homepage', controller='main', action='index' )
But nothing works at all. I know its reloading my config file as I used
paster serve --reload development.ini
to start the server
system info
$ paster --version
PasteScript 1.7.3 from /Library/Python/2.5/site-packages/PasteScript-1.7.3-py2.5.egg (python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12))
|
[
"You have to delete the static page (myapp/public/index.html). Static\nfiles take priority due to the Cascade configuration at the end of\nmiddleware.py. \n"
] |
[
9
] |
[] |
[] |
[
"pylons",
"python",
"routes"
] |
stackoverflow_0001279403_pylons_python_routes.txt
|
Q:
How to classify users into different countries, based on the Location field
Most web applications have a Location field, in which uses may enter a Location of their choice.
How would you classify users into different countries, based on the location entered.
For eg, I used the Stack Overflow dump of users.xml and extracted users' names, reputation and location:
['Jeff Atwood', '12853', 'El Cerrito, CA']
['Jarrod Dixon', '1114', 'Morganton, NC']
['Sneakers OToole', '200', 'Unknown']
['Greg Hurlman', '5327', 'Halfway between the boardwalk and Six Flags, NJ']
['Power-coder', '812', 'Burlington, Ontario, Canada']
['Chris Jester-Young', '16509', 'Durham, NC']
['Teifion', '7024', 'Wales']
['Grant', '3333', 'Georgia']
['TimM', '133', 'Alabama']
['Leon Bambrick', '2450', 'Australia']
['Coincoin', '3801', 'Montreal']
['Tom Grochowicz', '125', 'NJ']
['Rex M', '12822', 'US']
['Dillie-O', '7109', 'Prescott, AZ']
['Pete', '653', 'Reynoldsburg, OH']
['Nick Berardi', '9762', 'Phoenixville, PA']
['Kandis', '39', '']
['Shawn', '4248', 'philadelphia']
['Yaakov Ellis', '3651', 'Israel']
['redwards', '21', 'US']
['Dave Ward', '4831', 'Atlanta']
['Liron Yahdav', '527', 'San Rafael, CA']
['Geoff Dalgas', '648', 'Corvallis, OR']
['Kevin Dente', '1619', 'Oakland, CA']
['Tom', '3316', '']
['denny', '573', 'Winchester, VA']
['Karl Seguin', '4195', 'Ottawa']
['Bob', '4652', 'US']
['saniul', '2352', 'London, UK']
['saint_groceon', '1087', 'Houston, TX']
['Tim Boland', '192', 'Cincinnati Ohio']
['Darren Kopp', '5807', 'Woods Cross, UT']
using the following Python script:
from xml.etree import ElementTree
root = ElementTree.parse('SO Export/so-export-2009-05/users.xml').getroot()
items = ['DisplayName','Reputation','Location']
def loop1():
for count,i in enumerate(root):
det = [i.get(x) for x in items]
print det
if count>30: break
loop1()
What is the simplest way to classify people into different countries? Are there any ready lookup tables available that provide me an output saying X location belongs to Y country?
The lookup table need not be totally accurate. Reasonably accurate answers are obtained by querying the location string on Google, or better still, Wolfram Alpha.
A:
You best bet is to use a Geocoding API like geopy (some Examples).
The Google Geocoding API, for example, will return the country in the CountryNameCode-field of the response.
With just this one location field the number of false matches will probably be relatively high, but maybe it is good enough.
If you had server logs, you could try to also look up the users IP address with an IP geocoder (more information and pointers on Wikipedia
A:
Force users to specify country, because you'll have to deal with ambiguities. This would be the right way.
If that's not possible, at least make your best-guess in conjunction with their IP address.
For example, ['Grant', '3333', 'Georgia']
Is this Georgia, USA?
Or is this the Republic of Georgia?
If their IP address suggests somewhere in Central Asia or Eastern Europe, then chances are it's the Republic of Georgia. If it's North America, chances are pretty good they mean Georgia, USA.
Note that mappings for IP address to country isn't 100% accurate, and the database needs to be updated regularly. In my opinion, far too much trouble.
|
How to classify users into different countries, based on the Location field
|
Most web applications have a Location field, in which uses may enter a Location of their choice.
How would you classify users into different countries, based on the location entered.
For eg, I used the Stack Overflow dump of users.xml and extracted users' names, reputation and location:
['Jeff Atwood', '12853', 'El Cerrito, CA']
['Jarrod Dixon', '1114', 'Morganton, NC']
['Sneakers OToole', '200', 'Unknown']
['Greg Hurlman', '5327', 'Halfway between the boardwalk and Six Flags, NJ']
['Power-coder', '812', 'Burlington, Ontario, Canada']
['Chris Jester-Young', '16509', 'Durham, NC']
['Teifion', '7024', 'Wales']
['Grant', '3333', 'Georgia']
['TimM', '133', 'Alabama']
['Leon Bambrick', '2450', 'Australia']
['Coincoin', '3801', 'Montreal']
['Tom Grochowicz', '125', 'NJ']
['Rex M', '12822', 'US']
['Dillie-O', '7109', 'Prescott, AZ']
['Pete', '653', 'Reynoldsburg, OH']
['Nick Berardi', '9762', 'Phoenixville, PA']
['Kandis', '39', '']
['Shawn', '4248', 'philadelphia']
['Yaakov Ellis', '3651', 'Israel']
['redwards', '21', 'US']
['Dave Ward', '4831', 'Atlanta']
['Liron Yahdav', '527', 'San Rafael, CA']
['Geoff Dalgas', '648', 'Corvallis, OR']
['Kevin Dente', '1619', 'Oakland, CA']
['Tom', '3316', '']
['denny', '573', 'Winchester, VA']
['Karl Seguin', '4195', 'Ottawa']
['Bob', '4652', 'US']
['saniul', '2352', 'London, UK']
['saint_groceon', '1087', 'Houston, TX']
['Tim Boland', '192', 'Cincinnati Ohio']
['Darren Kopp', '5807', 'Woods Cross, UT']
using the following Python script:
from xml.etree import ElementTree
root = ElementTree.parse('SO Export/so-export-2009-05/users.xml').getroot()
items = ['DisplayName','Reputation','Location']
def loop1():
for count,i in enumerate(root):
det = [i.get(x) for x in items]
print det
if count>30: break
loop1()
What is the simplest way to classify people into different countries? Are there any ready lookup tables available that provide me an output saying X location belongs to Y country?
The lookup table need not be totally accurate. Reasonably accurate answers are obtained by querying the location string on Google, or better still, Wolfram Alpha.
|
[
"You best bet is to use a Geocoding API like geopy (some Examples).\nThe Google Geocoding API, for example, will return the country in the CountryNameCode-field of the response.\nWith just this one location field the number of false matches will probably be relatively high, but maybe it is good enough.\nIf you had server logs, you could try to also look up the users IP address with an IP geocoder (more information and pointers on Wikipedia\n",
"Force users to specify country, because you'll have to deal with ambiguities. This would be the right way.\nIf that's not possible, at least make your best-guess in conjunction with their IP address.\nFor example, ['Grant', '3333', 'Georgia']\nIs this Georgia, USA?\nOr is this the Republic of Georgia?\nIf their IP address suggests somewhere in Central Asia or Eastern Europe, then chances are it's the Republic of Georgia. If it's North America, chances are pretty good they mean Georgia, USA.\nNote that mappings for IP address to country isn't 100% accurate, and the database needs to be updated regularly. In my opinion, far too much trouble.\n"
] |
[
2,
1
] |
[] |
[] |
[
"elementtree",
"geolocation",
"python",
"xml"
] |
stackoverflow_0001280266_elementtree_geolocation_python_xml.txt
|
Q:
Executes Fine In Jail Shell but not in Browser
My python script executes fine in Jail shell, putting out html which I can pipe to an html file. When I look at the file, it's exactly what I want. However when I try to run the file from a browser I get a 500 error. According to the instructions at http://imgseekweb.sourceforge.net/install.html the cgi-bin should be in suEXEC mode. My hosting company changed the cgi-script handler to allow .py files and he made a little test script that works fine, but mine still does not.
I tried to make suEXEC a custom Apache file handler of .py files in cPanel, but this did not seem to help, it just made the Python script print out as text in the browser without executing.
Any ideas for resolution. I'm so close now that script at least works in Jail Shell. I even try to fake out Apache by making the test file launch a system execution of the python script but that also caused a 500 error, although it too spit out the correct html in Jail Shell even doing some Lynx like display of the html this time.
Whatever machinations I did also caused the test.py to stop working. It now gives a 500 error too even with all the code I added removed.
A:
My hoster resolved the issue. It turns out I'm working in a Windows environment with Microsoft Expression 2.0 HTML editor. The code needed to be converted to a UNIX environment with dos2unix which is installed in the hoster environment and can be accessed from the shell...Thanks for reading this thread to any who read it.
|
Executes Fine In Jail Shell but not in Browser
|
My python script executes fine in Jail shell, putting out html which I can pipe to an html file. When I look at the file, it's exactly what I want. However when I try to run the file from a browser I get a 500 error. According to the instructions at http://imgseekweb.sourceforge.net/install.html the cgi-bin should be in suEXEC mode. My hosting company changed the cgi-script handler to allow .py files and he made a little test script that works fine, but mine still does not.
I tried to make suEXEC a custom Apache file handler of .py files in cPanel, but this did not seem to help, it just made the Python script print out as text in the browser without executing.
Any ideas for resolution. I'm so close now that script at least works in Jail Shell. I even try to fake out Apache by making the test file launch a system execution of the python script but that also caused a 500 error, although it too spit out the correct html in Jail Shell even doing some Lynx like display of the html this time.
Whatever machinations I did also caused the test.py to stop working. It now gives a 500 error too even with all the code I added removed.
|
[
"My hoster resolved the issue. It turns out I'm working in a Windows environment with Microsoft Expression 2.0 HTML editor. The code needed to be converted to a UNIX environment with dos2unix which is installed in the hoster environment and can be accessed from the shell...Thanks for reading this thread to any who read it.\n"
] |
[
0
] |
[] |
[] |
[
"browser",
"python",
"scripting"
] |
stackoverflow_0001276497_browser_python_scripting.txt
|
Q:
Controlling Django ModelForm output
I've got a Model in Django, example code below (not my actual code):
class Department(models.Model):
name = models.CharField(max_length=100)
abbreviation = models.CharField(max_length=4)
Let's say I do the following in the Django shell:
>>> Department(name='Computer Science',abbreviation='C S ').save()
>>> Department(name='Mathematics',abbreviation='MATH').save()
>>> Department(name='Anthropology',abbreviation='ANTH').save()
I now have those four departments stored in my database. Say we have another class, Course, which belongs to a Department:
class Course(models.Model):
department = models.ForeignKey('Department')
number = models.IntegerField()
class CourseForm(ModelForm):
class Meta:
model = Course
If I render the ModelForm object directly in a template by just referncing a variable, say form, which got passed down, the Departments appear in a drop-down box (an HTML select box). So far so good.
The problem is: the items in the select box are sorted by ID. So they appear as:
Computer ScienceMathematicsAnthropology
But, I want them sorted alphabetically, i.e.
AnthropologyComputer ScienceMathematics
How can I change the way these items are sorted in the ModelForm code, or in the Model code, rather than in the template?
And in general, how can I customize the way a particular field or widget works when generated by a ModelForm?
A:
How can I change the way these items are sorted in the ModelForm code, or in the Model code, rather than in the template?
One thing you can do is add an ordering meta option. You do this by adding a Meta inner class to a class, with the ordering attribute specified:
class Department(models.Model):
name = models.CharField(max_length=100)
abbreviation = models.CharField(max_length=4)
class Meta:
ordering = ["name"]
Note that this changes default ordering for Department models (not just when used in a form).
And in general, how can I customize the way a particular field or widget works when generated by a ModelForm?
You'll want to read the Django docs about ModelForm and built-in form fields. In particular, pay attention to the optional widget attribute, which allows you to change a form field's widget. The difference types of widgets are described here.
A:
mipadi has shown you how to modify the ordering for your Department model, and this is reflected in the form widget.
However, if you only wanted to change the ordering for that widget, and leave the model's default ordering as something else, you could do this:
class CourseForm(ModelForm):
department = forms.ModelChoiceField(
queryset=Department.objects.order_by('name'))
class Meta:
model = Course
|
Controlling Django ModelForm output
|
I've got a Model in Django, example code below (not my actual code):
class Department(models.Model):
name = models.CharField(max_length=100)
abbreviation = models.CharField(max_length=4)
Let's say I do the following in the Django shell:
>>> Department(name='Computer Science',abbreviation='C S ').save()
>>> Department(name='Mathematics',abbreviation='MATH').save()
>>> Department(name='Anthropology',abbreviation='ANTH').save()
I now have those four departments stored in my database. Say we have another class, Course, which belongs to a Department:
class Course(models.Model):
department = models.ForeignKey('Department')
number = models.IntegerField()
class CourseForm(ModelForm):
class Meta:
model = Course
If I render the ModelForm object directly in a template by just referncing a variable, say form, which got passed down, the Departments appear in a drop-down box (an HTML select box). So far so good.
The problem is: the items in the select box are sorted by ID. So they appear as:
Computer ScienceMathematicsAnthropology
But, I want them sorted alphabetically, i.e.
AnthropologyComputer ScienceMathematics
How can I change the way these items are sorted in the ModelForm code, or in the Model code, rather than in the template?
And in general, how can I customize the way a particular field or widget works when generated by a ModelForm?
|
[
"\nHow can I change the way these items are sorted in the ModelForm code, or in the Model code, rather than in the template?\n\nOne thing you can do is add an ordering meta option. You do this by adding a Meta inner class to a class, with the ordering attribute specified:\nclass Department(models.Model):\n name = models.CharField(max_length=100)\n abbreviation = models.CharField(max_length=4)\n\n class Meta:\n ordering = [\"name\"]\n\nNote that this changes default ordering for Department models (not just when used in a form).\n\nAnd in general, how can I customize the way a particular field or widget works when generated by a ModelForm?\n\nYou'll want to read the Django docs about ModelForm and built-in form fields. In particular, pay attention to the optional widget attribute, which allows you to change a form field's widget. The difference types of widgets are described here.\n",
"mipadi has shown you how to modify the ordering for your Department model, and this is reflected in the form widget.\nHowever, if you only wanted to change the ordering for that widget, and leave the model's default ordering as something else, you could do this:\nclass CourseForm(ModelForm):\n department = forms.ModelChoiceField(\n queryset=Department.objects.order_by('name'))\n\n class Meta:\n model = Course\n\n"
] |
[
9,
1
] |
[] |
[] |
[
"django",
"django_forms",
"django_templates",
"python"
] |
stackoverflow_0001279221_django_django_forms_django_templates_python.txt
|
Q:
remove duplicates from nested dictionaries in list
quick and very basic newbie question.
If i have list of dictionaries looking like this:
L = []
L.append({"value1": value1, "value2": value2, "value3": value3, "value4": value4})
Let's say there exists multiple entries where value3 and value4 are identical to other nested dictionaries. How can i quick and easy find and remove those duplicate dictionaries.
Preserving order is of no importance.
Thanks.
EDIT:
If there are five inputs, like this:
L = [{"value1": fssd, "value2": dsfds, "value3": abcd, "value4": gk},
{"value1": asdasd, "value2": asdas, "value3": dafdd, "value4": sdfsdf},
{"value1": sdfsf, "value2": sdfsdf, "value3": abcd, "value4": gk},
{"value1": asddas, "value2": asdsa, "value3": abcd, "value4": gk},
{"value1": asdasd, "value2": dskksks, "value3": ldlsld, "value4": sdlsld}]
The output shoud look like this:
L = [{"value1": fssd, "value2": dsfds, "value3": abcd, "value4": gk},
{"value1": asdasd, "value2": asdas, "value3": dafdd, "value4": sdfsdf},
{"value1": asdasd, "value2": dskksks, "value3": ldlsld, "value4": sdlsld}
A:
Here's one way:
keyfunc = lambda d: (d['value3'], d['value4'])
from itertools import groupby
giter = groupby(sorted(L, key=keyfunc), keyfunc)
L2 = [g[1].next() for g in giter]
print L2
A:
In Python 2.6 or 3.*:
import itertools
import pprint
L = [{"value1": "fssd", "value2": "dsfds", "value3": "abcd", "value4": "gk"},
{"value1": "asdasd", "value2": "asdas", "value3": "dafdd", "value4": "sdfsdf"},
{"value1": "sdfsf", "value2": "sdfsdf", "value3": "abcd", "value4": "gk"},
{"value1": "asddas", "value2": "asdsa", "value3": "abcd", "value4": "gk"},
{"value1": "asdasd", "value2": "dskksks", "value3": "ldlsld", "value4": "sdlsld"}]
getvals = operator.itemgetter('value3', 'value4')
L.sort(key=getvals)
result = []
for k, g in itertools.groupby(L, getvals):
result.append(next(g))
L[:] = result
pprint.pprint(L)
Almost the same in Python 2.5, except you have to use g.next() instead of next(g) in the append.
A:
You can use a temporary array to store an items dict. The previous code was bugged for removing items in the for loop.
(v,r) = ([],[])
for i in l:
if ('value4', i['value4']) not in v and ('value3', i['value3']) not in v:
r.append(i)
v.extend(i.items())
l = r
Your test:
l = [{"value1": 'fssd', "value2": 'dsfds', "value3": 'abcd', "value4": 'gk'},
{"value1": 'asdasd', "value2": 'asdas', "value3": 'dafdd', "value4": 'sdfsdf'},
{"value1": 'sdfsf', "value2": 'sdfsdf', "value3": 'abcd', "value4": 'gk'},
{"value1": 'asddas', "value2": 'asdsa', "value3": 'abcd', "value4": 'gk'},
{"value1": 'asdasd', "value2": 'dskksks', "value3": 'ldlsld', "value4": 'sdlsld'}]
ouputs
{'value4': 'gk', 'value3': 'abcd', 'value2': 'dsfds', 'value1': 'fssd'}
{'value4': 'sdfsdf', 'value3': 'dafdd', 'value2': 'asdas', 'value1': 'asdasd'}
{'value4': 'sdlsld', 'value3': 'ldlsld', 'value2': 'dskksks', 'value1': 'asdasd'}
A:
for dic in list:
for anotherdic in list:
if dic != anotherdic:
if dic["value3"] == anotherdic["value3"] or dic["value4"] == anotherdic["value4"]:
list.remove(anotherdic)
Tested with
list = [{"value1": 'fssd', "value2": 'dsfds', "value3": 'abcd', "value4": 'gk'},
{"value1": 'asdasd', "value2": 'asdas', "value3": 'dafdd', "value4": 'sdfsdf'},
{"value1": 'sdfsf', "value2": 'sdfsdf', "value3": 'abcd', "value4": 'gk'},
{"value1": 'asddas', "value2": 'asdsa', "value3": 'abcd', "value4": 'gk'},
{"value1": 'asdasd', "value2": 'dskksks', "value3": 'ldlsld', "value4": 'sdlsld'}]
worked fine for me :)
A:
That's a list of one dictionary and but, assuming there are more dictionaries in the list l:
l = [ldict for ldict in l if ldict.get("value3") != value3 or ldict.get("value4") != value4]
But is that what you really want to do? Perhaps you need to refine your description.
BTW, don't use list as a name since it is the name of a Python built-in.
EDIT: Assuming you started with a list of dictionaries, rather than a list of lists of 1 dictionary each that should work with your example. It wouldn't work if either of the values were None, so better something like:
l = [ldict for ldict in l if not ( ("value3" in ldict and ldict["value3"] == value3) and ("value4" in ldict and ldict["value4"] == value4) )]
But it still seems like an unusual data structure.
EDIT: no need to use explicit gets.
Also, there are always tradeoffs in solutions. Without more info and without actually measuring, it's hard to know which performance tradeoffs are most important for the problem. But, as the Zen sez: "Simple is better than complex".
A:
If I understand correctly, you want to discard matches that come later in the original list but do not care about the order of the resulting list, so:
(Tested with 2.5.2)
tempDict = {}
for d in L[::-1]:
tempDict[(d["value3"],d["value4"])] = d
L[:] = tempDict.itervalues()
tempDict = None
|
remove duplicates from nested dictionaries in list
|
quick and very basic newbie question.
If i have list of dictionaries looking like this:
L = []
L.append({"value1": value1, "value2": value2, "value3": value3, "value4": value4})
Let's say there exists multiple entries where value3 and value4 are identical to other nested dictionaries. How can i quick and easy find and remove those duplicate dictionaries.
Preserving order is of no importance.
Thanks.
EDIT:
If there are five inputs, like this:
L = [{"value1": fssd, "value2": dsfds, "value3": abcd, "value4": gk},
{"value1": asdasd, "value2": asdas, "value3": dafdd, "value4": sdfsdf},
{"value1": sdfsf, "value2": sdfsdf, "value3": abcd, "value4": gk},
{"value1": asddas, "value2": asdsa, "value3": abcd, "value4": gk},
{"value1": asdasd, "value2": dskksks, "value3": ldlsld, "value4": sdlsld}]
The output shoud look like this:
L = [{"value1": fssd, "value2": dsfds, "value3": abcd, "value4": gk},
{"value1": asdasd, "value2": asdas, "value3": dafdd, "value4": sdfsdf},
{"value1": asdasd, "value2": dskksks, "value3": ldlsld, "value4": sdlsld}
|
[
"Here's one way:\nkeyfunc = lambda d: (d['value3'], d['value4'])\n\nfrom itertools import groupby\ngiter = groupby(sorted(L, key=keyfunc), keyfunc)\n\nL2 = [g[1].next() for g in giter]\nprint L2\n\n",
"In Python 2.6 or 3.*:\nimport itertools\nimport pprint\n\nL = [{\"value1\": \"fssd\", \"value2\": \"dsfds\", \"value3\": \"abcd\", \"value4\": \"gk\"},\n {\"value1\": \"asdasd\", \"value2\": \"asdas\", \"value3\": \"dafdd\", \"value4\": \"sdfsdf\"},\n {\"value1\": \"sdfsf\", \"value2\": \"sdfsdf\", \"value3\": \"abcd\", \"value4\": \"gk\"},\n {\"value1\": \"asddas\", \"value2\": \"asdsa\", \"value3\": \"abcd\", \"value4\": \"gk\"},\n {\"value1\": \"asdasd\", \"value2\": \"dskksks\", \"value3\": \"ldlsld\", \"value4\": \"sdlsld\"}]\n\ngetvals = operator.itemgetter('value3', 'value4')\n\nL.sort(key=getvals)\n\nresult = []\nfor k, g in itertools.groupby(L, getvals):\n result.append(next(g))\n\nL[:] = result\npprint.pprint(L)\n\nAlmost the same in Python 2.5, except you have to use g.next() instead of next(g) in the append.\n",
"You can use a temporary array to store an items dict. The previous code was bugged for removing items in the for loop.\n(v,r) = ([],[])\nfor i in l:\n if ('value4', i['value4']) not in v and ('value3', i['value3']) not in v:\n r.append(i)\n v.extend(i.items())\nl = r\n\nYour test:\nl = [{\"value1\": 'fssd', \"value2\": 'dsfds', \"value3\": 'abcd', \"value4\": 'gk'},\n {\"value1\": 'asdasd', \"value2\": 'asdas', \"value3\": 'dafdd', \"value4\": 'sdfsdf'},\n {\"value1\": 'sdfsf', \"value2\": 'sdfsdf', \"value3\": 'abcd', \"value4\": 'gk'},\n {\"value1\": 'asddas', \"value2\": 'asdsa', \"value3\": 'abcd', \"value4\": 'gk'},\n {\"value1\": 'asdasd', \"value2\": 'dskksks', \"value3\": 'ldlsld', \"value4\": 'sdlsld'}]\n\nouputs\n{'value4': 'gk', 'value3': 'abcd', 'value2': 'dsfds', 'value1': 'fssd'}\n{'value4': 'sdfsdf', 'value3': 'dafdd', 'value2': 'asdas', 'value1': 'asdasd'}\n{'value4': 'sdlsld', 'value3': 'ldlsld', 'value2': 'dskksks', 'value1': 'asdasd'}\n\n",
"for dic in list: \n for anotherdic in list:\n if dic != anotherdic:\n if dic[\"value3\"] == anotherdic[\"value3\"] or dic[\"value4\"] == anotherdic[\"value4\"]:\n list.remove(anotherdic)\n\nTested with\nlist = [{\"value1\": 'fssd', \"value2\": 'dsfds', \"value3\": 'abcd', \"value4\": 'gk'},\n{\"value1\": 'asdasd', \"value2\": 'asdas', \"value3\": 'dafdd', \"value4\": 'sdfsdf'},\n{\"value1\": 'sdfsf', \"value2\": 'sdfsdf', \"value3\": 'abcd', \"value4\": 'gk'},\n{\"value1\": 'asddas', \"value2\": 'asdsa', \"value3\": 'abcd', \"value4\": 'gk'},\n{\"value1\": 'asdasd', \"value2\": 'dskksks', \"value3\": 'ldlsld', \"value4\": 'sdlsld'}]\n\nworked fine for me :)\n",
"That's a list of one dictionary and but, assuming there are more dictionaries in the list l:\nl = [ldict for ldict in l if ldict.get(\"value3\") != value3 or ldict.get(\"value4\") != value4]\n\nBut is that what you really want to do? Perhaps you need to refine your description.\nBTW, don't use list as a name since it is the name of a Python built-in.\nEDIT: Assuming you started with a list of dictionaries, rather than a list of lists of 1 dictionary each that should work with your example. It wouldn't work if either of the values were None, so better something like:\nl = [ldict for ldict in l if not ( (\"value3\" in ldict and ldict[\"value3\"] == value3) and (\"value4\" in ldict and ldict[\"value4\"] == value4) )]\n\nBut it still seems like an unusual data structure.\nEDIT: no need to use explicit gets.\nAlso, there are always tradeoffs in solutions. Without more info and without actually measuring, it's hard to know which performance tradeoffs are most important for the problem. But, as the Zen sez: \"Simple is better than complex\".\n",
"If I understand correctly, you want to discard matches that come later in the original list but do not care about the order of the resulting list, so:\n(Tested with 2.5.2)\ntempDict = {}\nfor d in L[::-1]:\n tempDict[(d[\"value3\"],d[\"value4\"])] = d\nL[:] = tempDict.itervalues()\ntempDict = None\n\n"
] |
[
7,
7,
2,
1,
1,
0
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0001279805_dictionary_python.txt
|
Q:
Ruby to python one-liner conversion
I have a little one-liner in my Rails app that returns a range of copyright dates with an optional parameter, e.g.:
def copyright_dates(start_year = Date.today().year)
[start_year, Date.today().year].sort.uniq.join(" - ")
end
I'm moving the app over to Django, and while I love it, I miss a bit of the conciseness. The same method in Python looks like:
def copyright_dates(start_year = datetime.datetime.today().year):
years = list(set([start_year, datetime.datetime.today().year]))
years.sort()
return " - ".join(map(str, years))
It's been years since I've touched Python, so I'm betting there's an easier way to do it. Any ideas?
EDIT: I know lists and sets are a bit of overkill, but I want the following output assuming the code is run in 2009:
copyright_dates() # '2009'
copyright_dates(2007) # '2007 - 2009'
copyright_dates(2012) # '2009 - 2012'
A:
from datetime import datetime
def copyright_dates(start_year = datetime.now().year):
return " - ".join(str(y) for y in sorted(set([start_year, datetime.now().year])))
A:
Watch out for the default parameter which is evaluated once. So if your web application runs over 12/31/09 without a restart, you won't get the expected output.
Try:
def copy(start=None):
start, curr = start if start else datetime.today().year, datetime.today().year
return str(start) if start == curr else '%d - %d' % tuple(sorted([start, curr]))
A:
Lists and sets seem to be overkill to me.
How about this:
def copyright_dates(start=datetime.datetime.today().year):
now = datetime.datetime.today().year
return (start==now and str(now) or "%d - %d" % (min(start, now), max(start, now)))
|
Ruby to python one-liner conversion
|
I have a little one-liner in my Rails app that returns a range of copyright dates with an optional parameter, e.g.:
def copyright_dates(start_year = Date.today().year)
[start_year, Date.today().year].sort.uniq.join(" - ")
end
I'm moving the app over to Django, and while I love it, I miss a bit of the conciseness. The same method in Python looks like:
def copyright_dates(start_year = datetime.datetime.today().year):
years = list(set([start_year, datetime.datetime.today().year]))
years.sort()
return " - ".join(map(str, years))
It's been years since I've touched Python, so I'm betting there's an easier way to do it. Any ideas?
EDIT: I know lists and sets are a bit of overkill, but I want the following output assuming the code is run in 2009:
copyright_dates() # '2009'
copyright_dates(2007) # '2007 - 2009'
copyright_dates(2012) # '2009 - 2012'
|
[
"from datetime import datetime\n\ndef copyright_dates(start_year = datetime.now().year):\n return \" - \".join(str(y) for y in sorted(set([start_year, datetime.now().year])))\n\n",
"Watch out for the default parameter which is evaluated once. So if your web application runs over 12/31/09 without a restart, you won't get the expected output.\nTry:\ndef copy(start=None):\n start, curr = start if start else datetime.today().year, datetime.today().year\n return str(start) if start == curr else '%d - %d' % tuple(sorted([start, curr]))\n\n",
"Lists and sets seem to be overkill to me.\nHow about this:\ndef copyright_dates(start=datetime.datetime.today().year):\n now = datetime.datetime.today().year\n return (start==now and str(now) or \"%d - %d\" % (min(start, now), max(start, now)))\n\n"
] |
[
5,
5,
2
] |
[] |
[] |
[
"django",
"python",
"ruby",
"ruby_on_rails"
] |
stackoverflow_0001280379_django_python_ruby_ruby_on_rails.txt
|
Q:
detecting two simultaneous keys in pyglet (python)
I wanted to know how to detect when two keys are simultaneously pressed using pyglet.
I currently have
def on_text_motion(self, motion):
(dx,dy) = ARROW_KEY_TO_VERSOR[motion]
self.window.move_dx_dy((dx,dy))
But this only gets arrow keys one at a time...
I'd like to distinguish between the combination UP+LEFT
and UP, then LEFT...
Hope I made myself clear
Manu
A:
Try pyglet.window.key.KeyStateHandler:
import pyglet
key = pyglet.window.key
win = pyglet.window.Window()
keyboard = key.KeyStateHandler()
win.push_handlers(keyboard)
print keyboard[key.UP] and keyboard[key.LEFT]
|
detecting two simultaneous keys in pyglet (python)
|
I wanted to know how to detect when two keys are simultaneously pressed using pyglet.
I currently have
def on_text_motion(self, motion):
(dx,dy) = ARROW_KEY_TO_VERSOR[motion]
self.window.move_dx_dy((dx,dy))
But this only gets arrow keys one at a time...
I'd like to distinguish between the combination UP+LEFT
and UP, then LEFT...
Hope I made myself clear
Manu
|
[
"Try pyglet.window.key.KeyStateHandler:\nimport pyglet\n\nkey = pyglet.window.key\n\nwin = pyglet.window.Window()\nkeyboard = key.KeyStateHandler()\nwin.push_handlers(keyboard)\n\nprint keyboard[key.UP] and keyboard[key.LEFT]\n\n"
] |
[
5
] |
[] |
[] |
[
"keyboard",
"pyglet",
"python"
] |
stackoverflow_0001280616_keyboard_pyglet_python.txt
|
Q:
What's the regex for removing dots in acronyms but not in domain names?
I want to remove dots in acronyms but not in domain names in a python string. For example,
I want the string
'a.b.c. [email protected] http://www.test.com'
to become
'abc [email protected] http://www.test.com'
The closest regex I made so far is
re.sub('(?:\s|\A).{1}\.',lambda s: s.group()[0:2], s)
which results to
'ab.c. [email protected] http://www.test.com'
It seems that for the above regex to work, I need to change the regex to
(?:\s|\A|\G).{1}\.
but there is no end of match marker (\G) in python.
EDIT: As I have mentioned in my comment, the strings have no specific formatting. These strings contain informal human conversations and so may contain zero, one or several acronyms or domain names. A few errors is fine by me if it would save me from coding a "real" parser.
A:
If your data is always formatted like this then why not split your data into 3 parts by splitting on the space.
Then it's pretty trivial to remove the periods from the first element and use join to remerge the parts.
A:
I suggest you split the string at '@' (or whatever character makes sense), do the substitution on the first part, then put the string back together. I think that will show the intent of the code better than a complex regexp. Something like this, perhaps:
string='a.b.c. [email protected] http://www.test.com'
left, rest = string.split("@",1)
left = left.replace(".","")
result="%s@%s" % (left, rest)
A:
You could simply remove DOTS that don't have two [a-z] letters (or more) ahead of them:
\.(?![a-zA-Z]{2})
But that will of course also remove the first DOT from the following address:
[email protected]
You could fix that by doing:
\.(?![a-zA-Z]{2}|[^\s@]*+@)
but I'm sure there will be many more such corner cases.
A:
The following worked for me (with thanks to Bart for his answer):
re.sub('\.(?!(\S[^. ])|\d)', '', s)
This will not remove a dot if it is the first character in a word or acronym.
A:
A non-regex way:
>>> S = 'a.b.c. [email protected] http://www.test.com'
>>> ' '.join(w if '@' in w or ':' in w else w.replace('.', '') for w in S.split())
'abc [email protected] http://www.test.com'
(Requires spaces to split on, though - so if you had something like commas with no spaces it could miss some.)
A:
Not as elegant as a simple re.sub(), but try this:
import re
s='a.b.c. [email protected] http://www.test.com'
m=re.search('(.*?)(([a-zA-Z]\.){2,})(.*)', s)
if m:
replacement=''.join(m.group(2).split('.'))
s=m.group(1)+replacement+m.group(4)
print s
It assumes that there's no more than one acronym per string, but you could always run it repeatedly.
|
What's the regex for removing dots in acronyms but not in domain names?
|
I want to remove dots in acronyms but not in domain names in a python string. For example,
I want the string
'a.b.c. [email protected] http://www.test.com'
to become
'abc [email protected] http://www.test.com'
The closest regex I made so far is
re.sub('(?:\s|\A).{1}\.',lambda s: s.group()[0:2], s)
which results to
'ab.c. [email protected] http://www.test.com'
It seems that for the above regex to work, I need to change the regex to
(?:\s|\A|\G).{1}\.
but there is no end of match marker (\G) in python.
EDIT: As I have mentioned in my comment, the strings have no specific formatting. These strings contain informal human conversations and so may contain zero, one or several acronyms or domain names. A few errors is fine by me if it would save me from coding a "real" parser.
|
[
"If your data is always formatted like this then why not split your data into 3 parts by splitting on the space.\nThen it's pretty trivial to remove the periods from the first element and use join to remerge the parts.\n",
"I suggest you split the string at '@' (or whatever character makes sense), do the substitution on the first part, then put the string back together. I think that will show the intent of the code better than a complex regexp. Something like this, perhaps:\nstring='a.b.c. [email protected] http://www.test.com'\nleft, rest = string.split(\"@\",1)\nleft = left.replace(\".\",\"\")\nresult=\"%s@%s\" % (left, rest)\n\n",
"You could simply remove DOTS that don't have two [a-z] letters (or more) ahead of them:\n\\.(?![a-zA-Z]{2})\n\nBut that will of course also remove the first DOT from the following address:\[email protected]\nYou could fix that by doing:\n\\.(?![a-zA-Z]{2}|[^\\s@]*+@)\n\nbut I'm sure there will be many more such corner cases.\n",
"The following worked for me (with thanks to Bart for his answer):\nre.sub('\\.(?!(\\S[^. ])|\\d)', '', s)\n\nThis will not remove a dot if it is the first character in a word or acronym. \n",
"A non-regex way:\n>>> S = 'a.b.c. [email protected] http://www.test.com'\n>>> ' '.join(w if '@' in w or ':' in w else w.replace('.', '') for w in S.split())\n'abc [email protected] http://www.test.com'\n\n(Requires spaces to split on, though - so if you had something like commas with no spaces it could miss some.)\n",
"Not as elegant as a simple re.sub(), but try this:\nimport re\n\ns='a.b.c. [email protected] http://www.test.com'\nm=re.search('(.*?)(([a-zA-Z]\\.){2,})(.*)', s)\n\nif m:\n replacement=''.join(m.group(2).split('.'))\n s=m.group(1)+replacement+m.group(4)\n\nprint s\n\nIt assumes that there's no more than one acronym per string, but you could always run it repeatedly.\n"
] |
[
5,
2,
2,
1,
1,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001279110_python_regex.txt
|
Q:
How can I list the methods in a Python 2.5 module?
I'm trying to use a Python library written in C that has no documentation of any kind. I want to use introspection to at least see what methods and classes are in the modules. Does somebody have a function or library I can use to list the functions (with argument lists) and classes (with methods and member variables) within a module?
I found this article about Python introspection, but I'm pretty sure it doesn't apply to Python 2.5. Thanks for the help.
A:
Here are some things you can do at least:
import module
print dir(module) # Find functions of interest.
# For each function of interest:
help(module.interesting_function)
print module.interesting_function.func_defaults
A:
Mark Pilgrim's chapter 4, which you mention, does actually apply just fine to Python 2.5 (and any other recent 2.* version, thanks to backwards compatibility). Mark doesn't mention help, but I see other answers do.
One key bit that nobody (including Mark;-) seems to have mentioned is inspect, an excellent module in Python's standard library that really helps with advanced introspection.
A:
Just this is pretty good too:
import module
help(module)
It will print the docstring for the module, then list the contents of the module, printing their docstrings too.
A:
The dir() functions shows all members a module has.
|
How can I list the methods in a Python 2.5 module?
|
I'm trying to use a Python library written in C that has no documentation of any kind. I want to use introspection to at least see what methods and classes are in the modules. Does somebody have a function or library I can use to list the functions (with argument lists) and classes (with methods and member variables) within a module?
I found this article about Python introspection, but I'm pretty sure it doesn't apply to Python 2.5. Thanks for the help.
|
[
"Here are some things you can do at least:\nimport module\n\nprint dir(module) # Find functions of interest.\n\n# For each function of interest:\nhelp(module.interesting_function)\nprint module.interesting_function.func_defaults\n\n",
"Mark Pilgrim's chapter 4, which you mention, does actually apply just fine to Python 2.5 (and any other recent 2.* version, thanks to backwards compatibility). Mark doesn't mention help, but I see other answers do.\nOne key bit that nobody (including Mark;-) seems to have mentioned is inspect, an excellent module in Python's standard library that really helps with advanced introspection.\n",
"Just this is pretty good too:\nimport module\nhelp(module)\n\nIt will print the docstring for the module, then list the contents of the module, printing their docstrings too.\n",
"The dir() functions shows all members a module has.\n"
] |
[
58,
12,
8,
4
] |
[] |
[] |
[
"introspection",
"python",
"python_2.5"
] |
stackoverflow_0001280787_introspection_python_python_2.5.txt
|
Q:
In python is there an easier way to write 6 nested for loops?
This problem has been getting at me for a while now. Is there an easier way to write nested for loops in python? For example if my code went something like this:
for y in range(3):
for x in range(3):
do_something()
for y1 in range(3):
for x1 in range(3):
do_something_else()
would there be an easier way to do this? I know that this code works but when you indent instead of using 2 spaces, like me, it can get to be a problem.
Oh in the example there were only 4 nested for loops to make things easier.
A:
If you're frequently iterating over a Cartesian product like in your example, you might want to investigate Python 2.6's itertools.product -- or write your own if you're in an earlier Python.
from itertools import product
for y, x in product(range(3), repeat=2):
do_something()
for y1, x1 in product(range(3), repeat=2):
do_something_else()
A:
This is fairly common when looping over multidimensional spaces. My solution is:
xy_grid = [(x, y) for x in range(3) for y in range(3)]
for x, y in xy_grid:
# do something
for x1, y1 in xy_grid:
# do something else
A:
When faced with that sort of program logic, I would probably break up the sequence of loops into two or more separate functions.
Another technique in Python is to use list comprehensions where possible, instead of a loop.
A:
Assuming each loop has some sort of independent meaning, break them out into named functions:
def do_tigers():
for x in range(3):
print something
def do_lions():
do_lionesses()
for x in range(3):
do_tigers()
def do_penguins():
for x in range(3):
do_lions()
..etc.
I could perhaps have chosen better names. 8-)
A:
Technically, you could use itertools.product to get a cartesian product of N sequences, and iterate over that:
for y, x, y1, x1 in itertools.product(range(3), repeat=4):
do_something_else()
But I don't think that actually wins you anything readability-wise.
A:
Python iterators, and generators in particular, exist exactly to allow the nice refactoring of otherwise-complicated loops. Of course, it's hard to get an abstraction out from a simple example, but assuming the 3 needs to be a parameter (maybe the whole range(3) should be?), and the two functions you're calling need some parameters that are loop variables, you could refactor the code:
for y in range(3):
for x in range(3):
do_something(x, y)
for y1 in range(3):
for x1 in range(3):
do_something_else(x, y, x1, y1)
into, e.g.:
def nestloop(n, *funcs):
head = funcs[0]
tail = funcs[1:]
for y in range(n):
for x in range(n):
yield head, x, y
if tail:
for subtup in nestloop(n, *tail):
yield subtup[:1] + (x, y) + subtup[1:]
for funcandargs in nestloop(3, do_something, do_something_else):
funcandargs[0](*funcandargs[1:])
The exact kind of refactoring will no doubt need to be tweaked for your exact purposes, but the general point that iterators (and usually in fact just simple generators) afford very nice refactorings of loops remains -- all the looping logic goes inside the generator, and the application-level code is left with simple for loops and actual application-relevant processing of the items yielded in the for loops.
A:
My personal argument would be that you're likely doing something wrong if you have 6 nested loops...
That said, functional decomposition is what you're looking for. Refactor so some of the loops happen in seperate function calls, then call those functions.
A:
From your code it looks like you want to perform an operation with every possible pair of points where x and y are in the range 0..2.
To do that:
for x1,y1,x2,y2 in itertools.product(range(3), repeat=4):
do_something_with_two_points(x1,y1,2,y2)
The operation do_something_with_two_points will be called 81 times - once for every possible combination of points.
A:
Have you looked into List Comprehensions?
Something like:
[do_something() for x in range(3) for y in range(3)]
A:
That way looks pretty straightforward and easy. Are you are saying you want to generalize to multiple layers of loops.... can you give a real-life example?
Another option I could think of would be to use a function to generate the parameters and then just apply them in a loop
def generate_params(n):
return itertools.product(range(n), range(n))
for x,y in generate_params(3):
do_something()
A:
you can also use the map() function
|
In python is there an easier way to write 6 nested for loops?
|
This problem has been getting at me for a while now. Is there an easier way to write nested for loops in python? For example if my code went something like this:
for y in range(3):
for x in range(3):
do_something()
for y1 in range(3):
for x1 in range(3):
do_something_else()
would there be an easier way to do this? I know that this code works but when you indent instead of using 2 spaces, like me, it can get to be a problem.
Oh in the example there were only 4 nested for loops to make things easier.
|
[
"If you're frequently iterating over a Cartesian product like in your example, you might want to investigate Python 2.6's itertools.product -- or write your own if you're in an earlier Python.\nfrom itertools import product\nfor y, x in product(range(3), repeat=2):\n do_something()\n for y1, x1 in product(range(3), repeat=2):\n do_something_else()\n\n",
"This is fairly common when looping over multidimensional spaces. My solution is:\nxy_grid = [(x, y) for x in range(3) for y in range(3)]\n\nfor x, y in xy_grid:\n # do something\n for x1, y1 in xy_grid:\n # do something else\n\n",
"When faced with that sort of program logic, I would probably break up the sequence of loops into two or more separate functions.\nAnother technique in Python is to use list comprehensions where possible, instead of a loop.\n",
"Assuming each loop has some sort of independent meaning, break them out into named functions:\ndef do_tigers():\n for x in range(3):\n print something\n\ndef do_lions():\n do_lionesses()\n for x in range(3):\n do_tigers()\n\ndef do_penguins():\n for x in range(3):\n do_lions()\n\n..etc.\n\nI could perhaps have chosen better names. 8-)\n",
"Technically, you could use itertools.product to get a cartesian product of N sequences, and iterate over that:\n for y, x, y1, x1 in itertools.product(range(3), repeat=4):\n do_something_else()\n\nBut I don't think that actually wins you anything readability-wise.\n",
"Python iterators, and generators in particular, exist exactly to allow the nice refactoring of otherwise-complicated loops. Of course, it's hard to get an abstraction out from a simple example, but assuming the 3 needs to be a parameter (maybe the whole range(3) should be?), and the two functions you're calling need some parameters that are loop variables, you could refactor the code:\n for y in range(3):\n for x in range(3):\n do_something(x, y)\n for y1 in range(3):\n for x1 in range(3):\n do_something_else(x, y, x1, y1)\n\ninto, e.g.:\ndef nestloop(n, *funcs):\n head = funcs[0]\n tail = funcs[1:]\n for y in range(n):\n for x in range(n):\n yield head, x, y\n if tail:\n for subtup in nestloop(n, *tail):\n yield subtup[:1] + (x, y) + subtup[1:]\n\nfor funcandargs in nestloop(3, do_something, do_something_else):\n funcandargs[0](*funcandargs[1:])\n\nThe exact kind of refactoring will no doubt need to be tweaked for your exact purposes, but the general point that iterators (and usually in fact just simple generators) afford very nice refactorings of loops remains -- all the looping logic goes inside the generator, and the application-level code is left with simple for loops and actual application-relevant processing of the items yielded in the for loops.\n",
"My personal argument would be that you're likely doing something wrong if you have 6 nested loops...\nThat said, functional decomposition is what you're looking for. Refactor so some of the loops happen in seperate function calls, then call those functions.\n",
"From your code it looks like you want to perform an operation with every possible pair of points where x and y are in the range 0..2.\nTo do that:\nfor x1,y1,x2,y2 in itertools.product(range(3), repeat=4):\n do_something_with_two_points(x1,y1,2,y2)\n\nThe operation do_something_with_two_points will be called 81 times - once for every possible combination of points. \n",
"Have you looked into List Comprehensions?\nSomething like:\n[do_something() for x in range(3) for y in range(3)]\n\n",
"That way looks pretty straightforward and easy. Are you are saying you want to generalize to multiple layers of loops.... can you give a real-life example?\nAnother option I could think of would be to use a function to generate the parameters and then just apply them in a loop\ndef generate_params(n):\n return itertools.product(range(n), range(n))\n\nfor x,y in generate_params(3):\n do_something()\n\n",
"you can also use the map() function \n"
] |
[
60,
14,
10,
8,
6,
4,
3,
3,
2,
2,
1
] |
[] |
[] |
[
"for_loop",
"nested_loops",
"python"
] |
stackoverflow_0001280667_for_loop_nested_loops_python.txt
|
Q:
wxPython: Items in BoxSizer don't expand horizontally, only vertically
I have several buttons in various sizers and they expand in the way that I want them to. However, when I add the parent to a new wx.BoxSizer that is used to add a border around all the elements in the frame, the sizer that has been added functions correctly vertically, but not horizontally.
The following code demonstrates the problem:
#! /usr/bin/env python
import wx
import webbrowser
class App(wx.App):
def OnInit(self):
frame = MainFrame()
frame.Show()
self.SetTopWindow(frame)
return True
class MainFrame(wx.Frame):
title = 'Title'
def __init__(self):
wx.Frame.__init__(self, None, -1, self.title)
panel = wx.Panel(self)
#icon = wx.Icon('icon.png', wx.BITMAP_TYPE_PNG)
#self.SetIcon(icon)
sizer = wx.FlexGridSizer(rows=2, cols=1, vgap=10, hgap=10)
button1 = wx.Button(panel, -1, 'BUTTON')
sizer.Add(button1, 0, wx.EXPAND)
buttonSizer = wx.FlexGridSizer(rows=1, cols=4, vgap=10, hgap=5)
buttonDelete = wx.Button(panel, -1, 'Delete')
buttonSizer.Add(buttonDelete, 0, 0)
buttonEdit = wx.Button(panel, -1, 'Edit')
buttonSizer.Add(buttonEdit, 0, 0)
buttonNew = wx.Button(panel, -1, 'New')
buttonSizer.Add(buttonNew, 0, 0)
buttonSizer.AddGrowableCol(0, 0)
sizer.Add(buttonSizer, 0, wx.EXPAND|wx.HORIZONTAL)
sizer.AddGrowableCol(0, 0)
sizer.AddGrowableRow(0, 0)
mainSizer = wx.BoxSizer(wx.EXPAND)
mainSizer.Add(sizer, 0, wx.EXPAND|wx.ALL, 10)
#panel.SetSizerAndFit(sizer)
#sizer.SetSizeHints(self)
panel.SetSizerAndFit(mainSizer)
mainSizer.SetSizeHints(self)
if __name__ == '__main__':
app = App(False)
app.MainLoop()
Commenting out lines 57 and 58 and uncommenting lines 55 and 56 removes the extra BoxSizer and shows how I expect everything to function (without the whitespace of course).
I am completely stuck with this problem and still have no clue as to how to fix it.
A:
First of all, you're passing some flags incorrectly. BoxSizer takes wxHORIZONTAL or wxVERTICAL, not wxEXPAND. sizer.Add does not take wxHORIZONTAL.
If you have a VERTICAL BoxSizer, wxEXPAND will make the control fill horizontally, while a proportion of 1 or more (second argument to Add) will make the control fill vertically. It's the opposite for HORIZONTAL BoxSizers.
sizer = wx.BoxSizer(wxVERTICAL)
sizer.Add(widget1, 0, wxEXPAND)
sizer.Add(widget2, 1)
widget1 will expand horizontally. widget2 will expand vertically.
If you put a sizer in another sizer, you need to be sure to have its proportion and EXPAND flags set so that its insides will grow how you want them to.
I'll leave the rest to you.
|
wxPython: Items in BoxSizer don't expand horizontally, only vertically
|
I have several buttons in various sizers and they expand in the way that I want them to. However, when I add the parent to a new wx.BoxSizer that is used to add a border around all the elements in the frame, the sizer that has been added functions correctly vertically, but not horizontally.
The following code demonstrates the problem:
#! /usr/bin/env python
import wx
import webbrowser
class App(wx.App):
def OnInit(self):
frame = MainFrame()
frame.Show()
self.SetTopWindow(frame)
return True
class MainFrame(wx.Frame):
title = 'Title'
def __init__(self):
wx.Frame.__init__(self, None, -1, self.title)
panel = wx.Panel(self)
#icon = wx.Icon('icon.png', wx.BITMAP_TYPE_PNG)
#self.SetIcon(icon)
sizer = wx.FlexGridSizer(rows=2, cols=1, vgap=10, hgap=10)
button1 = wx.Button(panel, -1, 'BUTTON')
sizer.Add(button1, 0, wx.EXPAND)
buttonSizer = wx.FlexGridSizer(rows=1, cols=4, vgap=10, hgap=5)
buttonDelete = wx.Button(panel, -1, 'Delete')
buttonSizer.Add(buttonDelete, 0, 0)
buttonEdit = wx.Button(panel, -1, 'Edit')
buttonSizer.Add(buttonEdit, 0, 0)
buttonNew = wx.Button(panel, -1, 'New')
buttonSizer.Add(buttonNew, 0, 0)
buttonSizer.AddGrowableCol(0, 0)
sizer.Add(buttonSizer, 0, wx.EXPAND|wx.HORIZONTAL)
sizer.AddGrowableCol(0, 0)
sizer.AddGrowableRow(0, 0)
mainSizer = wx.BoxSizer(wx.EXPAND)
mainSizer.Add(sizer, 0, wx.EXPAND|wx.ALL, 10)
#panel.SetSizerAndFit(sizer)
#sizer.SetSizeHints(self)
panel.SetSizerAndFit(mainSizer)
mainSizer.SetSizeHints(self)
if __name__ == '__main__':
app = App(False)
app.MainLoop()
Commenting out lines 57 and 58 and uncommenting lines 55 and 56 removes the extra BoxSizer and shows how I expect everything to function (without the whitespace of course).
I am completely stuck with this problem and still have no clue as to how to fix it.
|
[
"First of all, you're passing some flags incorrectly. BoxSizer takes wxHORIZONTAL or wxVERTICAL, not wxEXPAND. sizer.Add does not take wxHORIZONTAL.\nIf you have a VERTICAL BoxSizer, wxEXPAND will make the control fill horizontally, while a proportion of 1 or more (second argument to Add) will make the control fill vertically. It's the opposite for HORIZONTAL BoxSizers.\nsizer = wx.BoxSizer(wxVERTICAL)\nsizer.Add(widget1, 0, wxEXPAND)\nsizer.Add(widget2, 1)\n\nwidget1 will expand horizontally. widget2 will expand vertically.\nIf you put a sizer in another sizer, you need to be sure to have its proportion and EXPAND flags set so that its insides will grow how you want them to.\nI'll leave the rest to you.\n"
] |
[
28
] |
[] |
[] |
[
"python",
"wxpython",
"wxwidgets"
] |
stackoverflow_0001280600_python_wxpython_wxwidgets.txt
|
Q:
Why can't I set a global variable in Python?
How do global variables work in Python? I know global variables are evil, I'm just experimenting.
This does not work in python:
G = None
def foo():
if G is None:
G = 1
foo()
I get an error:
UnboundLocalError: local variable 'G' referenced before assignment
What am I doing wrong?
A:
You need the global statement:
def foo():
global G
if G is None:
G = 1
In Python, variables that you assign to become local variables by default. You need to use global to declare them as global variables. On the other hand, variables that you refer to but do not assign to do not automatically become local variables. These variables refer to the closest variable in an enclosing scope.
Python 3.x introduces the nonlocal statement which is analogous to global, but binds the variable to its nearest enclosing scope. For example:
def foo():
x = 5
def bar():
nonlocal x
x = x * 2
bar()
return x
This function returns 10 when called.
A:
You need to declare G as global, but as for why: whenever you refer to a variable inside a function, if you set the variable anywhere in that function, Python assumes that it's a local variable. So if a local variable by that name doesn't exist at that point in the code, you'll get the UnboundLocalError. If you actually meant to refer to a global variable, as in your question, you need the global keyword to tell Python that's what you meant.
If you don't assign to the variable anywhere in the function, but only access its value, Python will use the global variable by that name if one exists. So you could do:
G = None
def foo():
if G is None:
print G
foo()
This code prints None and does not throw the UnboundLocalError.
A:
You still have to declare G as global, from within that function:
G = None
def foo():
global G
if G is None:
G = 1
foo()
print G
which simply outputs
1
A:
Define G as global in the function like this:
#!/usr/bin/python
G = None;
def foo():
global G
if G is None:
G = 1;
print G;
foo();
The above python prints 1.
Using global variables like this is bad practice because: http://c2.com/cgi/wiki?GlobalVariablesAreBad
|
Why can't I set a global variable in Python?
|
How do global variables work in Python? I know global variables are evil, I'm just experimenting.
This does not work in python:
G = None
def foo():
if G is None:
G = 1
foo()
I get an error:
UnboundLocalError: local variable 'G' referenced before assignment
What am I doing wrong?
|
[
"You need the global statement:\ndef foo():\n global G\n if G is None:\n G = 1\n\nIn Python, variables that you assign to become local variables by default. You need to use global to declare them as global variables. On the other hand, variables that you refer to but do not assign to do not automatically become local variables. These variables refer to the closest variable in an enclosing scope.\nPython 3.x introduces the nonlocal statement which is analogous to global, but binds the variable to its nearest enclosing scope. For example:\ndef foo():\n x = 5\n def bar():\n nonlocal x\n x = x * 2\n bar()\n return x\n\nThis function returns 10 when called.\n",
"You need to declare G as global, but as for why: whenever you refer to a variable inside a function, if you set the variable anywhere in that function, Python assumes that it's a local variable. So if a local variable by that name doesn't exist at that point in the code, you'll get the UnboundLocalError. If you actually meant to refer to a global variable, as in your question, you need the global keyword to tell Python that's what you meant.\nIf you don't assign to the variable anywhere in the function, but only access its value, Python will use the global variable by that name if one exists. So you could do:\nG = None\n\ndef foo():\n if G is None:\n print G\n\nfoo()\n\nThis code prints None and does not throw the UnboundLocalError.\n",
"You still have to declare G as global, from within that function:\nG = None\n\ndef foo():\n global G\n if G is None:\n G = 1\n\nfoo()\nprint G\n\nwhich simply outputs\n1\n\n",
"Define G as global in the function like this:\n#!/usr/bin/python\n\nG = None;\ndef foo():\n global G\n if G is None:\n G = 1;\n print G;\n\nfoo();\n\nThe above python prints 1. \nUsing global variables like this is bad practice because: http://c2.com/cgi/wiki?GlobalVariablesAreBad\n"
] |
[
71,
10,
9,
2
] |
[] |
[] |
[
"global_variables",
"python"
] |
stackoverflow_0001281184_global_variables_python.txt
|
Q:
Accessing elements with offsets in Python's for .. in loops
I've been mucking around a bit with Python, and I've gathered that it's usually better (or 'pythonic') to use
for x in SomeArray:
rather than the more C-style
for i in range(0, len(SomeArray)):
I do see the benefits in this, mainly cleaner code, and the ability to use the nice map() and related functions. However, I am quite often faced with the situation where I would like to simultaneously access elements of varying offsets in the array. For example, I might want to add the current element to the element two steps behind it. Is there a way to do this without resorting to explicit indices?
A:
The way to do this in Python is:
for i, x in enumerate(SomeArray):
print i, x
The enumerate generator produces a sequence of 2-tuples, each containing the array index and the element.
A:
List indexing and zip() are your friends.
Here's my answer for your more specific question:
I might want to add the current element to the element two steps behind it. Is there a way to do this without resorting to explicit indices?
arr = range(10)
[i+j for i,j in zip(arr[:-2], arr[2:])]
You can also use the module numpy if you intend to work on numerical arrays. For example, the above code can be more elegantly written as:
import numpy
narr = numpy.arange(10)
narr[:-2] + narr[2:]
Adding the nth element to the (n-2)th element is equivalent to adding the mth element to the (m+2) element (for the mathematically inclined, we performed the substitution n->m+2). The range of n is [2, len(arr)) and the range of m is [0, len(arr)-2). Note the brackets and parenthesis. The elements from 0 to len(arr)-3 (you exclude the last two elements) is indexed as [:-2] while elements from 2 to len(arr)-1 (you exclude the first two elements) is indexed as [2:].
I assume that you already know list comprehensions.
|
Accessing elements with offsets in Python's for .. in loops
|
I've been mucking around a bit with Python, and I've gathered that it's usually better (or 'pythonic') to use
for x in SomeArray:
rather than the more C-style
for i in range(0, len(SomeArray)):
I do see the benefits in this, mainly cleaner code, and the ability to use the nice map() and related functions. However, I am quite often faced with the situation where I would like to simultaneously access elements of varying offsets in the array. For example, I might want to add the current element to the element two steps behind it. Is there a way to do this without resorting to explicit indices?
|
[
"The way to do this in Python is:\nfor i, x in enumerate(SomeArray):\n print i, x\n\nThe enumerate generator produces a sequence of 2-tuples, each containing the array index and the element.\n",
"List indexing and zip() are your friends.\nHere's my answer for your more specific question:\n\nI might want to add the current element to the element two steps behind it. Is there a way to do this without resorting to explicit indices?\n\narr = range(10)\n[i+j for i,j in zip(arr[:-2], arr[2:])]\n\nYou can also use the module numpy if you intend to work on numerical arrays. For example, the above code can be more elegantly written as:\nimport numpy\nnarr = numpy.arange(10)\nnarr[:-2] + narr[2:]\n\nAdding the nth element to the (n-2)th element is equivalent to adding the mth element to the (m+2) element (for the mathematically inclined, we performed the substitution n->m+2). The range of n is [2, len(arr)) and the range of m is [0, len(arr)-2). Note the brackets and parenthesis. The elements from 0 to len(arr)-3 (you exclude the last two elements) is indexed as [:-2] while elements from 2 to len(arr)-1 (you exclude the first two elements) is indexed as [2:].\nI assume that you already know list comprehensions.\n"
] |
[
15,
6
] |
[] |
[] |
[
"loops",
"python"
] |
stackoverflow_0001281752_loops_python.txt
|
Q:
Copying files to directories as specified in a file list with python
I have a bunch of files in a single directory that I would like to organize in sub-directories.
This directory structure (which file would go in which directory) is specified in a file list that looks like this:
Directory: Music\
-> 01-some_song1.mp3
-> 02-some_song2.mp3
-> 03-some_song3.mp3
Directory: Images\
-> 01-some_image1.jpg
-> 02-some_image2.jpg
......................
I was thinking of extracting the data (directory name and file name) and store it in a dictionary that would look like this:
dictionary = {'Music': (01-some_song1.mp3, 02-some_song2.mp3,
03-some_song3.mp3),
'Images': (01-some_image1.jpg, 02-some_image2.jpg),
......................................................
}
After that I would copy/move the files in their respective directories.
I already extracted the directory names and created the empty dirs.
For the dictionary values I tried to get a list of lists by doing the following:
def get_values(file):
values = []
tmp = []
pattern = re.compile(r'^-> (.+?)$')
for line in file:
if line.strip().startswith('->'):
match = re.search(pattern, line.strip())
if match:
tmp.append(match.group(1))
elif line.strip().startswith('Directory'):
values.append(tmp)
del tmp[:]
return values
This doesn't seem to work. Each list from the values list contains the same 4 file names over and over again.
What am I doing wrong?
I would also like to know what are the other ways of doing this whole thing? I'm sure there's a better/simpler/cleaner way.
A:
I think that the cause is that you are reusing always the same list.
del tmp[:] clears the list and doesn't create a new instance. In your case, you need to create a new list by calling tmp = []
Following fix should work (I didn't test it)
def get_values(file):
values = []
tmp = []
pattern = re.compile(r'^-> (.+?)$')
for line in file:
if line.strip().startswith('->'):
match = re.search(pattern, line.strip())
if match:
tmp.append(match.group(1))
elif line.strip().startswith('Directory'):
values.append(tmp)
tmp = []
return values
A:
no need to use regular expression
d = {}
for line in open("file"):
line=line.strip()
if line.endswith("\\"):
directory = line.split(":")[-1].strip().replace("\\","")
d.setdefault(directory,[])
if line.startswith("->"):
song=line.split(" ")[-1]
d[directory].append(song)
print d
output
# python python.py
{'Images': ['01-some_image1.jpg', '02-some_image2.jpg'], 'Music': ['01-some_song1.mp3', '02-some_song2.mp3', '03-some_song3.mp3']}
A:
If you use collections.defaultdict(list), you get a list that dictionary whose elements are lists. If the key is not found, it is added with a value of empty list, so you can start appending to the list immediately. That's what this line does:
d[dir].append(match.group(1))
It creates the directory name as a key if it does not exist and appends the file name found to the list.
BTW, if you are having problems getting your regexes to work, try creating them with the debug flag. I can't remember the symbolic name, but the number is 128. So if you do this:
file_regex = re.compile(r'^-> (.+?)$', 128)
You get this additional output:
at at_beginning
literal 45
literal 62
literal 32
subpattern 1
min_repeat 1 65535
any None
at at_end
And you can see that there is a start line match plus '-> ' (for 45 62 32) and then a repeated any pattern and end of line match. Very useful for debugging.
Code:
from __future__ import with_statement
import re
import collections
def get_values(file):
d = collections.defaultdict(list)
dir = ""
dir_regex = re.compile(r'^Directory: (.+?)\\$')
file_regex = re.compile(r'\-\> (.+?)$')
with open(file) as f:
for line in f:
line = line.strip()
match = dir_regex.search(line)
if match:
dir = match.group(1)
else:
match = file_regex.search(line)
if match:
d[dir].append(match.group(1))
return d
if __name__ == '__main__':
d = get_values('test_file')
for k, v in d.items():
print k, v
Result:
Images ['01-some_image1.jpg', '02-some_image2.jpg']
Music ['01-some_song1.mp3', '02-some_song2.mp3', '03-some_song3.mp3']
|
Copying files to directories as specified in a file list with python
|
I have a bunch of files in a single directory that I would like to organize in sub-directories.
This directory structure (which file would go in which directory) is specified in a file list that looks like this:
Directory: Music\
-> 01-some_song1.mp3
-> 02-some_song2.mp3
-> 03-some_song3.mp3
Directory: Images\
-> 01-some_image1.jpg
-> 02-some_image2.jpg
......................
I was thinking of extracting the data (directory name and file name) and store it in a dictionary that would look like this:
dictionary = {'Music': (01-some_song1.mp3, 02-some_song2.mp3,
03-some_song3.mp3),
'Images': (01-some_image1.jpg, 02-some_image2.jpg),
......................................................
}
After that I would copy/move the files in their respective directories.
I already extracted the directory names and created the empty dirs.
For the dictionary values I tried to get a list of lists by doing the following:
def get_values(file):
values = []
tmp = []
pattern = re.compile(r'^-> (.+?)$')
for line in file:
if line.strip().startswith('->'):
match = re.search(pattern, line.strip())
if match:
tmp.append(match.group(1))
elif line.strip().startswith('Directory'):
values.append(tmp)
del tmp[:]
return values
This doesn't seem to work. Each list from the values list contains the same 4 file names over and over again.
What am I doing wrong?
I would also like to know what are the other ways of doing this whole thing? I'm sure there's a better/simpler/cleaner way.
|
[
"I think that the cause is that you are reusing always the same list. \ndel tmp[:] clears the list and doesn't create a new instance. In your case, you need to create a new list by calling tmp = []\nFollowing fix should work (I didn't test it)\n\ndef get_values(file):\n values = []\n tmp = []\n pattern = re.compile(r'^-> (.+?)$')\n for line in file:\n if line.strip().startswith('->'):\n match = re.search(pattern, line.strip())\n if match:\n tmp.append(match.group(1))\n elif line.strip().startswith('Directory'):\n values.append(tmp)\n tmp = []\n return values\n\n",
"no need to use regular expression \nd = {}\nfor line in open(\"file\"):\n line=line.strip()\n if line.endswith(\"\\\\\"):\n directory = line.split(\":\")[-1].strip().replace(\"\\\\\",\"\")\n d.setdefault(directory,[])\n if line.startswith(\"->\"):\n song=line.split(\" \")[-1]\n d[directory].append(song)\nprint d\n\noutput\n# python python.py\n{'Images': ['01-some_image1.jpg', '02-some_image2.jpg'], 'Music': ['01-some_song1.mp3', '02-some_song2.mp3', '03-some_song3.mp3']}\n\n",
"If you use collections.defaultdict(list), you get a list that dictionary whose elements are lists. If the key is not found, it is added with a value of empty list, so you can start appending to the list immediately. That's what this line does:\nd[dir].append(match.group(1))\n\nIt creates the directory name as a key if it does not exist and appends the file name found to the list.\nBTW, if you are having problems getting your regexes to work, try creating them with the debug flag. I can't remember the symbolic name, but the number is 128. So if you do this:\nfile_regex = re.compile(r'^-> (.+?)$', 128)\n\nYou get this additional output:\nat at_beginning\nliteral 45\nliteral 62\nliteral 32\nsubpattern 1\n min_repeat 1 65535\n any None\nat at_end\n\nAnd you can see that there is a start line match plus '-> ' (for 45 62 32) and then a repeated any pattern and end of line match. Very useful for debugging.\nCode:\nfrom __future__ import with_statement\n\nimport re\nimport collections\n\ndef get_values(file):\n d = collections.defaultdict(list)\n dir = \"\"\n dir_regex = re.compile(r'^Directory: (.+?)\\\\$')\n file_regex = re.compile(r'\\-\\> (.+?)$')\n with open(file) as f:\n for line in f:\n line = line.strip()\n match = dir_regex.search(line)\n if match:\n dir = match.group(1)\n else:\n match = file_regex.search(line)\n if match:\n d[dir].append(match.group(1))\n return d\n\nif __name__ == '__main__':\n d = get_values('test_file')\n for k, v in d.items():\n print k, v\n\nResult:\nImages ['01-some_image1.jpg', '02-some_image2.jpg']\nMusic ['01-some_song1.mp3', '02-some_song2.mp3', '03-some_song3.mp3']\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"copy",
"directory",
"file",
"python"
] |
stackoverflow_0001281944_copy_directory_file_python.txt
|
Q:
Handling both SSL and non-SSL connections when inheriting from httplib.HTTP(s)Connection
I have a class that inherits from httplib.HTTPSConnection.
class MyConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kw):
httplib.HTTPSConnection.__init__(self,*args, **kw)
...
Is it possible to turn off the SSL layer when the class is instantiatied so I can also use it to communicate with non-secure servers?
I my case it is known before initialisation if SSL should be used so another solution would be to try to switch the inheritance from httplib.HTTPSConnection to httplib.HTTPConnection, but I am also not sure how to do this in a sensible way?
A:
Per your last paragraph, in Python you can use something like a factory pattern:
class Foo:
def doit(self):
print "I'm a foo"
class Bar:
def doit(self):
print "I'm a bar"
def MakeClass(isSecure):
if isSecure:
base = Foo
else:
base = Bar
class Quux(base):
def __init__(self):
print "I am derived from", base
return Quux()
MakeClass(True).doit()
MakeClass(False).doit()
outputs:
I am derived from __main__.Foo
I'm a foo
I am derived from __main__.Bar
I'm a bar
A:
As per my comment on @Mark's answer, I like the factory approach he's advocating. However, I wouldn't do it exactly his way, because he makes a new class afresh every time. Rather, this is a nice use case for mixin MI and super, as follows:
class MyConnectionPlugin(object):
def __init__(self, *args, **kw):
super(MyConnectionPlugin, self).__init__(*args, **kw)
# etc etc -- rest of initiatizations, other methods
class SecureConnection(MyConnectionPlugin,
httplib.HTTPSConnection, object):
pass
class PlainConnection(MyConnectionPlugin,
httplib.HTTPConnection, object):
pass
def ConnectionClass(secure):
if secure:
return SecureConnection
else:
return PlainConnection
conn = ConnectionClass(whatever_expression())()
etc.
Now, alternatives ARE possible, since a Python object can change its own __class__, and so forth. However, like shooting flies with a rhino rifle, using excessive force (extremely powerful, deep, and near-magical language features) to solve problems that can be nicely solved with reasonable restraint (the equivalent of a flyswatter), is NOT recommended;-).
Edit: the extra injection of object in the bases is needed only to compensate for the sad fact that in Python 2.* the HTTPConnection class is old-style and therefore doesn't play well with others -- witness...:
>>> import httplib
>>> class Z(object): pass
...
>>> class Y(Z, httplib.HTTPConnection): pass
...
>>> Y.mro()
[<class '__main__.Y'>, <class '__main__.Z'>, <type 'object'>, <class httplib.HTTPConnection at 0x264ae0>]
>>> class X(Z, httplib.HTTPConnection, object): pass
...
>>> X.mro()
[<class '__main__.X'>, <class '__main__.Z'>, <class httplib.HTTPConnection at 0x264ae0>, <type 'object'>]
>>>
The method-resolution order (aka MRO) in class Y (without the further injection of an object base) has object before the class from httplib (so super doesn't do the right thing), but the extra injection jiggles the MRO to compensate. Alas, such care is needed in Python 2.* when dealing with bad old legacy-style classes; fortunately, in Python 3, the legacy style has disappeared and every class "plays well with others" as it should!-)
A:
Apparently, you want to use MyConnection for multiple subsequent connections to different hosts. If so, you shouldn't be inheriting from HTTP(S)Connection at all - that class isn't really meant to be used for multiple connections. Instead, just make MyConnection have a HTTP(S)Connection.
|
Handling both SSL and non-SSL connections when inheriting from httplib.HTTP(s)Connection
|
I have a class that inherits from httplib.HTTPSConnection.
class MyConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kw):
httplib.HTTPSConnection.__init__(self,*args, **kw)
...
Is it possible to turn off the SSL layer when the class is instantiatied so I can also use it to communicate with non-secure servers?
I my case it is known before initialisation if SSL should be used so another solution would be to try to switch the inheritance from httplib.HTTPSConnection to httplib.HTTPConnection, but I am also not sure how to do this in a sensible way?
|
[
"Per your last paragraph, in Python you can use something like a factory pattern:\nclass Foo:\n def doit(self):\n print \"I'm a foo\"\nclass Bar:\n def doit(self):\n print \"I'm a bar\"\n\ndef MakeClass(isSecure):\n if isSecure:\n base = Foo\n else:\n base = Bar\n\n class Quux(base):\n def __init__(self):\n print \"I am derived from\", base\n\n return Quux()\n\nMakeClass(True).doit()\nMakeClass(False).doit()\n\noutputs:\nI am derived from __main__.Foo\nI'm a foo\nI am derived from __main__.Bar\nI'm a bar\n\n",
"As per my comment on @Mark's answer, I like the factory approach he's advocating. However, I wouldn't do it exactly his way, because he makes a new class afresh every time. Rather, this is a nice use case for mixin MI and super, as follows:\nclass MyConnectionPlugin(object):\n def __init__(self, *args, **kw):\n super(MyConnectionPlugin, self).__init__(*args, **kw)\n # etc etc -- rest of initiatizations, other methods\n\nclass SecureConnection(MyConnectionPlugin,\n httplib.HTTPSConnection, object):\n pass\n\nclass PlainConnection(MyConnectionPlugin,\n httplib.HTTPConnection, object):\n pass\n\ndef ConnectionClass(secure):\n if secure:\n return SecureConnection\n else:\n return PlainConnection\n\nconn = ConnectionClass(whatever_expression())()\n\netc.\nNow, alternatives ARE possible, since a Python object can change its own __class__, and so forth. However, like shooting flies with a rhino rifle, using excessive force (extremely powerful, deep, and near-magical language features) to solve problems that can be nicely solved with reasonable restraint (the equivalent of a flyswatter), is NOT recommended;-).\nEdit: the extra injection of object in the bases is needed only to compensate for the sad fact that in Python 2.* the HTTPConnection class is old-style and therefore doesn't play well with others -- witness...:\n>>> import httplib\n>>> class Z(object): pass\n... \n>>> class Y(Z, httplib.HTTPConnection): pass\n... \n>>> Y.mro()\n[<class '__main__.Y'>, <class '__main__.Z'>, <type 'object'>, <class httplib.HTTPConnection at 0x264ae0>]\n>>> class X(Z, httplib.HTTPConnection, object): pass\n... \n>>> X.mro()\n[<class '__main__.X'>, <class '__main__.Z'>, <class httplib.HTTPConnection at 0x264ae0>, <type 'object'>]\n>>> \n\nThe method-resolution order (aka MRO) in class Y (without the further injection of an object base) has object before the class from httplib (so super doesn't do the right thing), but the extra injection jiggles the MRO to compensate. Alas, such care is needed in Python 2.* when dealing with bad old legacy-style classes; fortunately, in Python 3, the legacy style has disappeared and every class \"plays well with others\" as it should!-)\n",
"Apparently, you want to use MyConnection for multiple subsequent connections to different hosts. If so, you shouldn't be inheriting from HTTP(S)Connection at all - that class isn't really meant to be used for multiple connections. Instead, just make MyConnection have a HTTP(S)Connection.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"http",
"https",
"python"
] |
stackoverflow_0001282368_http_https_python.txt
|
Q:
How to prevent Satchmo forms from displaying asterisk after required fields?
I'm customizing my Satchmo store forms and have an icon that appears before any required fields. The problem is, Satchmo seems to want to render a text asterisk after the required fields. I'm using field.label to get this label, should I be using something else?
EDIT: All my form templates are hard coded. I have an inclusion tag that takes a field and wraps it in a standard field template I've developed. My template uses the {{ field.label }} to display the friendly name of the field. It seem the label itself has a single asterisk in it at the end.
A:
What happens if you do the following?
Copy some or all of Satchmo's form templates to a new location and modify them to remove the asterisks
Arrange it so that your copies of those templates are seen before Satchmo's copies (by configuring the template loader settings appropriately, say by placing the app with the copied templates above Satchmo in settings.INSTALLED_APPS)
Update: I'm not able to reproduce your results with a vanilla Satchmo 0.8.1 installation. Can you give some more information? Here's what I did: First, I modified templates/contact/update_form.html, which contains hard-coded asterisks. I could easily remove them by changing the template; they disappeared from the UI. Instead, I left them in but added immediately after, in parentheses, {{ form.field.label }} after each of the fields in a section of the form. This is the result:
The labels here do contain an asterisk - as I mentioned earlier, this is because ContactInfoForm hardcodes this behaviour in its __init__ method. You would have to undo this behaviour, perhaps by using a derived class which removes trailing *s from field labels.
However, I did not find any *s appearing in other required fields. For example, here's a screenshot of the checkout form when I tried submitting without entering required information:
As you can see the credit card number and CCV are required fields but do not appear with an asterisk at the prompt. Nor do the labels have asterisks. So, the problem you are experiencing appears to be something to do with your customisations, but without more information it is difficult to be more helpful.
|
How to prevent Satchmo forms from displaying asterisk after required fields?
|
I'm customizing my Satchmo store forms and have an icon that appears before any required fields. The problem is, Satchmo seems to want to render a text asterisk after the required fields. I'm using field.label to get this label, should I be using something else?
EDIT: All my form templates are hard coded. I have an inclusion tag that takes a field and wraps it in a standard field template I've developed. My template uses the {{ field.label }} to display the friendly name of the field. It seem the label itself has a single asterisk in it at the end.
|
[
"What happens if you do the following?\n\nCopy some or all of Satchmo's form templates to a new location and modify them to remove the asterisks\nArrange it so that your copies of those templates are seen before Satchmo's copies (by configuring the template loader settings appropriately, say by placing the app with the copied templates above Satchmo in settings.INSTALLED_APPS)\n\nUpdate: I'm not able to reproduce your results with a vanilla Satchmo 0.8.1 installation. Can you give some more information? Here's what I did: First, I modified templates/contact/update_form.html, which contains hard-coded asterisks. I could easily remove them by changing the template; they disappeared from the UI. Instead, I left them in but added immediately after, in parentheses, {{ form.field.label }} after each of the fields in a section of the form. This is the result:\n\nThe labels here do contain an asterisk - as I mentioned earlier, this is because ContactInfoForm hardcodes this behaviour in its __init__ method. You would have to undo this behaviour, perhaps by using a derived class which removes trailing *s from field labels.\nHowever, I did not find any *s appearing in other required fields. For example, here's a screenshot of the checkout form when I tried submitting without entering required information:\n\nAs you can see the credit card number and CCV are required fields but do not appear with an asterisk at the prompt. Nor do the labels have asterisks. So, the problem you are experiencing appears to be something to do with your customisations, but without more information it is difficult to be more helpful.\n"
] |
[
1
] |
[] |
[] |
[
"django",
"field",
"forms",
"python",
"satchmo"
] |
stackoverflow_0001267874_django_field_forms_python_satchmo.txt
|
Q:
Explain socket buffers please
I was trying to find examples about socket programming and came upon this script:
http://stacklessexamples.googlecode.com/svn/trunk/examples/networking/mud.py
When reading through this script i found this line:
listenSocket.listen(5)
As i understand it - it reads 5 bytes from the buffer and then does stuff with it...
but what happens if more than 5 bytes were sent by the other end?
in the other place of that script it checks input against 4 commands and sees if there is \r\n in the string. dont commands like "look" plus \r\n make up for more than 5 bytes?
Alan
A:
The following is applicable to sockets in general, but it should help answer your specific question about using sockets from Python.
socket.listen() is used on a server socket to listen for incoming connection requests.
The parameter passed to listen is called the backlog and it means how many connections should the socket accept and put in a pending buffer until you finish your call to accept(). That applies to connections that are waiting to connect to your server socket between the time you have called listen() and the time you have finished a matching call to accept().
So, in your example you're setting the backlog to 5 connections.
Note.. if you set your backlog to 5 connections, the following connections (6th, 7th etc.) will be dropped and the connecting socket will receive an error connecting message (something like a "host actively refused the connection" message)
A:
This might help you understand the code: http://www.amk.ca/python/howto/sockets/
A:
The argument 5 to listenSocket.listen isn't the number of bytes to read or buffer, it's the backlog:
socket.listen(backlog)
Listen for connections made to the
socket. The backlog argument specifies
the maximum number of queued
connections and should be at least 1;
the maximum value is system-dependent
(usually 5).
|
Explain socket buffers please
|
I was trying to find examples about socket programming and came upon this script:
http://stacklessexamples.googlecode.com/svn/trunk/examples/networking/mud.py
When reading through this script i found this line:
listenSocket.listen(5)
As i understand it - it reads 5 bytes from the buffer and then does stuff with it...
but what happens if more than 5 bytes were sent by the other end?
in the other place of that script it checks input against 4 commands and sees if there is \r\n in the string. dont commands like "look" plus \r\n make up for more than 5 bytes?
Alan
|
[
"The following is applicable to sockets in general, but it should help answer your specific question about using sockets from Python.\nsocket.listen() is used on a server socket to listen for incoming connection requests.\nThe parameter passed to listen is called the backlog and it means how many connections should the socket accept and put in a pending buffer until you finish your call to accept(). That applies to connections that are waiting to connect to your server socket between the time you have called listen() and the time you have finished a matching call to accept().\nSo, in your example you're setting the backlog to 5 connections.\nNote.. if you set your backlog to 5 connections, the following connections (6th, 7th etc.) will be dropped and the connecting socket will receive an error connecting message (something like a \"host actively refused the connection\" message)\n",
"This might help you understand the code: http://www.amk.ca/python/howto/sockets/\n",
"The argument 5 to listenSocket.listen isn't the number of bytes to read or buffer, it's the backlog:\n\nsocket.listen(backlog)\nListen for connections made to the\n socket. The backlog argument specifies\n the maximum number of queued\n connections and should be at least 1;\n the maximum value is system-dependent\n (usually 5).\n\n"
] |
[
13,
0,
0
] |
[] |
[] |
[
"python",
"python_stackless",
"sockets",
"stackless"
] |
stackoverflow_0001282656_python_python_stackless_sockets_stackless.txt
|
Q:
What's the Ruby equivalent of Python's os.walk?
Does anyone know if there's an existing module/function inside Ruby to traverse file system directories and files? I'm looking for something similar to Python's os.walk. The closest module I've found is Find but requires some extra work to do the traversal.
The Python code looks like the following:
for root, dirs, files in os.walk('.'):
for name in files:
print name
for name in dirs:
print name
A:
The following will print all files recursively. Then you can use File.directory? to see if the it is a directory or a file.
Dir['**/*'].each { |f| print f }
A:
Find seems pretty simple to me:
require "find"
Find.find('mydir'){|f| puts f}
A:
require 'pathname'
def os_walk(dir)
root = Pathname(dir)
files, dirs = [], []
Pathname(root).find do |path|
unless path == root
dirs << path if path.directory?
files << path if path.file?
end
end
[root, files, dirs]
end
root, files, dirs = os_walk('.')
|
What's the Ruby equivalent of Python's os.walk?
|
Does anyone know if there's an existing module/function inside Ruby to traverse file system directories and files? I'm looking for something similar to Python's os.walk. The closest module I've found is Find but requires some extra work to do the traversal.
The Python code looks like the following:
for root, dirs, files in os.walk('.'):
for name in files:
print name
for name in dirs:
print name
|
[
"The following will print all files recursively. Then you can use File.directory? to see if the it is a directory or a file.\nDir['**/*'].each { |f| print f }\n\n",
"Find seems pretty simple to me:\nrequire \"find\"\nFind.find('mydir'){|f| puts f}\n\n",
"require 'pathname'\n\ndef os_walk(dir)\n root = Pathname(dir)\n files, dirs = [], []\n Pathname(root).find do |path|\n unless path == root\n dirs << path if path.directory?\n files << path if path.file?\n end\n end\n [root, files, dirs]\nend\n\nroot, files, dirs = os_walk('.')\n\n"
] |
[
27,
10,
5
] |
[] |
[] |
[
"python",
"ruby"
] |
stackoverflow_0001281090_python_ruby.txt
|
Q:
aap - python trouble
i'm trying to run aap-application. Version is 1.076 (tried higher). All commands send me an error like:
> Traceback (most recent call last):
> File "/usr/bin/aap", line 10, in
> <module>
> import Main File "/usr/share/aap/Main.py", line 14, in
> <module>
> from DoAddDef import doadddef File "/usr/share/aap/DoAddDef.py",
> line 10, in <module>
> from Action import find_primary_action File
> "/usr/share/aap/Action.py", line 30,
> in <module>
> from Dictlist import listitem2str, str2dictlist, dictlist2str File
> "/usr/share/aap/Dictlist.py", line 18,
> in <module>
> from Process import recipe_error File "/usr/share/aap/Process.py", line
> 13, in <module>
> from Work import setrpstack File "/usr/share/aap/Work.py", line 25, in
> <module>
> from Node import Node File "/usr/share/aap/Node.py", line 10, in
> <module>
> import Filetype File "/usr/share/aap/Filetype.py", line
> 1417
> as = 0
> ^ SyntaxError: invalid syntax
What problem could it be?
A:
Well, as is a reserved word in Python. So, that can't be used in FileType.py as a variable name.
Try updating your installation of aap or writing in to the aap authors/forums.
A:
as is a reserved word in Python.
Seems aap-application was written for Python 2.5 and bellow:
Changed in version 2.5: Both as and with are only recognized when the with_statement future feature has been
enabled. It will always be enabled in
Python 2.6. See section The with
statement for details. Note that using
as and with as identifiers will always
issue a warning, even when the
with_statement future directive is not
in effect.
|
aap - python trouble
|
i'm trying to run aap-application. Version is 1.076 (tried higher). All commands send me an error like:
> Traceback (most recent call last):
> File "/usr/bin/aap", line 10, in
> <module>
> import Main File "/usr/share/aap/Main.py", line 14, in
> <module>
> from DoAddDef import doadddef File "/usr/share/aap/DoAddDef.py",
> line 10, in <module>
> from Action import find_primary_action File
> "/usr/share/aap/Action.py", line 30,
> in <module>
> from Dictlist import listitem2str, str2dictlist, dictlist2str File
> "/usr/share/aap/Dictlist.py", line 18,
> in <module>
> from Process import recipe_error File "/usr/share/aap/Process.py", line
> 13, in <module>
> from Work import setrpstack File "/usr/share/aap/Work.py", line 25, in
> <module>
> from Node import Node File "/usr/share/aap/Node.py", line 10, in
> <module>
> import Filetype File "/usr/share/aap/Filetype.py", line
> 1417
> as = 0
> ^ SyntaxError: invalid syntax
What problem could it be?
|
[
"Well, as is a reserved word in Python. So, that can't be used in FileType.py as a variable name.\nTry updating your installation of aap or writing in to the aap authors/forums.\n",
"as is a reserved word in Python.\nSeems aap-application was written for Python 2.5 and bellow:\n\nChanged in version 2.5: Both as and with are only recognized when the with_statement future feature has been\n enabled. It will always be enabled in\n Python 2.6. See section The with\n statement for details. Note that using\n as and with as identifiers will always\n issue a warning, even when the\n with_statement future directive is not\n in effect.\n\n"
] |
[
6,
3
] |
[] |
[] |
[
"linux",
"python"
] |
stackoverflow_0001282828_linux_python.txt
|
Q:
How are nested dictionaries handled by DictWriter?
Using the CSV module in python, I was experimenting with the DictWriter class to convert dictionaries to rows in a csv. Is there any way to handle nested dictionaries? Specifically, I'm exporting Disqus comments that have a structure like this:
{
u'status': u'approved',
u'forum': {u'id': u'', u'': u'', u'shortname': u'', u'name': u'', u'description': u''},
u'thread': {u'allow_comments': True, u'forum': u'', u'title': u'', u'url': u'', u'created_at': u'', u'id': u'', u'hidden': False, u'identifier': [], u'slug': u''},
u'is_anonymous': False,
u'author': {u'username': u'', u'email_hash': u'', u'display_name': u'', u'has_avatar': True, u'url': u'', u'id': 1, u'avatar': {u'small': u'', u'large': u'', u'medium': u''}, u'email': u''},
u'created_at': u'2009-08-12T10:14',
u'points': 0,
u'message': u"",
u'has_been_moderated': False,
u'ip_address': u'',
u'id': u'',
u'parent_post': None
}
I wanted to specify fields from the author and thread properties and haven't found a way so far. Here's the code:
f = open('export.csv', 'wb')
fieldnames = ('id','status','is_anonymous','created_at','ip_address','points','has_been_moderated','parent_post','thread')
try:
exportWriter = csv.DictWriter(f,
fieldnames,
restval=None,
extrasaction='ignore',
quoting=csv.QUOTE_NONNUMERIC
)
for c in comments:
exportWriter.writerow(c)
finally:
f.close()
A:
I think the main problem your going to have is how to represent a nested data structure in one flat row of csv data.
You could use some form of name mangeling to flatten the keys from the sub dict's into the top level dict.
eg thread': {u'allow_comments':
would become thread_allows_comments.
|
How are nested dictionaries handled by DictWriter?
|
Using the CSV module in python, I was experimenting with the DictWriter class to convert dictionaries to rows in a csv. Is there any way to handle nested dictionaries? Specifically, I'm exporting Disqus comments that have a structure like this:
{
u'status': u'approved',
u'forum': {u'id': u'', u'': u'', u'shortname': u'', u'name': u'', u'description': u''},
u'thread': {u'allow_comments': True, u'forum': u'', u'title': u'', u'url': u'', u'created_at': u'', u'id': u'', u'hidden': False, u'identifier': [], u'slug': u''},
u'is_anonymous': False,
u'author': {u'username': u'', u'email_hash': u'', u'display_name': u'', u'has_avatar': True, u'url': u'', u'id': 1, u'avatar': {u'small': u'', u'large': u'', u'medium': u''}, u'email': u''},
u'created_at': u'2009-08-12T10:14',
u'points': 0,
u'message': u"",
u'has_been_moderated': False,
u'ip_address': u'',
u'id': u'',
u'parent_post': None
}
I wanted to specify fields from the author and thread properties and haven't found a way so far. Here's the code:
f = open('export.csv', 'wb')
fieldnames = ('id','status','is_anonymous','created_at','ip_address','points','has_been_moderated','parent_post','thread')
try:
exportWriter = csv.DictWriter(f,
fieldnames,
restval=None,
extrasaction='ignore',
quoting=csv.QUOTE_NONNUMERIC
)
for c in comments:
exportWriter.writerow(c)
finally:
f.close()
|
[
"I think the main problem your going to have is how to represent a nested data structure in one flat row of csv data.\nYou could use some form of name mangeling to flatten the keys from the sub dict's into the top level dict.\neg thread': {u'allow_comments': \nwould become thread_allows_comments. \n"
] |
[
1
] |
[] |
[] |
[
"csv",
"python"
] |
stackoverflow_0001282920_csv_python.txt
|
Q:
Auto-tab between fields on Django admin site
I have an inline on a model with data with a fixed length, that has to be entered very fast, so I was thinking about a way of "tabbing" through fields automatically when the field is filled...
Could that be possible?
A:
Sure it's possible, but it will need some javascript. You'd want to bind an event to the keypress event on each field, and when it fires test the length of the text entered so far - if it matches, move the focus onto the next field.
A:
I can recommend the following links:
JQuery AutoTab
|
Auto-tab between fields on Django admin site
|
I have an inline on a model with data with a fixed length, that has to be entered very fast, so I was thinking about a way of "tabbing" through fields automatically when the field is filled...
Could that be possible?
|
[
"Sure it's possible, but it will need some javascript. You'd want to bind an event to the keypress event on each field, and when it fires test the length of the text entered so far - if it matches, move the focus onto the next field.\n",
"I can recommend the following links:\n\nJQuery AutoTab\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"django",
"django_admin",
"field",
"python"
] |
stackoverflow_0000881536_django_django_admin_field_python.txt
|
Q:
Using Python to read the screen and controlling keyboard/mouse on OSX
I'm looking for or trying to write a testing suite in Python which will control the mouse/keyboard and watch the screen for changes.
The obvious parts I need are (1) screen watcher, (2) keyboard/mouse control.
The latter is explained here, but what is the best way to go about doing the former on OSX?
A:
I can't think of a smart way to "watch the screen for changes" in any OS nor with any language. On MacOSX, you can take screenshots programmatically at any time, e.g. with code like the one Apple shows at this sample (translating the Objective C into Python + PyObjC if you want), or more simply by executing the external command screencapture -x -T 0 /tmp/zap.png (e.g. via subprocess) and examining the resulting PNG image -- but locating the differences between two successive screenshot is anything but trivial, and the whole approach is time consuming (there's no way that I know to receive notification of generic screen changes, so you need to keep repeating this periodically -- eek!-).
Depending on what exactly you're trying to accomplish, maybe you can get away with something simpler than completely unconstrained "watching screen changes"...?
|
Using Python to read the screen and controlling keyboard/mouse on OSX
|
I'm looking for or trying to write a testing suite in Python which will control the mouse/keyboard and watch the screen for changes.
The obvious parts I need are (1) screen watcher, (2) keyboard/mouse control.
The latter is explained here, but what is the best way to go about doing the former on OSX?
|
[
"I can't think of a smart way to \"watch the screen for changes\" in any OS nor with any language. On MacOSX, you can take screenshots programmatically at any time, e.g. with code like the one Apple shows at this sample (translating the Objective C into Python + PyObjC if you want), or more simply by executing the external command screencapture -x -T 0 /tmp/zap.png (e.g. via subprocess) and examining the resulting PNG image -- but locating the differences between two successive screenshot is anything but trivial, and the whole approach is time consuming (there's no way that I know to receive notification of generic screen changes, so you need to keep repeating this periodically -- eek!-).\nDepending on what exactly you're trying to accomplish, maybe you can get away with something simpler than completely unconstrained \"watching screen changes\"...?\n"
] |
[
2
] |
[] |
[] |
[
"macos",
"python",
"user_interface"
] |
stackoverflow_0001282860_macos_python_user_interface.txt
|
Q:
Recursive generation + filtering. Better non-recursive?
I have the following need (in python):
generate all possible tuples of length 12 (could be more) containing either 0, 1 or 2 (basically, a ternary number with 12 digits)
filter these tuples according to specific criteria, culling those not good, and keeping the ones I need.
As I had to deal with small lengths until now, the functional approach was neat and simple: a recursive function generates all possible tuples, then I cull them with a filter function. Now that I have a larger set, the generation step is taking too much time, much longer than needed as most of the paths in the solution tree will be culled later on, so I could skip their creation.
I have two solutions to solve this:
derecurse the generation into a loop, and apply the filter criteria on each new 12-digits entity
integrate the filtering in the recursive algorithm, so to prevent it stepping into paths that are already doomed.
My preference goes to 1 (seems easier) but I would like to hear your opinion, in particular with an eye towards how a functional programming style deals with such cases.
A:
How about
import itertools
results = []
for x in itertools.product(range(3), repeat=12):
if myfilter(x):
results.append(x)
where myfilter does the selection. Here, for example, only allowing result with 10 or more 1's,
def myfilter(x): # example filter, only take lists with 10 or more 1s
return x.count(1)>=10
That is, my suggestion is your option 1. For some cases it may be slower because (depending on your criteria) you many generate many lists that you don't need, but it's much more general and very easy to code.
Edit: This approach also has a one-liner form, as suggested in the comments by hughdbrown:
results = [x for x in itertools.product(range(3), repeat=12) if myfilter(x)]
A:
itertools has functionality for dealing with this. However, here is a (hardcoded) way of handling with a generator:
T = (0,1,2)
GEN = ((a,b,c,d,e,f,g,h,i,j,k,l) for a in T for b in T for c in T for d in T for e in T for f in T for g in T for h in T for i in T for j in T for k in T for l in T)
for VAL in GEN:
# Filter VAL
print VAL
A:
I'd implement an iterative binary adder or hamming code and run that way.
|
Recursive generation + filtering. Better non-recursive?
|
I have the following need (in python):
generate all possible tuples of length 12 (could be more) containing either 0, 1 or 2 (basically, a ternary number with 12 digits)
filter these tuples according to specific criteria, culling those not good, and keeping the ones I need.
As I had to deal with small lengths until now, the functional approach was neat and simple: a recursive function generates all possible tuples, then I cull them with a filter function. Now that I have a larger set, the generation step is taking too much time, much longer than needed as most of the paths in the solution tree will be culled later on, so I could skip their creation.
I have two solutions to solve this:
derecurse the generation into a loop, and apply the filter criteria on each new 12-digits entity
integrate the filtering in the recursive algorithm, so to prevent it stepping into paths that are already doomed.
My preference goes to 1 (seems easier) but I would like to hear your opinion, in particular with an eye towards how a functional programming style deals with such cases.
|
[
"How about\nimport itertools\n\nresults = []\nfor x in itertools.product(range(3), repeat=12):\n if myfilter(x):\n results.append(x)\n\nwhere myfilter does the selection. Here, for example, only allowing result with 10 or more 1's,\ndef myfilter(x): # example filter, only take lists with 10 or more 1s\n return x.count(1)>=10\n\nThat is, my suggestion is your option 1. For some cases it may be slower because (depending on your criteria) you many generate many lists that you don't need, but it's much more general and very easy to code.\nEdit: This approach also has a one-liner form, as suggested in the comments by hughdbrown:\nresults = [x for x in itertools.product(range(3), repeat=12) if myfilter(x)]\n\n",
"itertools has functionality for dealing with this. However, here is a (hardcoded) way of handling with a generator:\nT = (0,1,2)\n\nGEN = ((a,b,c,d,e,f,g,h,i,j,k,l) for a in T for b in T for c in T for d in T for e in T for f in T for g in T for h in T for i in T for j in T for k in T for l in T)\n\nfor VAL in GEN:\n # Filter VAL\n print VAL\n\n",
"I'd implement an iterative binary adder or hamming code and run that way.\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"functional_programming",
"python",
"recursion"
] |
stackoverflow_0001283266_functional_programming_python_recursion.txt
|
Q:
How do unit tests work in django-tagging, because I want mine to run like that?
Few times while browsing tests dir in various Django apps I stumbled across models.py and settings.py files (in django-tagging for example).
But there's no code to be found that syncs test models or applies custom test settings - but tests make use of them just as if django would auto-magically load them. However if I try to run django-tagging's tests: manage.py test tagging, it doesn't do even a single test.
This is exactly what I need right now to test my app, but don't really know how.
So, how does it work?
A:
If you want to run the tests in django-tagging, you can try:
django-admin.py test --settings=tagging.tests.settings
Basically, it uses doctests which are in the tests.py file inside the tests package/directory. The tests use the settings file in that same directory (and specified in the command line to django-admin). For more information see the django documentation on writing doctests.
A:
You mean, "How do I write unit tests in Django?" Check the documentation on testing.
When I've done it, I wrote unit tests in a test/ subdirectory. Make sure the directory has an empty __init__.py file. You may also need a models.py file. Add unit tests that derive from unittest.TestCase (in module unittest). Add the module 'xxxx.test' to your INSTALLED_APPS in settings.py (where 'xxxx' is the base name of your application).
Here's some sample code of mine to get you started:
#!/usr/bin/env python
# http://docs.djangoproject.com/en/dev/topics/testing/
from sys import stderr
import unittest
from django.test.client import Client
from expenses.etl.loader import load_all, load_init
class TestCase(unittest.TestCase):
def setUp(self):
print "setUp"
def testLoading(self):
print "Calling load_init()"
load_init()
print "Calling load_all()"
load_all()
print "Done"
if __name__ == '__main__':
unittest.main()
If you mean, "How do I get data loaded into my unit tests?", then use fixtures, described on the same documentation page.
|
How do unit tests work in django-tagging, because I want mine to run like that?
|
Few times while browsing tests dir in various Django apps I stumbled across models.py and settings.py files (in django-tagging for example).
But there's no code to be found that syncs test models or applies custom test settings - but tests make use of them just as if django would auto-magically load them. However if I try to run django-tagging's tests: manage.py test tagging, it doesn't do even a single test.
This is exactly what I need right now to test my app, but don't really know how.
So, how does it work?
|
[
"If you want to run the tests in django-tagging, you can try:\n\ndjango-admin.py test --settings=tagging.tests.settings\n\nBasically, it uses doctests which are in the tests.py file inside the tests package/directory. The tests use the settings file in that same directory (and specified in the command line to django-admin). For more information see the django documentation on writing doctests.\n",
"You mean, \"How do I write unit tests in Django?\" Check the documentation on testing.\nWhen I've done it, I wrote unit tests in a test/ subdirectory. Make sure the directory has an empty __init__.py file. You may also need a models.py file. Add unit tests that derive from unittest.TestCase (in module unittest). Add the module 'xxxx.test' to your INSTALLED_APPS in settings.py (where 'xxxx' is the base name of your application).\nHere's some sample code of mine to get you started:\n#!/usr/bin/env python\n# http://docs.djangoproject.com/en/dev/topics/testing/\n\nfrom sys import stderr\nimport unittest\nfrom django.test.client import Client\nfrom expenses.etl.loader import load_all, load_init\n\nclass TestCase(unittest.TestCase):\n def setUp(self):\n print \"setUp\"\n\n def testLoading(self):\n print \"Calling load_init()\"\n load_init()\n print \"Calling load_all()\"\n load_all()\n print \"Done\"\n\nif __name__ == '__main__':\n unittest.main()\n\nIf you mean, \"How do I get data loaded into my unit tests?\", then use fixtures, described on the same documentation page.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"python",
"unit_testing"
] |
stackoverflow_0001279032_django_python_unit_testing.txt
|
Q:
In GTK, how do I make a window unable to be closed?
For example, graying out the "X" on windows systems.
A:
If Gtk can't convince the window manager you can always connect the "delete-event" signal and return True from the callback. Doing this Gtk assumes that the callback handle that signal and does nothing.
import gtk
window = gtk.Window()
window.connect('delete-event',lambda widget, event: True)
A:
Just call the set_deletable with False on the window in question. It will work as long as GTK can convince the window manager to make the window unclosable.
|
In GTK, how do I make a window unable to be closed?
|
For example, graying out the "X" on windows systems.
|
[
"If Gtk can't convince the window manager you can always connect the \"delete-event\" signal and return True from the callback. Doing this Gtk assumes that the callback handle that signal and does nothing.\nimport gtk\n\nwindow = gtk.Window()\nwindow.connect('delete-event',lambda widget, event: True)\n\n",
"Just call the set_deletable with False on the window in question. It will work as long as GTK can convince the window manager to make the window unclosable. \n"
] |
[
5,
4
] |
[] |
[] |
[
"gtk",
"pygtk",
"python",
"windows"
] |
stackoverflow_0001235417_gtk_pygtk_python_windows.txt
|
Q:
Python Data Descriptor With Pass-through __set__ command
I'm having a bit of an issue solving a problem I'm looking at. I have a specialized set of functions which are going to be in use across a program, which are basically dynamic callables which can replace functions and methods. Due to the need to have them work properly to emulate the functionality of methods, these functions override __get__ to provide a wrapped version that gives access to the retrieving object.
Unfortunately, __get__ does not work if the function is set directly on an instance. This is because only "data descriptors" call the __get__ function when the key is found in the __dict__ of an instance. The only solution to this that comes to mind is: trick python into thinking this is a data descriptor. This involves creating a __set__ function on the descriptor. Ideally, I want this __set__ function to work as a pass-through (returns control to the caller and continues evaluating as if it doesn't exist).
Is there any way to trick python into thinking that a descriptor is a data descriptor but letting a containing class/instance still be able to use its setattr command as normal?
Also, I am aware that it is possible to do this with an override of __getattribute__ for the caller. However, this is a bad solution because I would have to do this for the 'object' built-in and anything that overrides it. Not exactly a great solution.
Alternatively, if there is any alternative solution I would be happy to hear it.
Here is a problem example:
class Descriptor(object):
def __get__(self, obj, objtype = None):
return None
class Caller(object):
a = Descriptor()
print a
>>> None
x = Caller()
print a
>>> None
x.a = Descriptor()
print x.a
>>> <__main__.Descriptor object at 0x011D7F10>
The last case should print 'None' to maintain consistency.
If you add a __set__ to the Descriptor, this will print 'None' (as desired). However, this messes up any command of x.a = (some value) from working as it had previously. Since I do not want to mess up this functionality, that is not helpful. Any solutions would be great.
Correction: My prior idea would still not work, as I misunderstood the descriptor handling slightly. Apparently if a descriptor is not on a class at all, it will never be called- regardless of the set. The condition I had only helps if there is a dict val and a class accessor of the same name. I am actually looking for a solution more along the lines of: http://blog.brianbeck.com/post/74086029/instance-descriptors but that does not involve having everything under the sun inherit a specialized interface.
Unfortunately, given this new understanding of the descriptor interface this may not be possible? Why oh why would python make decorators essentially non-dynamic?
A:
I think the cleanest solution is to leave __set__ alone, and set the descriptor on the class -- wrapping the original class if needed. I.e., instead of x.a = Descriptor(), do setdesc(x, 'a', Descriptor() where:
class Wrapper(object): pass
def setdesc(x, name, desc):
t = type(x)
if not issubclass(t, wrapper):
class awrap(Wrapper, t): pass
x.__class__ = awrap
setattr(x.__class__, name, desc)
This is the general approach I suggest when somebody wants to "set on an instance" anything (special method or descriptor) that needs to be set on the class to work, but doesn't want to affect the instance's original class.
Of course, it all works well only if you have new-style classes, but then descriptors don't really play well with old-style classes anyhow;-).
A:
I think I may have one answer to my question, though it's not all that pretty- it does sidestep the issue. My current plan of attack is to do what python does- bind the functions manually. I was already using my unbound function's get command to generate bound-type functions. One possible solution is to force anybody who wants to set a new function to manually bind it. It's annoying but it's not crazy. Python actually makes you do it (if you just set a function onto an instance as an attribute, it doesn't become bound).
It would still be nice to have this happen automatically, but it's not awful to force someone who is setting a new function to use x.a = Descriptor().get(x) which in this case will give the desired behavior (as well as for the example, for that matter). It's not a general solution but it will work for this limited problem, where method binding was being emulated basically. With that said, if anybody has a better solution I'd still be very happy to hear it.
|
Python Data Descriptor With Pass-through __set__ command
|
I'm having a bit of an issue solving a problem I'm looking at. I have a specialized set of functions which are going to be in use across a program, which are basically dynamic callables which can replace functions and methods. Due to the need to have them work properly to emulate the functionality of methods, these functions override __get__ to provide a wrapped version that gives access to the retrieving object.
Unfortunately, __get__ does not work if the function is set directly on an instance. This is because only "data descriptors" call the __get__ function when the key is found in the __dict__ of an instance. The only solution to this that comes to mind is: trick python into thinking this is a data descriptor. This involves creating a __set__ function on the descriptor. Ideally, I want this __set__ function to work as a pass-through (returns control to the caller and continues evaluating as if it doesn't exist).
Is there any way to trick python into thinking that a descriptor is a data descriptor but letting a containing class/instance still be able to use its setattr command as normal?
Also, I am aware that it is possible to do this with an override of __getattribute__ for the caller. However, this is a bad solution because I would have to do this for the 'object' built-in and anything that overrides it. Not exactly a great solution.
Alternatively, if there is any alternative solution I would be happy to hear it.
Here is a problem example:
class Descriptor(object):
def __get__(self, obj, objtype = None):
return None
class Caller(object):
a = Descriptor()
print a
>>> None
x = Caller()
print a
>>> None
x.a = Descriptor()
print x.a
>>> <__main__.Descriptor object at 0x011D7F10>
The last case should print 'None' to maintain consistency.
If you add a __set__ to the Descriptor, this will print 'None' (as desired). However, this messes up any command of x.a = (some value) from working as it had previously. Since I do not want to mess up this functionality, that is not helpful. Any solutions would be great.
Correction: My prior idea would still not work, as I misunderstood the descriptor handling slightly. Apparently if a descriptor is not on a class at all, it will never be called- regardless of the set. The condition I had only helps if there is a dict val and a class accessor of the same name. I am actually looking for a solution more along the lines of: http://blog.brianbeck.com/post/74086029/instance-descriptors but that does not involve having everything under the sun inherit a specialized interface.
Unfortunately, given this new understanding of the descriptor interface this may not be possible? Why oh why would python make decorators essentially non-dynamic?
|
[
"I think the cleanest solution is to leave __set__ alone, and set the descriptor on the class -- wrapping the original class if needed. I.e., instead of x.a = Descriptor(), do setdesc(x, 'a', Descriptor() where:\nclass Wrapper(object): pass\n\ndef setdesc(x, name, desc):\n t = type(x)\n if not issubclass(t, wrapper):\n class awrap(Wrapper, t): pass\n x.__class__ = awrap\n setattr(x.__class__, name, desc)\n\nThis is the general approach I suggest when somebody wants to \"set on an instance\" anything (special method or descriptor) that needs to be set on the class to work, but doesn't want to affect the instance's original class.\nOf course, it all works well only if you have new-style classes, but then descriptors don't really play well with old-style classes anyhow;-).\n",
"I think I may have one answer to my question, though it's not all that pretty- it does sidestep the issue. My current plan of attack is to do what python does- bind the functions manually. I was already using my unbound function's get command to generate bound-type functions. One possible solution is to force anybody who wants to set a new function to manually bind it. It's annoying but it's not crazy. Python actually makes you do it (if you just set a function onto an instance as an attribute, it doesn't become bound).\nIt would still be nice to have this happen automatically, but it's not awful to force someone who is setting a new function to use x.a = Descriptor().get(x) which in this case will give the desired behavior (as well as for the example, for that matter). It's not a general solution but it will work for this limited problem, where method binding was being emulated basically. With that said, if anybody has a better solution I'd still be very happy to hear it.\n"
] |
[
1,
0
] |
[] |
[] |
[
"descriptor",
"function",
"methods",
"python",
"set"
] |
stackoverflow_0001283435_descriptor_function_methods_python_set.txt
|
Q:
URLconfs in Django
I am going through the Django sample application and come across the URLConf.
I thought the import statement on the top resolves the url location, but for 'mysite.polls.urls' I couldn't remove the quotes by including in the import statement.
Why should I use quotes for 'mysite.polls.urls' and not for admin url? and what should I do if I have to remove the quotes.
from django.conf.urls.defaults import *
...
...
(r'^polls/', include('mysite.polls.urls')),
(r'^admin/', include(admin.site.urls)),
A:
You've elided a bunch of stuff, but do you have the following statement in there?
from django.contrib import admin
If so, that would explain why you don't need to quote the latter. See the django documentation for AdminSite.urls for more information.
If you want to remove the quotes from the former, then:
import mysite.poll.urls
...
(r'^polls/', include(mysite.poll.urls)),
...
should work.
|
URLconfs in Django
|
I am going through the Django sample application and come across the URLConf.
I thought the import statement on the top resolves the url location, but for 'mysite.polls.urls' I couldn't remove the quotes by including in the import statement.
Why should I use quotes for 'mysite.polls.urls' and not for admin url? and what should I do if I have to remove the quotes.
from django.conf.urls.defaults import *
...
...
(r'^polls/', include('mysite.polls.urls')),
(r'^admin/', include(admin.site.urls)),
|
[
"You've elided a bunch of stuff, but do you have the following statement in there?\nfrom django.contrib import admin\n\nIf so, that would explain why you don't need to quote the latter. See the django documentation for AdminSite.urls for more information.\nIf you want to remove the quotes from the former, then:\nimport mysite.poll.urls\n...\n(r'^polls/', include(mysite.poll.urls)),\n...\n\nshould work.\n"
] |
[
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001283811_django_python.txt
|
Q:
How does mercurial work without Python installed?
I have Mercurial 1.3 installed on my Windows 7 machine. I don't have python installed, but Mercurial seems to be OK with that.
How does it work?
Also, is it possible to force Mercurial run on IronPython and will it be compatible?
Thank you.
A:
The Mercurial windows installer is packaged using py2exe. This places the python interpreter as a DLL inside of a file called "library.zip".
On my machine, it is placed in "C:\Program Files\TortoiseHg\library.zip"
This zip file also contains the python libraries that are required by mercurial.
For a detailed description of how mercurial is packaged for windows, see the developer page describing building windows installer.
A:
Since there is a "library.zip"(9MB), Mercurial's Windows binary package maybe made by py2exe, py2exe is a Python Distutils extension which converts Python scripts into executable Windows programs, able to run without requiring a Python installation.
A:
Others have answered the first question -- let me give a guess about the second part.
Mercurial will normally use some C extensions for speed. You cannot use those with IronPython.
But we also ship pure Python versions of these modules, and depending on how much IronPython implements of a standard Python 2.4 environment, those modules could be compatible. I have seen reports on IRC about Jython (the Java port of Python) being able to do a few operations using the pure modules. You should download Mercurial and take a look at the mercurial/pure folder. These modules simply has to be moved up one directory level to be found, the setup.py script can do this if you pass the --pure flag. Please see its source or come talk with us on the Mercurial mailinglist/IRC.
A:
Mercurial bundles the necessary python binaries within it, I believe.
|
How does mercurial work without Python installed?
|
I have Mercurial 1.3 installed on my Windows 7 machine. I don't have python installed, but Mercurial seems to be OK with that.
How does it work?
Also, is it possible to force Mercurial run on IronPython and will it be compatible?
Thank you.
|
[
"The Mercurial windows installer is packaged using py2exe. This places the python interpreter as a DLL inside of a file called \"library.zip\". \nOn my machine, it is placed in \"C:\\Program Files\\TortoiseHg\\library.zip\"\nThis zip file also contains the python libraries that are required by mercurial. \nFor a detailed description of how mercurial is packaged for windows, see the developer page describing building windows installer.\n",
"Since there is a \"library.zip\"(9MB), Mercurial's Windows binary package maybe made by py2exe, py2exe is a Python Distutils extension which converts Python scripts into executable Windows programs, able to run without requiring a Python installation. \n",
"Others have answered the first question -- let me give a guess about the second part. \nMercurial will normally use some C extensions for speed. You cannot use those with IronPython.\nBut we also ship pure Python versions of these modules, and depending on how much IronPython implements of a standard Python 2.4 environment, those modules could be compatible. I have seen reports on IRC about Jython (the Java port of Python) being able to do a few operations using the pure modules. You should download Mercurial and take a look at the mercurial/pure folder. These modules simply has to be moved up one directory level to be found, the setup.py script can do this if you pass the --pure flag. Please see its source or come talk with us on the Mercurial mailinglist/IRC.\n",
"Mercurial bundles the necessary python binaries within it, I believe.\n"
] |
[
17,
7,
6,
3
] |
[] |
[] |
[
"ironpython",
"mercurial",
"python"
] |
stackoverflow_0001231853_ironpython_mercurial_python.txt
|
Q:
Django - SQL Query - Timestamp
Can anyone turn me to a tutorial, code or some kind of resource that will help me out with the following problem.
I have a table in a mySQL database. It contains an ID, Timestamp, another ID and a value. I'm passing it the 'main' ID which can uniquely identify a piece of data. However, I want to do a time search on this piece of data(therefore using the timestamp field). Therefore what would be ideal is to say: between the hours of 12 and 1, show me all the values logged for ID = 1987.
How would I go about querying this in Django? I know in mySQL it'd be something like less than/greater than etc... but how would I go about doing this in Django? i've been using Object.Filter for most of database handling so far. Finally, I'd like to stress that I'm new to Django and I'm genuinely stumped!
A:
If the table in question maps to a Django model MyModel, e.g.
class MyModel(models.Model):
...
primaryid = ...
timestamp = ...
secondaryid = ...
valuefield = ...
then you can use
MyModel.objects.filter(
primaryid=1987
).exclude(
timestamp__lt=<min_timestamp>
).exclude(
timestamp__gt=<max_timestamp>
).values_list('valuefield', flat=True)
This selects entries with the primaryid 1987, with timestamp values between <min_timestamp> and <max_timestamp>, and returns the corresponding values in a list.
Update: Corrected bug in query (filter -> exclude).
A:
I don't think Vinay Sajip's answer is correct. The closest correct variant based on his code is:
MyModel.objects.filter(
primaryid=1987
).exclude(
timestamp__lt=min_timestamp
).exclude(
timestamp__gt=max_timestamp
).values_list('valuefield', flat=True)
That's "exclude the ones less than the minimum timestamp and exclude the ones greater than the maximum timestamp." Alternatively, you can do this:
MyModel.objects.filter(
primaryid=1987
).filter(
timestamp__gte=min_timestamp
).exclude(
timestamp__gte=max_timestamp
).values_list('valuefield', flat=True)
exclude() and filter() are opposites: exclude() omits the identified rows and filter() includes them. You can use a combination of them to include/exclude whichever you prefer. In your case, you want to exclude() those below your minimum time stamp and to exclude() those above your maximum time stamp.
Here is the documentation on chaining QuerySet filters.
|
Django - SQL Query - Timestamp
|
Can anyone turn me to a tutorial, code or some kind of resource that will help me out with the following problem.
I have a table in a mySQL database. It contains an ID, Timestamp, another ID and a value. I'm passing it the 'main' ID which can uniquely identify a piece of data. However, I want to do a time search on this piece of data(therefore using the timestamp field). Therefore what would be ideal is to say: between the hours of 12 and 1, show me all the values logged for ID = 1987.
How would I go about querying this in Django? I know in mySQL it'd be something like less than/greater than etc... but how would I go about doing this in Django? i've been using Object.Filter for most of database handling so far. Finally, I'd like to stress that I'm new to Django and I'm genuinely stumped!
|
[
"If the table in question maps to a Django model MyModel, e.g.\nclass MyModel(models.Model):\n ...\n primaryid = ...\n timestamp = ...\n secondaryid = ...\n valuefield = ...\n\nthen you can use\nMyModel.objects.filter(\n primaryid=1987\n ).exclude(\n timestamp__lt=<min_timestamp>\n ).exclude(\n timestamp__gt=<max_timestamp>\n ).values_list('valuefield', flat=True)\n\nThis selects entries with the primaryid 1987, with timestamp values between <min_timestamp> and <max_timestamp>, and returns the corresponding values in a list.\nUpdate: Corrected bug in query (filter -> exclude).\n",
"I don't think Vinay Sajip's answer is correct. The closest correct variant based on his code is:\nMyModel.objects.filter(\n primaryid=1987\n ).exclude(\n timestamp__lt=min_timestamp\n ).exclude(\n timestamp__gt=max_timestamp\n ).values_list('valuefield', flat=True)\n\nThat's \"exclude the ones less than the minimum timestamp and exclude the ones greater than the maximum timestamp.\" Alternatively, you can do this:\nMyModel.objects.filter(\n primaryid=1987\n ).filter(\n timestamp__gte=min_timestamp\n ).exclude(\n timestamp__gte=max_timestamp\n ).values_list('valuefield', flat=True)\n\nexclude() and filter() are opposites: exclude() omits the identified rows and filter() includes them. You can use a combination of them to include/exclude whichever you prefer. In your case, you want to exclude() those below your minimum time stamp and to exclude() those above your maximum time stamp.\nHere is the documentation on chaining QuerySet filters.\n"
] |
[
3,
2
] |
[] |
[] |
[
"django",
"mysql",
"python"
] |
stackoverflow_0001279490_django_mysql_python.txt
|
Q:
Does django's Form class maintain state?
I'm building my first form with django, and I'm seeing some behavior that I really did not expect at all. I defined a form class:
class AssignmentFilterForm(forms.Form):
filters = []
filter = forms.ChoiceField()
def __init__(self, *args, **kwargs):
super(forms.Form, self).__init__(*args, **kwargs)
self.filters.append(PatientFilter('All'))
self.filters.append(PatientFilter('Assigned', 'service__isnull', False))
self.filters.append(PatientFilter('Unassigned', 'service__isnull', True))
for i, f in enumerate(self.filters):
self.fields["filter"].choices.append((i, f.name))
When I output this form to a template using:
{{ form.as_p }}
I see the correct choices. However, after refreshing the page, I see the list three times in the select box. Hitting refresh again results in the list showing 10 times in the select box!
Here is my view:
@login_required
def assign_test(request):
pg = PhysicianGroup.objects.get(pk=physician_group)
if request.method == 'POST':
form = AssignmentFilterForm(request.POST)
if form.is_valid():
yes = False
else:
form = AssignmentFilterForm()
patients = pg.allPatients().order_by('bed__room__unit', 'bed__room__order', 'bed__order' )
return render_to_response('hospitalists/assign_test.html', RequestContext(request, {'patients': patients, 'form': form,}))
What am I doing wrong?
Thanks, Pete
A:
This is actually a feature of Python that catches a lot of people.
When you define variables on the class as you have with filters = [] the right half of the expression is evaluated when the class is initially defined. So when your code is first run it will create a new list in memory and return a reference to this list. As a result, each AssignmentFilterForm instance will have its own filters variable, but they will all point to this same list in memory. To solve this just move the initialization of self.filters into your __init__ method.
Most of the time you don't run into this issue because the types you are using aren't stored as a reference. Numbers, booleans, etc are stored as their value. Strings are stored by reference, but strings are immutable meaning a new string must be created in memory every time it is changed and a new reference returned.
Pointers don't present themselves often in scripting language, so it's often confusing at first when they do.
Here's a simple IDLE session example to show what's happening
>>> class Test():
myList = []
def __init__( self ):
self.myList.append( "a" )
>>> Test.myList
[]
>>> test1 = Test()
>>> Test.myList
['a']
>>> test1.myList
['a']
>>> test2 = Test()
>>> test2.myList
['a', 'a']
>>> test1.myList
['a', 'a']
>>> Test.myList
['a', 'a']
A:
I picked up the book Pro Django which answers this question. It's a great book by the way, and I highly recommend it!
The solution is to make BOTH the choice field and my helper var both instance variables:
class AssignmentFilterForm(forms.Form):
def __init__(self, pg, request = None):
super(forms.Form, self).__init__(request)
self.filters = []
self.filters.append(PatientFilter('All'))
self.filters.append(PatientFilter('Assigned', 'service__isnull', False))
self.filters.append(PatientFilter('Unassigned', 'service__isnull', True))
self.addPhysicians(pg)
self.fields['filter'] = forms.ChoiceField()
for i, f in enumerate(self.filters):
self.fields['filter'].choices.append((i, f.name))
Clearing out the choices works but would surely result in threading issues.
A:
You're appending to the PER-CLASS variable self.filters. Make it into a PER-INSTANCE variable instead, by doing self.filters = [] at the start of __init__.
A:
To clarify from some of the other answers:
The fields are, and must be, class variables. They get all sorts of things done to them by the metaclass, and this is the correct way to define them.
However, your filters variable does not need to be a class var. It can quite easily be an instance var - just remove the definition from the class and put it in __init__. Or, perhaps even better, don't make it a property at all - just a local var within __init__. Then, instead of appending to filters.choices, just reassign it.
def __init__(self, *args, **kwargs):
super(forms.Form, self).__init__(*args, **kwargs)
filters = []
filters.append(PatientFilter('All'))
filters.append(PatientFilter('Assigned', 'service__isnull', False))
filters.append(PatientFilter('Unassigned', 'service__isnull', True))
self.fields["filter"].choices = [(i, f.name) for i, f in enumerate(filters)]
A:
As answered above, you need to initialize filters as an instance variable:
def __init__(...):
self.filters = []
self.filters.append(...)
# ...
If you want to know more about how the Form class works, you should read this page in the Django wiki:
Model Creation and Initialization
It talks about the internals of the Model class, but you'll find the general setup of fields is somewhat similar to the Form (minus the database stuff). It's a bit dated (2006), but I think the basic principles still apply. The metaclass stuff can be a bit confusing if you're new though.
|
Does django's Form class maintain state?
|
I'm building my first form with django, and I'm seeing some behavior that I really did not expect at all. I defined a form class:
class AssignmentFilterForm(forms.Form):
filters = []
filter = forms.ChoiceField()
def __init__(self, *args, **kwargs):
super(forms.Form, self).__init__(*args, **kwargs)
self.filters.append(PatientFilter('All'))
self.filters.append(PatientFilter('Assigned', 'service__isnull', False))
self.filters.append(PatientFilter('Unassigned', 'service__isnull', True))
for i, f in enumerate(self.filters):
self.fields["filter"].choices.append((i, f.name))
When I output this form to a template using:
{{ form.as_p }}
I see the correct choices. However, after refreshing the page, I see the list three times in the select box. Hitting refresh again results in the list showing 10 times in the select box!
Here is my view:
@login_required
def assign_test(request):
pg = PhysicianGroup.objects.get(pk=physician_group)
if request.method == 'POST':
form = AssignmentFilterForm(request.POST)
if form.is_valid():
yes = False
else:
form = AssignmentFilterForm()
patients = pg.allPatients().order_by('bed__room__unit', 'bed__room__order', 'bed__order' )
return render_to_response('hospitalists/assign_test.html', RequestContext(request, {'patients': patients, 'form': form,}))
What am I doing wrong?
Thanks, Pete
|
[
"This is actually a feature of Python that catches a lot of people.\nWhen you define variables on the class as you have with filters = [] the right half of the expression is evaluated when the class is initially defined. So when your code is first run it will create a new list in memory and return a reference to this list. As a result, each AssignmentFilterForm instance will have its own filters variable, but they will all point to this same list in memory. To solve this just move the initialization of self.filters into your __init__ method.\nMost of the time you don't run into this issue because the types you are using aren't stored as a reference. Numbers, booleans, etc are stored as their value. Strings are stored by reference, but strings are immutable meaning a new string must be created in memory every time it is changed and a new reference returned.\nPointers don't present themselves often in scripting language, so it's often confusing at first when they do.\nHere's a simple IDLE session example to show what's happening\n>>> class Test():\n myList = []\n def __init__( self ):\n self.myList.append( \"a\" )\n\n\n>>> Test.myList\n[]\n>>> test1 = Test()\n>>> Test.myList\n['a']\n>>> test1.myList\n['a']\n>>> test2 = Test()\n>>> test2.myList\n['a', 'a']\n>>> test1.myList\n['a', 'a']\n>>> Test.myList\n['a', 'a']\n\n",
"I picked up the book Pro Django which answers this question. It's a great book by the way, and I highly recommend it!\nThe solution is to make BOTH the choice field and my helper var both instance variables:\nclass AssignmentFilterForm(forms.Form):\ndef __init__(self, pg, request = None):\n super(forms.Form, self).__init__(request)\n self.filters = []\n\n self.filters.append(PatientFilter('All'))\n self.filters.append(PatientFilter('Assigned', 'service__isnull', False))\n self.filters.append(PatientFilter('Unassigned', 'service__isnull', True))\n self.addPhysicians(pg)\n\n self.fields['filter'] = forms.ChoiceField()\n for i, f in enumerate(self.filters):\n self.fields['filter'].choices.append((i, f.name))\n\nClearing out the choices works but would surely result in threading issues. \n",
"You're appending to the PER-CLASS variable self.filters. Make it into a PER-INSTANCE variable instead, by doing self.filters = [] at the start of __init__.\n",
"To clarify from some of the other answers:\nThe fields are, and must be, class variables. They get all sorts of things done to them by the metaclass, and this is the correct way to define them.\nHowever, your filters variable does not need to be a class var. It can quite easily be an instance var - just remove the definition from the class and put it in __init__. Or, perhaps even better, don't make it a property at all - just a local var within __init__. Then, instead of appending to filters.choices, just reassign it.\ndef __init__(self, *args, **kwargs):\n super(forms.Form, self).__init__(*args, **kwargs)\n filters = []\n filters.append(PatientFilter('All'))\n filters.append(PatientFilter('Assigned', 'service__isnull', False))\n filters.append(PatientFilter('Unassigned', 'service__isnull', True))\n\n self.fields[\"filter\"].choices = [(i, f.name) for i, f in enumerate(filters)] \n\n",
"As answered above, you need to initialize filters as an instance variable:\ndef __init__(...):\n self.filters = []\n self.filters.append(...)\n # ...\n\nIf you want to know more about how the Form class works, you should read this page in the Django wiki:\n\nModel Creation and Initialization\n\nIt talks about the internals of the Model class, but you'll find the general setup of fields is somewhat similar to the Form (minus the database stuff). It's a bit dated (2006), but I think the basic principles still apply. The metaclass stuff can be a bit confusing if you're new though.\n"
] |
[
7,
2,
1,
1,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001275009_django_python.txt
|
Q:
Stackless python stopped mod_python/apache from working
I installed stackless pyton 2.6.2 after reading several sites that said its fully compatible with vanilla python. After installing i found that my django applications do not work any more.
I did reinstall django (1.1) again and now im kind of lost. The error that i get is 500:
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Apache/2.2.11 (Ubuntu) DAV/2 PHP/5.2.6-3ubuntu4.1 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2 mod_ruby/1.2.6 Ruby/1.8.7(2008-08-11) mod_ssl/2.2.11 OpenSSL/0.9.8g Server at 127.0.0.1 Port 80
What else, could or should i do?
Edit: From 1st comment i understand that the problem is not in django but mod_python & apache? so i edited my question title.
Edit2: I think something is wrong with some paths setup. I tried going from mod_python to mod_wsgi, managed to finally set it up correctly only to get next error:
[Sun Aug 16 12:38:22 2009] [error] [client 127.0.0.1] raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
[Sun Aug 16 12:38:22 2009] [error] [client 127.0.0.1] ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
Alan
A:
When you install a new version of Python (whether stackless or not) you also need to reinstall all of the third party modules you need -- either from sources, which you say you don't want to do, or from packages built for the new version of Python you've just installed.
So, check the repository from which you installed Python 2.6.2 with aptitude: does it also have versions for that specific Python of mod_python, mysqldb, django, and any other third party stuff you may need? There really is no "silver bullet" for package management and I know of no "sumo distribution" of Python bundling all the packages you could ever possibly need (if there were, it would have to be many 10s of GB;-).
|
Stackless python stopped mod_python/apache from working
|
I installed stackless pyton 2.6.2 after reading several sites that said its fully compatible with vanilla python. After installing i found that my django applications do not work any more.
I did reinstall django (1.1) again and now im kind of lost. The error that i get is 500:
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Apache/2.2.11 (Ubuntu) DAV/2 PHP/5.2.6-3ubuntu4.1 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2 mod_ruby/1.2.6 Ruby/1.8.7(2008-08-11) mod_ssl/2.2.11 OpenSSL/0.9.8g Server at 127.0.0.1 Port 80
What else, could or should i do?
Edit: From 1st comment i understand that the problem is not in django but mod_python & apache? so i edited my question title.
Edit2: I think something is wrong with some paths setup. I tried going from mod_python to mod_wsgi, managed to finally set it up correctly only to get next error:
[Sun Aug 16 12:38:22 2009] [error] [client 127.0.0.1] raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
[Sun Aug 16 12:38:22 2009] [error] [client 127.0.0.1] ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
Alan
|
[
"When you install a new version of Python (whether stackless or not) you also need to reinstall all of the third party modules you need -- either from sources, which you say you don't want to do, or from packages built for the new version of Python you've just installed. \nSo, check the repository from which you installed Python 2.6.2 with aptitude: does it also have versions for that specific Python of mod_python, mysqldb, django, and any other third party stuff you may need? There really is no \"silver bullet\" for package management and I know of no \"sumo distribution\" of Python bundling all the packages you could ever possibly need (if there were, it would have to be many 10s of GB;-).\n"
] |
[
2
] |
[] |
[] |
[
"mod_python",
"mod_wsgi",
"python",
"python_stackless",
"stackless"
] |
stackoverflow_0001283856_mod_python_mod_wsgi_python_python_stackless_stackless.txt
|
Q:
Does one often use libraries outside the standard ones?
I am trying to learn Python and referencing the documentation for the standard Python library from the Python website, and I was wondering if this was really the only library and documentation I will need or is there more? I do not plan to program advanced 3d graphics or anything advanced at the moment.
Edit:
Thanks very much for the responses, they were very useful. My problem is where to start on a script I have been thinking of. I want to write a script that converts images into a web format but I am not completely sure where to begin. Thanks for any more help you can provide.
A:
For the basics, yes, the standard Python library is probably all you'll need. But as you continue programming in Python, eventually you will need some other library for some task -- for instance, I recently needed to generate a tone at a specific, but differing, frequency for an application, and pyAudiere did the job just right.
A lot of the other libraries out there generate their documentation differently from the core Python style -- it's just visually different, the content is the same. Some only have docstrings, and you'll be best off reading them in a console, perhaps.
Regardless of how the other documentation is generated, get used to looking through the Python APIs to find the functions/classes/methods you need. When the time comes for you to use non-core libraries, you'll know what you want to do, but you'll have to find how to do it.
For the future, it wouldn't hurt to be familiar with C, either. There's a number of Python libraries that are actually just wrappers around C libraries, and the documentation for the Python libraries is just the same as the documentation for the C libraries. PyOpenGL comes to mind, but it's been a while since I've personally used it.
A:
As others have said, it depends on what you're into. The package index at http://pypi.python.org/pypi/ has categories and summaries that are helpful in seeing what other libraries are available for different purposes. (Select "Browse packages" on the left to see the categories.)
A:
One very common library, that should also fit your current needs, is the Python Image Library (PIL).
Note: the latest version is still in beta, and available only at Effbot site.
A:
If you're just beginning, all you'll need to know is the stuff you can get from the Python website. Failing that a quick Google is the fastest way to get (most) Python answers these days.
As you develop your skills and become more advanced, you'll start looking for more exciting things to do, at which point you'll naturally start coming across other libraries (for example, pygame) that you can use for your more advanced projects.
A:
It's very hard to answer this without knowing what you're planning on using Python for. I recommend Dive Into Python as a useful resource for learning Python.
In terms of popular third party frameworks, for web applications there's the Django framework and associated documentation, network stuff there's Twisted ... the list goes on. It really depends on what you're hoping to do!
A:
Assuming that the standard library doesn't provide what we need and we don't have the time, or the knowledge, to implement the code we reuse 3rd party libraries.
This is a common attitude regardless of the programming language.
A:
If there's a chance that someone else ever wanted to do what you want to do, there's a chance that someone created a library for it. A few minutes Googling something like "python image library" will find you what you need, or let you know that someone hasn't created a library for your purposes.
|
Does one often use libraries outside the standard ones?
|
I am trying to learn Python and referencing the documentation for the standard Python library from the Python website, and I was wondering if this was really the only library and documentation I will need or is there more? I do not plan to program advanced 3d graphics or anything advanced at the moment.
Edit:
Thanks very much for the responses, they were very useful. My problem is where to start on a script I have been thinking of. I want to write a script that converts images into a web format but I am not completely sure where to begin. Thanks for any more help you can provide.
|
[
"For the basics, yes, the standard Python library is probably all you'll need. But as you continue programming in Python, eventually you will need some other library for some task -- for instance, I recently needed to generate a tone at a specific, but differing, frequency for an application, and pyAudiere did the job just right.\nA lot of the other libraries out there generate their documentation differently from the core Python style -- it's just visually different, the content is the same. Some only have docstrings, and you'll be best off reading them in a console, perhaps.\nRegardless of how the other documentation is generated, get used to looking through the Python APIs to find the functions/classes/methods you need. When the time comes for you to use non-core libraries, you'll know what you want to do, but you'll have to find how to do it.\nFor the future, it wouldn't hurt to be familiar with C, either. There's a number of Python libraries that are actually just wrappers around C libraries, and the documentation for the Python libraries is just the same as the documentation for the C libraries. PyOpenGL comes to mind, but it's been a while since I've personally used it.\n",
"As others have said, it depends on what you're into. The package index at http://pypi.python.org/pypi/ has categories and summaries that are helpful in seeing what other libraries are available for different purposes. (Select \"Browse packages\" on the left to see the categories.)\n",
"One very common library, that should also fit your current needs, is the Python Image Library (PIL).\nNote: the latest version is still in beta, and available only at Effbot site.\n",
"If you're just beginning, all you'll need to know is the stuff you can get from the Python website. Failing that a quick Google is the fastest way to get (most) Python answers these days.\nAs you develop your skills and become more advanced, you'll start looking for more exciting things to do, at which point you'll naturally start coming across other libraries (for example, pygame) that you can use for your more advanced projects.\n",
"It's very hard to answer this without knowing what you're planning on using Python for. I recommend Dive Into Python as a useful resource for learning Python.\nIn terms of popular third party frameworks, for web applications there's the Django framework and associated documentation, network stuff there's Twisted ... the list goes on. It really depends on what you're hoping to do!\n",
"Assuming that the standard library doesn't provide what we need and we don't have the time, or the knowledge, to implement the code we reuse 3rd party libraries.\n\nThis is a common attitude regardless of the programming language.\n",
"If there's a chance that someone else ever wanted to do what you want to do, there's a chance that someone created a library for it. A few minutes Googling something like \"python image library\" will find you what you need, or let you know that someone hasn't created a library for your purposes.\n"
] |
[
2,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"libraries",
"python"
] |
stackoverflow_0001283922_libraries_python.txt
|
Q:
django auto entry generation
I am trying to make an automated database entry generation with Django, whenever I trigger it to happen.
For instance, assume I have a such model:
class status_entry(models.Model):
name = models.TextField()
date = models.DateField()
status = models.BooleanField()
and I have several entries to the model such as:
1 - "bla bla" - 2009/11/6 - true
2 - "bla bla" - 2009/11/7 - true
3 - "bla bla" - 2009/11/10 - true
so as you can see between my 2nd and 3rd entry, I have 2 absent entry days (2009/11/8 and 2009/11/9), via creating some view or script I want to
auto fill these absent day entries such as :
id name date status
------------------------------------
1 - "bla bla" - 2009/11/6 - true
2 - "bla bla" - 2009/11/7 - true
3 - "bla bla" - 2009/11/8 - false
4 - "bla bla" - 2009/11/9 - false
5 - "bla bla" - 2009/11/10 - true
Thanks
A:
You can overwrite save and do the autofill there (daterange function taken from here):
from datetime import timedelta
def daterange(start_date, end_date):
for n in range((end_date - start_date).days):
yield start_date + timedelta(n)
class StatusEntry(models.Model):
name = models.TextField()
date = models.DateField()
status = models.BooleanField()
def __unicode__(self):
return "%s - %s - %s" % (self.name, unicode(self.status), unicode(self.date))
def save(self, fill=True):
if fill and not self.id: # autofill on insert, not on update
newest = StatusEntry.objects.all().order_by("-date")[:1]
if newest and newest[0].date < self.date:
for date in daterange(newest[0].date + timedelta(1), self.date):
entry = StatusEntry(name=self.name, date=date, status=False)
entry.save(fill=False)
super(StatusEntry, self).save()
You could also use signals or do it with triggers, like hughdbrown suggested
|
django auto entry generation
|
I am trying to make an automated database entry generation with Django, whenever I trigger it to happen.
For instance, assume I have a such model:
class status_entry(models.Model):
name = models.TextField()
date = models.DateField()
status = models.BooleanField()
and I have several entries to the model such as:
1 - "bla bla" - 2009/11/6 - true
2 - "bla bla" - 2009/11/7 - true
3 - "bla bla" - 2009/11/10 - true
so as you can see between my 2nd and 3rd entry, I have 2 absent entry days (2009/11/8 and 2009/11/9), via creating some view or script I want to
auto fill these absent day entries such as :
id name date status
------------------------------------
1 - "bla bla" - 2009/11/6 - true
2 - "bla bla" - 2009/11/7 - true
3 - "bla bla" - 2009/11/8 - false
4 - "bla bla" - 2009/11/9 - false
5 - "bla bla" - 2009/11/10 - true
Thanks
|
[
"You can overwrite save and do the autofill there (daterange function taken from here):\nfrom datetime import timedelta\n\ndef daterange(start_date, end_date):\n for n in range((end_date - start_date).days):\n yield start_date + timedelta(n)\n\n\nclass StatusEntry(models.Model):\n name = models.TextField()\n date = models.DateField()\n status = models.BooleanField()\n\n def __unicode__(self):\n return \"%s - %s - %s\" % (self.name, unicode(self.status), unicode(self.date))\n\n def save(self, fill=True):\n if fill and not self.id: # autofill on insert, not on update\n newest = StatusEntry.objects.all().order_by(\"-date\")[:1]\n if newest and newest[0].date < self.date:\n for date in daterange(newest[0].date + timedelta(1), self.date):\n entry = StatusEntry(name=self.name, date=date, status=False)\n entry.save(fill=False)\n super(StatusEntry, self).save()\n\nYou could also use signals or do it with triggers, like hughdbrown suggested\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_models",
"python",
"scripting"
] |
stackoverflow_0001284814_django_django_models_python_scripting.txt
|
Q:
Programmatically change font color of text in PDF
I'm not familiar with the PDF specification at all. I was wondering if it's possible to directly manipulate a PDF file so that certain blocks of text that I've identified as important are highlighted in colors of my choice. Language of choice would be python.
A:
It's possible, but not necessarily easy, because the PDF format is so rich. You can find a document describing it in detail here. The first elementary example it gives about how PDFs display text is:
BT
/F13 12 Tf
288 720 Td
(ABC) Tj
ET
BT and ET are commands to begin and end a text object; Tf is a command to use external font resource F13 (which happens to be Helvetica) at size 12; Td is a command to position the cursor at the given coordinates; Tj is a command to write the glyphs for the previous string. The flavor is somewhat "reverse-polish notation"-oid, and indeed quite close to the flavor of Postscript, one of Adobe's other great contributions to typesetting.
The problem is, there is nothing in the PDF specs that says that text that "looks" like it belongs together on the page as displayed must actually "be" together; since precise coordinates can always be given, if the PDF is generated by a sophisticated typography layout system, it might position text precisely, character by character, by coordinates. Reconstructing text in form of words and sentences is therefore not necessarily easy -- it's almost as hard as optical text recognition, except that you are given the characters precisely (well -- almost... some alleged "images" might actually display as characters...;-).
pyPdf is a very simple pure-Python library that's a good starting point for playing around with PDF files. Its "text extraction" function is quite elementary and does nothing but concatenate the arguments of a few text-drawing commands; you'll see that suffices on some docs, and is quite unusable on others, but at least it's a start. As distributed, pyPdf does just about nothing with colors, but with some hacking that could be remedied.
reportlab's powerful Python library is entirely focused on generating new PDFs, not on interpreting or modifying existing ones. At the other extreme, pure Python library pdfminer entirely focusing on parsing PDF files; it does do some clustering to try and reconstruct text in cases in which simpler libraries would be stumped.
I don't know of an existing library that performs the transformational tasks you desire, but it should be feasible to mix and match some of these existing ones to get most of it done... good luck!
A:
Highlight is possible in pdf file using PDF annotations but doing it natively is not that easy job. If any of the mentioned library provide such facility is something that you may look for.
|
Programmatically change font color of text in PDF
|
I'm not familiar with the PDF specification at all. I was wondering if it's possible to directly manipulate a PDF file so that certain blocks of text that I've identified as important are highlighted in colors of my choice. Language of choice would be python.
|
[
"It's possible, but not necessarily easy, because the PDF format is so rich. You can find a document describing it in detail here. The first elementary example it gives about how PDFs display text is:\nBT\n/F13 12 Tf\n288 720 Td\n(ABC) Tj\nET\n\nBT and ET are commands to begin and end a text object; Tf is a command to use external font resource F13 (which happens to be Helvetica) at size 12; Td is a command to position the cursor at the given coordinates; Tj is a command to write the glyphs for the previous string. The flavor is somewhat \"reverse-polish notation\"-oid, and indeed quite close to the flavor of Postscript, one of Adobe's other great contributions to typesetting.\nThe problem is, there is nothing in the PDF specs that says that text that \"looks\" like it belongs together on the page as displayed must actually \"be\" together; since precise coordinates can always be given, if the PDF is generated by a sophisticated typography layout system, it might position text precisely, character by character, by coordinates. Reconstructing text in form of words and sentences is therefore not necessarily easy -- it's almost as hard as optical text recognition, except that you are given the characters precisely (well -- almost... some alleged \"images\" might actually display as characters...;-).\npyPdf is a very simple pure-Python library that's a good starting point for playing around with PDF files. Its \"text extraction\" function is quite elementary and does nothing but concatenate the arguments of a few text-drawing commands; you'll see that suffices on some docs, and is quite unusable on others, but at least it's a start. As distributed, pyPdf does just about nothing with colors, but with some hacking that could be remedied.\nreportlab's powerful Python library is entirely focused on generating new PDFs, not on interpreting or modifying existing ones. At the other extreme, pure Python library pdfminer entirely focusing on parsing PDF files; it does do some clustering to try and reconstruct text in cases in which simpler libraries would be stumped.\nI don't know of an existing library that performs the transformational tasks you desire, but it should be feasible to mix and match some of these existing ones to get most of it done... good luck!\n",
"Highlight is possible in pdf file using PDF annotations but doing it natively is not that easy job. If any of the mentioned library provide such facility is something that you may look for.\n"
] |
[
16,
0
] |
[] |
[] |
[
"fonts",
"pdf",
"python"
] |
stackoverflow_0001283065_fonts_pdf_python.txt
|
Q:
Why can't you add attributes to object in python?
(Written in Python shell)
>>> o = object()
>>> o.test = 1
Traceback (most recent call last):
File "<pyshell#45>", line 1, in <module>
o.test = 1
AttributeError: 'object' object has no attribute 'test'
>>> class test1:
pass
>>> t = test1()
>>> t.test
Traceback (most recent call last):
File "<pyshell#50>", line 1, in <module>
t.test
AttributeError: test1 instance has no attribute 'test'
>>> t.test = 1
>>> t.test
1
>>> class test2(object):
pass
>>> t = test2()
>>> t.test = 1
>>> t.test
1
>>>
Why doesn't object allow you to add attributes to it?
A:
Notice that an object instance has no __dict__ attribute:
>>> dir(object())
['__class__', '__delattr__', '__doc__', '__getattribute__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__']
An example to illustrate this behavior in a derived class:
>>> class Foo(object):
... __slots__ = {}
...
>>> f = Foo()
>>> f.bar = 42
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Foo' object has no attribute 'bar'
Quoting from the docs on slots:
[...] The __slots__ declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because __dict__ is not created for each instance.
EDIT: To answer ThomasH from the comments, OP's test class is an "old-style" class. Try:
>>> class test: pass
...
>>> getattr(test(), '__dict__')
{}
>>> getattr(object(), '__dict__')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute '__dict__'
and you'll notice there is a __dict__ instance. The object class may not have a __slots__ defined, but the result is the same: lack of a __dict__, which is what prevents dynamic assignment of an attribute. I've reorganized my answer to make this clearer (move the second paragraph to the top).
A:
Good question, my guess is that it has to do with the fact that object is a built-in/extension type.
>>> class test(object):
... pass
...
>>> test.test = 1
>>> object.test = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'object'
IIRC, this has to do with the presence of a __dict__ attribute or, more correctly, setattr() blowing up when the object doesn't have a __dict__ attribute.
|
Why can't you add attributes to object in python?
|
(Written in Python shell)
>>> o = object()
>>> o.test = 1
Traceback (most recent call last):
File "<pyshell#45>", line 1, in <module>
o.test = 1
AttributeError: 'object' object has no attribute 'test'
>>> class test1:
pass
>>> t = test1()
>>> t.test
Traceback (most recent call last):
File "<pyshell#50>", line 1, in <module>
t.test
AttributeError: test1 instance has no attribute 'test'
>>> t.test = 1
>>> t.test
1
>>> class test2(object):
pass
>>> t = test2()
>>> t.test = 1
>>> t.test
1
>>>
Why doesn't object allow you to add attributes to it?
|
[
"Notice that an object instance has no __dict__ attribute:\n>>> dir(object())\n['__class__', '__delattr__', '__doc__', '__getattribute__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__']\n\nAn example to illustrate this behavior in a derived class:\n>>> class Foo(object):\n... __slots__ = {}\n...\n>>> f = Foo()\n>>> f.bar = 42\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'Foo' object has no attribute 'bar'\n\nQuoting from the docs on slots:\n\n[...] The __slots__ declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because __dict__ is not created for each instance.\n\nEDIT: To answer ThomasH from the comments, OP's test class is an \"old-style\" class. Try:\n>>> class test: pass\n...\n>>> getattr(test(), '__dict__')\n{}\n>>> getattr(object(), '__dict__')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'object' object has no attribute '__dict__'\n\nand you'll notice there is a __dict__ instance. The object class may not have a __slots__ defined, but the result is the same: lack of a __dict__, which is what prevents dynamic assignment of an attribute. I've reorganized my answer to make this clearer (move the second paragraph to the top).\n",
"Good question, my guess is that it has to do with the fact that object is a built-in/extension type.\n>>> class test(object):\n... pass\n...\n>>> test.test = 1\n>>> object.test = 1\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: can't set attributes of built-in/extension type 'object'\n\nIIRC, this has to do with the presence of a __dict__ attribute or, more correctly, setattr() blowing up when the object doesn't have a __dict__ attribute.\n"
] |
[
57,
4
] |
[] |
[] |
[
"attributes",
"instances",
"python"
] |
stackoverflow_0001285269_attributes_instances_python.txt
|
Q:
DOCTEST==argv[0] as a convention?
In a bit of Python I'm writing (a command line and filter testing tool: claft) I wanted a simple way to invoke the built-in test suite (doctest) and I decided on the following:
if 'DOCTEST' in os.environ and os.environ['DOCTEST']==sys.argv[0]:
_runDocTests()
sys.exit()
Thus if the DOCTEST variable is set for some other program I'll just ignore it. In fact my test for this is just: DOCTEST=./claft ./claft or if I want to be verbose I can use: DOCTEST=./claft VERBOSE=1 ./claft So even if I leave DOCTEST=./claft in my environment the test code will only run when I invoke my program from within its own directory. If I switch to one of my test suites and invoke it using a relative PATH then I'm safe from inadvertently triggering this function.
Has anyone else used this sort of convention?
What are other suggestions or best practices for avoiding conflicts among environment variable names? For providing "hidden" access to test harness functionality?
(Also, if anyone wants to play with claft please feel free to give it a spin. It's pretty ugly code for now, and is barely a proof of concept. But it is minimally functional. It's also been a nice way to teach myself how to use Mercurial and bitbucket. The wiki and issue tracking are the best places to post feedback about claft).
A:
Since you're already doing command-line parsing, why not just add a --selftest option? You won't have to worry about any conflicts that way, and invocation will be easier.
A:
Another hackish way to avoid namespace conflicts with the environment: looks for myprogname_DEBUG or the like.
|
DOCTEST==argv[0] as a convention?
|
In a bit of Python I'm writing (a command line and filter testing tool: claft) I wanted a simple way to invoke the built-in test suite (doctest) and I decided on the following:
if 'DOCTEST' in os.environ and os.environ['DOCTEST']==sys.argv[0]:
_runDocTests()
sys.exit()
Thus if the DOCTEST variable is set for some other program I'll just ignore it. In fact my test for this is just: DOCTEST=./claft ./claft or if I want to be verbose I can use: DOCTEST=./claft VERBOSE=1 ./claft So even if I leave DOCTEST=./claft in my environment the test code will only run when I invoke my program from within its own directory. If I switch to one of my test suites and invoke it using a relative PATH then I'm safe from inadvertently triggering this function.
Has anyone else used this sort of convention?
What are other suggestions or best practices for avoiding conflicts among environment variable names? For providing "hidden" access to test harness functionality?
(Also, if anyone wants to play with claft please feel free to give it a spin. It's pretty ugly code for now, and is barely a proof of concept. But it is minimally functional. It's also been a nice way to teach myself how to use Mercurial and bitbucket. The wiki and issue tracking are the best places to post feedback about claft).
|
[
"Since you're already doing command-line parsing, why not just add a --selftest option? You won't have to worry about any conflicts that way, and invocation will be easier.\n",
"Another hackish way to avoid namespace conflicts with the environment: looks for myprogname_DEBUG or the like. \n"
] |
[
3,
0
] |
[] |
[] |
[
"doctest",
"python",
"testing",
"unit_testing"
] |
stackoverflow_0001281385_doctest_python_testing_unit_testing.txt
|
Q:
How will Python and Ruby applications be affected by .NET?
I'm curious about how .NET will affect Python and Ruby applications.
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts?
A:
I can't say anything about IronRuby, but most python implementations (like IronPython, Jython and PyPy) try to be as true to the CPython implementation as possible. IronPython is quickly becoming one of the best in this respect though, and there is a lot of traffic on Planet Python about it.
The main thing that will encourage developers to write code that's different from what they would write in CPython is the lack of C extension modules like NumPy (This is a problem in Jython and PyPy as well).
An interesting project to keep your eye on is IronClad, which will let you call C extension modules from within IronPython. This should eventually mean that you can develop code under CPython, using whatever modules you like, and it will run unmodified on IronPython.
http://www.resolversystems.com/documentation/index.php/Ironclad
So to answer your questions:
It should be easy enough to write IronPython applications that work on CPython as well, but I would probably aim to go the other way around: CPython programs that work on IronPython as well. That way, if it doesn't work then it's more likely to be a known bug with a known work-around.
The advantage of IronPython et al existing is that they provide alternative implementations of the language, which are sometimes useful for spotting bugs in CPython. They also provide alternative methods for deploying your Python applications, if for some reason you find yourself in a situation (like silverlight) where distributing the CPython implementation with your application is not appropriate.
A:
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
IronRuby currently ships with most of the core ruby standard library, and support for ruby gems.
This means that it will support pretty much any native ruby app that doesn't rely on C extensions.
The flipside is that it will be possible to write native ruby apps in IronRuby that don't rely on the CLR, and those will be portable to MRI.
Whether or not people choose to create or use extensions for their apps using the CLR is the same question as to whether people create or use C extensions for MRI - one is no more portable than the other.
There is a side-question of "because it is so much easier to create IronRuby extensions in C# than it is to create CRuby extensions in C, will people create extensions where they should be sticking to native ruby code?", but that's entirely subjective.
On the whole though, I think anything that makes creating extensions easier is a big win.
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts?
Performance: IronRuby is already faster for the most part than MRI 1.8, and isn't far off MRI 1.9, and things will only improve in future. I think python is similar in this respect.
Deployment: As people have mentioned, running a native ruby cross-platform rails app inside IIS is an attractive proposition to some windows-based developers, as it lets them better integrate with existing servers/management infrastructure/etc
Stability: While MRI 1.9 is much better than 1.8 was, I don't think anyone could disagree that CLR has a much better garbage collector and base runtime than C ruby does.
A:
IronPython/IronRuby are built to work on the .net virtual machine, so they are as you say essentially platform specific.
Apparently they are compatible with Python and Ruby as long as you don't use any of the .net framework in your programs.
A:
If you create a library or framework, people can use it on .NET with their .NET code. That's pretty cool for them, and for you!
When developing an application, if you use .NET's facilities with abandon then you lose "cross-platformity", which is not always an issue.
If you wrap these uses with an internal API, you can replace the .NET implementations later with pure-Python, wrapped C (for CPython), or Java (for Jython) later.
A:
According to the Mono page, IronPython is compatible with Mono's implementation of the .Net runtime, so executables should work both on Windows and Linux.
A:
You answer your first question with the second one, if you don't use anything from .Net only the original libs provided by the implementation of the language, you could interpret your *.py or *.rb file with another implementation and it should work.
The advantage would be if your a .Net shop you usually take care of having the right framework installed on client machine etc... well if you want python or ruby code, you now need to support another "framework" need to distribute install, take care of version problem etc... So there 2 advantages, using .Net framework power inside another language + keep the distribution/maintenance as simple as possible.
A:
It would be cool to run Rails/Django under IIS rather then Apache/Mongrel type solutions
|
How will Python and Ruby applications be affected by .NET?
|
I'm curious about how .NET will affect Python and Ruby applications.
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts?
|
[
"I can't say anything about IronRuby, but most python implementations (like IronPython, Jython and PyPy) try to be as true to the CPython implementation as possible. IronPython is quickly becoming one of the best in this respect though, and there is a lot of traffic on Planet Python about it.\nThe main thing that will encourage developers to write code that's different from what they would write in CPython is the lack of C extension modules like NumPy (This is a problem in Jython and PyPy as well). \nAn interesting project to keep your eye on is IronClad, which will let you call C extension modules from within IronPython. This should eventually mean that you can develop code under CPython, using whatever modules you like, and it will run unmodified on IronPython.\nhttp://www.resolversystems.com/documentation/index.php/Ironclad\nSo to answer your questions:\nIt should be easy enough to write IronPython applications that work on CPython as well, but I would probably aim to go the other way around: CPython programs that work on IronPython as well. That way, if it doesn't work then it's more likely to be a known bug with a known work-around.\nThe advantage of IronPython et al existing is that they provide alternative implementations of the language, which are sometimes useful for spotting bugs in CPython. They also provide alternative methods for deploying your Python applications, if for some reason you find yourself in a situation (like silverlight) where distributing the CPython implementation with your application is not appropriate.\n",
"\nWill applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?\n\nIronRuby currently ships with most of the core ruby standard library, and support for ruby gems.\nThis means that it will support pretty much any native ruby app that doesn't rely on C extensions.\nThe flipside is that it will be possible to write native ruby apps in IronRuby that don't rely on the CLR, and those will be portable to MRI. \nWhether or not people choose to create or use extensions for their apps using the CLR is the same question as to whether people create or use C extensions for MRI - one is no more portable than the other.\nThere is a side-question of \"because it is so much easier to create IronRuby extensions in C# than it is to create CRuby extensions in C, will people create extensions where they should be sticking to native ruby code?\", but that's entirely subjective.\nOn the whole though, I think anything that makes creating extensions easier is a big win.\n\n\nIf they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts?\n\n\nPerformance: IronRuby is already faster for the most part than MRI 1.8, and isn't far off MRI 1.9, and things will only improve in future. I think python is similar in this respect.\nDeployment: As people have mentioned, running a native ruby cross-platform rails app inside IIS is an attractive proposition to some windows-based developers, as it lets them better integrate with existing servers/management infrastructure/etc\nStability: While MRI 1.9 is much better than 1.8 was, I don't think anyone could disagree that CLR has a much better garbage collector and base runtime than C ruby does.\n\n",
"IronPython/IronRuby are built to work on the .net virtual machine, so they are as you say essentially platform specific. \nApparently they are compatible with Python and Ruby as long as you don't use any of the .net framework in your programs. \n",
"If you create a library or framework, people can use it on .NET with their .NET code. That's pretty cool for them, and for you!\nWhen developing an application, if you use .NET's facilities with abandon then you lose \"cross-platformity\", which is not always an issue.\nIf you wrap these uses with an internal API, you can replace the .NET implementations later with pure-Python, wrapped C (for CPython), or Java (for Jython) later.\n",
"According to the Mono page, IronPython is compatible with Mono's implementation of the .Net runtime, so executables should work both on Windows and Linux.\n",
"You answer your first question with the second one, if you don't use anything from .Net only the original libs provided by the implementation of the language, you could interpret your *.py or *.rb file with another implementation and it should work.\nThe advantage would be if your a .Net shop you usually take care of having the right framework installed on client machine etc... well if you want python or ruby code, you now need to support another \"framework\" need to distribute install, take care of version problem etc... So there 2 advantages, using .Net framework power inside another language + keep the distribution/maintenance as simple as possible.\n",
"It would be cool to run Rails/Django under IIS rather then Apache/Mongrel type solutions\n"
] |
[
5,
2,
1,
1,
1,
0,
0
] |
[] |
[] |
[
".net",
"ironpython",
"ironruby",
"python",
"ruby"
] |
stackoverflow_0000466897_.net_ironpython_ironruby_python_ruby.txt
|
Q:
What would you call a non-persistent data structure that allows persistent operations?
I've got a class that is essentially mutable, but allows for some "persistent-like" operations. For example, I can mutate the object like this (in Python):
# create an object with y equal to 3 and z equal to "foobar"
x = MyDataStructure(y = 3, z = "foobar")
x.y = 4
However, in lieu of doing things this way, there are a couple of methods that instead make a copy and then mutate it:
x = MyDataStructure(y=3, z="foobar")
# a is just like x, but with y equal to 4.
a = x.using(y = 4)
This is making a duplicate of x with different values. Apparently, this doesn't meet the definition of partially persistent given by wikipedia.
So what would you call a class like this? QuasiPersistentObject? PersistableObject? SortOfPersistentObject? Better yet, are there any technical names for this?
A:
I call this kind of data Persistable but not sure if it's a word
.
A:
It's just a optimized copy, I'd rather rename the operation to reflect that.
a = x.copy_with(y=4)
|
What would you call a non-persistent data structure that allows persistent operations?
|
I've got a class that is essentially mutable, but allows for some "persistent-like" operations. For example, I can mutate the object like this (in Python):
# create an object with y equal to 3 and z equal to "foobar"
x = MyDataStructure(y = 3, z = "foobar")
x.y = 4
However, in lieu of doing things this way, there are a couple of methods that instead make a copy and then mutate it:
x = MyDataStructure(y=3, z="foobar")
# a is just like x, but with y equal to 4.
a = x.using(y = 4)
This is making a duplicate of x with different values. Apparently, this doesn't meet the definition of partially persistent given by wikipedia.
So what would you call a class like this? QuasiPersistentObject? PersistableObject? SortOfPersistentObject? Better yet, are there any technical names for this?
|
[
"I call this kind of data Persistable but not sure if it's a word\n.\n",
"It's just a optimized copy, I'd rather rename the operation to reflect that. \na = x.copy_with(y=4)\n\n"
] |
[
2,
2
] |
[] |
[] |
[
"data_structures",
"functional_programming",
"naming",
"persistence",
"python"
] |
stackoverflow_0001285657_data_structures_functional_programming_naming_persistence_python.txt
|
Q:
How To Clone/Mutate A Model In Django Without Subclassing
'Ello, all. I'm trying to create a model in Django based on - but not subclassing or having a DB relation to - another model. My original model looks something like this: it stores some data with a date/time stamp.
class Entry(Model):
data1 = FloatField()
data2 = FloatField()
entered = DateTimeField()
I'd also like to aggregate the numeric data for each of these entries on a daily basis, using a model that is almost identical. For the DailyAvg() variant, we'll use a DateField() instead of a DateTimeField(), as there'll only be one average per day:
class EntryDailyAvg(Model):
data1 = FloatField()
data2 = FloatField()
entered = DateField()
Thus the problem: There's going to be a lot of these data classes that will need a corresponding daily average model stored in the DB, and the definitions are almost identical. I could just re-type a definition for an equivalent DailyAvg() class for each data class, but that seems to violate DRY, and is also a huge pain in the arse. I also can't have EntryDailyAvg subclass Entry, as Django will save a new Entry base every time I save a new EntryDailyAvg.
Is there a way to automaticaly (-magically?) generate the DailyAvg() class?
Thanks in advance!
A:
What if you create a AbstractEntry class with all the data1 stuff and then, two subclasses: Entry and EntryDailyAvg.
Check the docs for info on how to tell django that one class is abstract.
|
How To Clone/Mutate A Model In Django Without Subclassing
|
'Ello, all. I'm trying to create a model in Django based on - but not subclassing or having a DB relation to - another model. My original model looks something like this: it stores some data with a date/time stamp.
class Entry(Model):
data1 = FloatField()
data2 = FloatField()
entered = DateTimeField()
I'd also like to aggregate the numeric data for each of these entries on a daily basis, using a model that is almost identical. For the DailyAvg() variant, we'll use a DateField() instead of a DateTimeField(), as there'll only be one average per day:
class EntryDailyAvg(Model):
data1 = FloatField()
data2 = FloatField()
entered = DateField()
Thus the problem: There's going to be a lot of these data classes that will need a corresponding daily average model stored in the DB, and the definitions are almost identical. I could just re-type a definition for an equivalent DailyAvg() class for each data class, but that seems to violate DRY, and is also a huge pain in the arse. I also can't have EntryDailyAvg subclass Entry, as Django will save a new Entry base every time I save a new EntryDailyAvg.
Is there a way to automaticaly (-magically?) generate the DailyAvg() class?
Thanks in advance!
|
[
"What if you create a AbstractEntry class with all the data1 stuff and then, two subclasses: Entry and EntryDailyAvg.\nCheck the docs for info on how to tell django that one class is abstract.\n"
] |
[
2
] |
[] |
[] |
[
"aggregation",
"django",
"django_models",
"dry",
"python"
] |
stackoverflow_0001285977_aggregation_django_django_models_dry_python.txt
|
Q:
Using Python Mechanize like "Tamper Data"
I'm writing a web testing script with python (2.6) and mechanize (0.1.11). The page I'm working with has an html form with a select field like this:
<select name="field1" size="1">
<option value="A" selected>A</option>
<option value="B">B</option>
<option value="C">C</option>
<option value="D">D</option>
</select>
In mechanize, if I try something like this:
browser.form['field1'] = ['E']
Then I get an error: ClientForm.ItemNotFoundError: insufficient items with name 'E'
I can do this manually with the "Tamper Data" firefox extension. Is there a way to do this with python and mechanize? Can I somehow convince mechanize that the form actually has the value I want to submit?
A:
After poking around with the guts of ClientForm, it looks like you can trick it into adding another item.
For a select field, something like this seems to work:
xitem = ClientForm.Item(browser.form.find_control(name="field1"),
{'contents':'E', 'value':'E', 'label':'E'})
Similarly, for a radio button control
xitem = ClientForm.Item(browser.form.find_control(name="field2"),
{'type':'radio', 'name':'field2', 'value':'X'})
Note that the Item initializer will automatically update the list of items for the specified control, so you only need to create the item properly for it to appear.
|
Using Python Mechanize like "Tamper Data"
|
I'm writing a web testing script with python (2.6) and mechanize (0.1.11). The page I'm working with has an html form with a select field like this:
<select name="field1" size="1">
<option value="A" selected>A</option>
<option value="B">B</option>
<option value="C">C</option>
<option value="D">D</option>
</select>
In mechanize, if I try something like this:
browser.form['field1'] = ['E']
Then I get an error: ClientForm.ItemNotFoundError: insufficient items with name 'E'
I can do this manually with the "Tamper Data" firefox extension. Is there a way to do this with python and mechanize? Can I somehow convince mechanize that the form actually has the value I want to submit?
|
[
"After poking around with the guts of ClientForm, it looks like you can trick it into adding another item.\nFor a select field, something like this seems to work:\nxitem = ClientForm.Item(browser.form.find_control(name=\"field1\"), \n {'contents':'E', 'value':'E', 'label':'E'})\n\nSimilarly, for a radio button control\nxitem = ClientForm.Item(browser.form.find_control(name=\"field2\"),\n {'type':'radio', 'name':'field2', 'value':'X'})\n\nNote that the Item initializer will automatically update the list of items for the specified control, so you only need to create the item properly for it to appear.\n"
] |
[
7
] |
[] |
[] |
[
"forms",
"mechanize",
"python",
"tampering"
] |
stackoverflow_0001285895_forms_mechanize_python_tampering.txt
|
Q:
How can I check to see if a Python script was started interactively?
I'd like for a script of mine to have 2 behaviours, one when started as a scheduled task, and another if started manually. How could I test for interactiveness?
EDIT: this could either be a cron job, or started by a windows batch file, through the scheduled tasks.
A:
You should simply add a command-line switch in the scheduled task, and check for it in your script, modifying the behavior as appropriate. Explicit is better than implicit.
One benefit to this design: you'll be able to test both behaviors, regardless of how you actually invoked the script.
A:
If you want to know if you're reading from a terminal (not clear if that is enough of a distinction, please clarify) you can use
sys.stdin.isatty()
A:
I'd just add a command line switch when you're calling it with cron:
python yourscript.py -scheduled
then in your program
import sys
if "-scheduled" in sys.argv:
#--non-interactive code--
else:
#--interactive code--
|
How can I check to see if a Python script was started interactively?
|
I'd like for a script of mine to have 2 behaviours, one when started as a scheduled task, and another if started manually. How could I test for interactiveness?
EDIT: this could either be a cron job, or started by a windows batch file, through the scheduled tasks.
|
[
"You should simply add a command-line switch in the scheduled task, and check for it in your script, modifying the behavior as appropriate. Explicit is better than implicit.\nOne benefit to this design: you'll be able to test both behaviors, regardless of how you actually invoked the script.\n",
"If you want to know if you're reading from a terminal (not clear if that is enough of a distinction, please clarify) you can use\nsys.stdin.isatty()\n\n",
"I'd just add a command line switch when you're calling it with cron:\npython yourscript.py -scheduled\nthen in your program\nimport sys\n\nif \"-scheduled\" in sys.argv:\n #--non-interactive code--\nelse: \n #--interactive code--\n\n"
] |
[
11,
7,
0
] |
[] |
[] |
[
"interactive",
"python"
] |
stackoverflow_0001285024_interactive_python.txt
|
Q:
Why does this python method gives an error saying global name not defined?
I have a single code file for my Google App Engine project. This simple file has one class, and inside it a few methods.
Why does this python method gives an error saying global name not defined?
Erro NameError: global name 'gen_groups' is not defined
import wsgiref.handlers
from google.appengine.ext import webapp
from django.utils import simplejson
class MainHandler(webapp.RequestHandler):
def gen_groups(self, lines):
""" Returns contiguous groups of lines in a file """
group = []
for line in lines:
line = line.strip()
if not line and group:
yield group
group = []
elif line:
group.append(line)
def gen_albums(self, groups):
""" Given groups of lines in an album file, returns albums """
for group in groups:
title = group.pop(0)
songinfo = zip(*[iter(group)]*2)
songs = [dict(title=title,url=url) for title,url in songinfo]
album = dict(title=title, songs=songs)
yield album
def get(self):
input = open('links.txt')
groups = gen_groups(input)
albums = gen_albums(groups)
print simplejson.dumps(list(albums))
def main():
application = webapp.WSGIApplication([('/', MainHandler)],
debug=True)
wsgiref.handlers.CGIHandler().run(application)
if __name__ == '__main__':
main()
A:
It's an instance method, you need to use self.gen_groups(...) and self.gen_albums(...).
Edit: I'm guessing the TypeError you are getting now is because you removed the 'self' argument from gen_groups(). You'll need to put it back in:
def get_groups(self, lines):
...
A:
You need to call it explicitly with an instance:
groups = self.gen_groups(input)
Similarly for some of the other calls you're making in there, e.g. gen_album.
Also, see Knowing When to Use self and __init__ for more information.
A:
You have to use it like this:
self.gen_groups(input)
There is not implicit "self" in Python.
|
Why does this python method gives an error saying global name not defined?
|
I have a single code file for my Google App Engine project. This simple file has one class, and inside it a few methods.
Why does this python method gives an error saying global name not defined?
Erro NameError: global name 'gen_groups' is not defined
import wsgiref.handlers
from google.appengine.ext import webapp
from django.utils import simplejson
class MainHandler(webapp.RequestHandler):
def gen_groups(self, lines):
""" Returns contiguous groups of lines in a file """
group = []
for line in lines:
line = line.strip()
if not line and group:
yield group
group = []
elif line:
group.append(line)
def gen_albums(self, groups):
""" Given groups of lines in an album file, returns albums """
for group in groups:
title = group.pop(0)
songinfo = zip(*[iter(group)]*2)
songs = [dict(title=title,url=url) for title,url in songinfo]
album = dict(title=title, songs=songs)
yield album
def get(self):
input = open('links.txt')
groups = gen_groups(input)
albums = gen_albums(groups)
print simplejson.dumps(list(albums))
def main():
application = webapp.WSGIApplication([('/', MainHandler)],
debug=True)
wsgiref.handlers.CGIHandler().run(application)
if __name__ == '__main__':
main()
|
[
"It's an instance method, you need to use self.gen_groups(...) and self.gen_albums(...).\nEdit: I'm guessing the TypeError you are getting now is because you removed the 'self' argument from gen_groups(). You'll need to put it back in:\ndef get_groups(self, lines):\n ...\n\n",
"You need to call it explicitly with an instance:\ngroups = self.gen_groups(input)\n\nSimilarly for some of the other calls you're making in there, e.g. gen_album.\nAlso, see Knowing When to Use self and __init__ for more information.\n",
"You have to use it like this:\nself.gen_groups(input)\n\nThere is not implicit \"self\" in Python.\n"
] |
[
5,
1,
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001286235_google_app_engine_python.txt
|
Q:
Django - String to Date - Date to UNIX Timestamp
I need to convert a date from a string (entered into a url) in the form of 12/09/2008-12:40:49. Obviously, I'll need a UNIX Timestamp at the end of it, but before I get that I need the Date object first.
How do I do this? I can't find any resources that show the date in that format? Thank you.
A:
You need the strptime method. If you're on Python 2.5 or higher, this is a method on datetime, otherwise you have to use a combination of the time and datetime modules to achieve this.
Python 2.5 up:
from datetime import datetime
dt = datetime.strptime(s, "%d/%m/%Y-%H:%M:%S")
below 2.5:
from datetime import datetime
from time import strptime
dt = datetime(*strptime(s, "%d/%m/%Y-%H:%M:%S")[0:6])
A:
You can use the time.strptime() method to parse a date string. This will return a time_struct that you can pass to time.mktime() (when the string represents a local time) or calendar.timegm() (when the string is a UTC time) to get the number of seconds since the epoch.
|
Django - String to Date - Date to UNIX Timestamp
|
I need to convert a date from a string (entered into a url) in the form of 12/09/2008-12:40:49. Obviously, I'll need a UNIX Timestamp at the end of it, but before I get that I need the Date object first.
How do I do this? I can't find any resources that show the date in that format? Thank you.
|
[
"You need the strptime method. If you're on Python 2.5 or higher, this is a method on datetime, otherwise you have to use a combination of the time and datetime modules to achieve this.\nPython 2.5 up:\nfrom datetime import datetime\ndt = datetime.strptime(s, \"%d/%m/%Y-%H:%M:%S\")\n\nbelow 2.5:\nfrom datetime import datetime\nfrom time import strptime\ndt = datetime(*strptime(s, \"%d/%m/%Y-%H:%M:%S\")[0:6])\n\n",
"You can use the time.strptime() method to parse a date string. This will return a time_struct that you can pass to time.mktime() (when the string represents a local time) or calendar.timegm() (when the string is a UTC time) to get the number of seconds since the epoch.\n"
] |
[
12,
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001286619_django_python.txt
|
Q:
What is the fastest way to draw an image in Gtk+?
I have an image/pixbuf that I want to draw into a gtk.DrawingArea and refresh frequently, so the blitting operation has to be fast. Doing it the easy way:
def __init__(self):
self.drawing_area = gtk.DrawingArea()
self.image = gtk.gdk.pixbuf_new_from_file("image.png")
def area_expose_cb(self, area, event):
self.drawing_area.window.draw_pixbuf(self.gc, self.image, 0, 0, x, y)
However leads to very slow performance, likely caused by the pixbuf not being in the displays color format.
I had no success with Cairo either, as it seems limited to 24/32bit formats and doesn't have a 16bit format (FORMAT_RGB16_565 is unsupported and deprecated).
What alternatives are there to drawing pictures quickly in Gtk+?
A:
Try creating Pixmap that uses the same colormap as your drawing area.
dr_area.realize()
self.gc = dr_area.get_style().fg_gc[gtk.STATE_NORMAL]
img = gtk.gdk.pixbuf_new_from_file("image.png")
self.image = gtk.gdk.Pixmap(dr_area.window, img.get_width(), img.get_height())
self.image.draw_pixbuf(self.gc, img, 0, 0, 0, 0)
and drawing it to the screen using
dr_area.window.draw_drawable(self.gc, self.image, 0, 0, x, y, *self.image.get_size())
A:
Are you really not generating enough raw speed/throughput? Or is it just that you're seeing flickering?
If it's the latter, perhaps you should investigate double buffering for perfomring your updates instead? Basically the idea is to draw to an invisible buffer then tell the graphics card to use the new buffer.
Maybe check out this page which has some information on double buffering.
A:
It may be worth doing some benchmarking - if you draw with a small area is it still slow?
If it is, it may be worth asking on pygtk or gtk mailing lists...
|
What is the fastest way to draw an image in Gtk+?
|
I have an image/pixbuf that I want to draw into a gtk.DrawingArea and refresh frequently, so the blitting operation has to be fast. Doing it the easy way:
def __init__(self):
self.drawing_area = gtk.DrawingArea()
self.image = gtk.gdk.pixbuf_new_from_file("image.png")
def area_expose_cb(self, area, event):
self.drawing_area.window.draw_pixbuf(self.gc, self.image, 0, 0, x, y)
However leads to very slow performance, likely caused by the pixbuf not being in the displays color format.
I had no success with Cairo either, as it seems limited to 24/32bit formats and doesn't have a 16bit format (FORMAT_RGB16_565 is unsupported and deprecated).
What alternatives are there to drawing pictures quickly in Gtk+?
|
[
"Try creating Pixmap that uses the same colormap as your drawing area.\ndr_area.realize()\nself.gc = dr_area.get_style().fg_gc[gtk.STATE_NORMAL]\nimg = gtk.gdk.pixbuf_new_from_file(\"image.png\")\nself.image = gtk.gdk.Pixmap(dr_area.window, img.get_width(), img.get_height())\nself.image.draw_pixbuf(self.gc, img, 0, 0, 0, 0)\n\nand drawing it to the screen using\ndr_area.window.draw_drawable(self.gc, self.image, 0, 0, x, y, *self.image.get_size())\n\n",
"Are you really not generating enough raw speed/throughput? Or is it just that you're seeing flickering?\nIf it's the latter, perhaps you should investigate double buffering for perfomring your updates instead? Basically the idea is to draw to an invisible buffer then tell the graphics card to use the new buffer.\nMaybe check out this page which has some information on double buffering.\n",
"It may be worth doing some benchmarking - if you draw with a small area is it still slow?\nIf it is, it may be worth asking on pygtk or gtk mailing lists...\n"
] |
[
7,
2,
0
] |
[] |
[] |
[
"cairo",
"gtk",
"pygtk",
"python"
] |
stackoverflow_0000959675_cairo_gtk_pygtk_python.txt
|
Q:
What is the correct way to clean up when using PyOpenAL?
I'm looking at PyOpenAL for some sound needs with Python (obviously). Documentation is sparse (consisting of a demo script, which doesn't work unmodified) but as far as I can tell, there are two layers. Direct wrapping of OpenAL calls and a lightweight 'pythonic' wrapper - it is the latter I'm concerned with. Specifically, how do you clean up correctly? If we take a small example:
import time
import pyopenal
pyopenal.init(None)
l = pyopenal.Listener(22050)
b = pyopenal.WaveBuffer("somefile.wav")
s = pyopenal.Source()
s.buffer = b
s.looping = False
s.play()
while s.get_state() == pyopenal.AL_PLAYING:
time.sleep(1)
pyopenal.quit()
As it is, a message is printed on to the terminal along the lines of "one source not deleted, one buffer not deleted". But I am assuming the we can't use the native OpenAL calls with these objects, so how do I clean up correctly?
EDIT:
I eventually just ditched pyopenal and wrote a small ctypes wrapper over OpenAL and alure (pyopenal exposes the straight OpenAL functions, but I kept getting SIGFPE). Still curious as to what I was supposed to do here.
A:
#relese reference to l b and s
del l
del b
del s
#now the WaveBuffer and Source should be destroyed, so we could:
pyopenal.quit()
Probably de destructor of pyopenal calls quit() before exit so you dont need to call it yourself.
|
What is the correct way to clean up when using PyOpenAL?
|
I'm looking at PyOpenAL for some sound needs with Python (obviously). Documentation is sparse (consisting of a demo script, which doesn't work unmodified) but as far as I can tell, there are two layers. Direct wrapping of OpenAL calls and a lightweight 'pythonic' wrapper - it is the latter I'm concerned with. Specifically, how do you clean up correctly? If we take a small example:
import time
import pyopenal
pyopenal.init(None)
l = pyopenal.Listener(22050)
b = pyopenal.WaveBuffer("somefile.wav")
s = pyopenal.Source()
s.buffer = b
s.looping = False
s.play()
while s.get_state() == pyopenal.AL_PLAYING:
time.sleep(1)
pyopenal.quit()
As it is, a message is printed on to the terminal along the lines of "one source not deleted, one buffer not deleted". But I am assuming the we can't use the native OpenAL calls with these objects, so how do I clean up correctly?
EDIT:
I eventually just ditched pyopenal and wrote a small ctypes wrapper over OpenAL and alure (pyopenal exposes the straight OpenAL functions, but I kept getting SIGFPE). Still curious as to what I was supposed to do here.
|
[
"#relese reference to l b and s\ndel l\ndel b\ndel s \n#now the WaveBuffer and Source should be destroyed, so we could:\npyopenal.quit()\n\nProbably de destructor of pyopenal calls quit() before exit so you dont need to call it yourself.\n"
] |
[
1
] |
[] |
[] |
[
"openal",
"python"
] |
stackoverflow_0000787850_openal_python.txt
|
Q:
Where can i get technical information on how the internals of Django works?
Where can i get the technical manuals/details of how django internals work, i.e. i would like to know when a request comes in from a client;
which django function receives it?
what middleware get called?
how is the request object created? and what class/function creates it?
What function maps the request to the necessary view?
How does your code/view get called
?
etc...
Paul.G
A:
Besides reading the source, here's a few articles I've tagged and bookmarked from a little while ago:
How Django processes a request
Django Request Response processing
Django internals: authentication
How the Heck do Django Models Work
I've found James Bennet's blog to be a a great source for information about django workings. His book, Practical Django Projects, is also a must read -- though it isn't focused on internals, you'll still learn about how django works.
A:
"Use the source, Luke." The beauty of open source software is that you can view (and modify) the code yourself.
A:
Easiest way to understand the internals of django, is by reading a book specifically written for that.
Read Pro Django. It provides you a good in depth understanding of the meta programming first and demonstrates how it is used in django models, to create them dynamically.
It deals similarly with many other python concepts and how django uses it.
A:
Simply reading the source might be a bit overwhelming, especially since the upper-most part is a bit confusing (how the webserver hands off the request to Django code). I find a good way to get started reading the code is to set a debugger breakpoint in your view function:
def time(request):
import pdb; pdb.set_trace()
return HttpResponse(blah blah)
then hit your URL. When the debugger breaks at your breakpoint, examine the stack:
(Pdb) where
c:\abcxyzproject\django\core\management\commands\runserver.py(60)inner_run()
-> run(addr, int(port), handler)
c:\abcxyzproject\django\core\servers\basehttp.py(698)run()
-> httpd.serve_forever()
c:\python25\lib\socketserver.py(201)serve_forever()
-> self.handle_request()
c:\python25\lib\socketserver.py(222)handle_request()
-> self.process_request(request, client_address)
c:\python25\lib\socketserver.py(241)process_request()
-> self.finish_request(request, client_address)
c:\python25\lib\socketserver.py(254)finish_request()
-> self.RequestHandlerClass(request, client_address, self)
c:\abcxyzproject\django\core\servers\basehttp.py(560)__init__()
-> BaseHTTPRequestHandler.__init__(self, *args, **kwargs)
c:\python25\lib\socketserver.py(522)__init__()
-> self.handle()
c:\abcxyzproject\django\core\servers\basehttp.py(605)handle()
-> handler.run(self.server.get_app())
c:\abcxyzproject\django\core\servers\basehttp.py(279)run()
-> self.result = application(self.environ, self.start_response)
c:\abcxyzproject\django\core\servers\basehttp.py(651)__call__()
-> return self.application(environ, start_response)
c:\abcxyzproject\django\core\handlers\wsgi.py(241)__call__()
-> response = self.get_response(request)
c:\abcxyzproject\django\core\handlers\base.py(92)get_response()
-> response = callback(request, *callback_args, **callback_kwargs)
> c:\abcxyzproject\abcxyz\helpers\views.py(118)time()
-> return HttpResponse(
(Pdb)
Now you can see a summary of the path from the deepest part of the web server to your view function. Use the "up" command to move up the stack, and the "list" and "print" command to examine the code and variables at those stack frames.
A:
I doubt there are technical manuals on the subject. It might take a bit of digging, but the API documentation and the source code are your best bets for reliable, up-to-date information.
A:
The documentation often goes into detail when it has to in order to explain why things work the way they do. One of Django's design goals is to not rely on "magic" as much as possible. However, whenever Django does assume something (template locations within apps, for example) its clearly explained why in the documentation and it always occurs predictably.
Most of your questions would be answered by implementing a single page.
A request is made from the client for a particular url.
The url resolves what view to call based on the url pattern match.
The request is passed through the middleware.
The view is called and explicitly passed the request object.
The view explicitly calls the template you specify and passes it the context (variables) you specify.
Template context processors, if there are any, then add their own variables to the context.
The context is passed to the template and it is rendered.
The rendered template is returned to the client.
Django Documentation
Django Book
|
Where can i get technical information on how the internals of Django works?
|
Where can i get the technical manuals/details of how django internals work, i.e. i would like to know when a request comes in from a client;
which django function receives it?
what middleware get called?
how is the request object created? and what class/function creates it?
What function maps the request to the necessary view?
How does your code/view get called
?
etc...
Paul.G
|
[
"Besides reading the source, here's a few articles I've tagged and bookmarked from a little while ago:\n\nHow Django processes a request\nDjango Request Response processing\nDjango internals: authentication\nHow the Heck do Django Models Work\n\nI've found James Bennet's blog to be a a great source for information about django workings. His book, Practical Django Projects, is also a must read -- though it isn't focused on internals, you'll still learn about how django works.\n",
"\"Use the source, Luke.\" The beauty of open source software is that you can view (and modify) the code yourself.\n",
"Easiest way to understand the internals of django, is by reading a book specifically written for that.\nRead Pro Django. It provides you a good in depth understanding of the meta programming first and demonstrates how it is used in django models, to create them dynamically.\nIt deals similarly with many other python concepts and how django uses it.\n",
"Simply reading the source might be a bit overwhelming, especially since the upper-most part is a bit confusing (how the webserver hands off the request to Django code). I find a good way to get started reading the code is to set a debugger breakpoint in your view function:\ndef time(request):\n import pdb; pdb.set_trace() \n return HttpResponse(blah blah)\n\nthen hit your URL. When the debugger breaks at your breakpoint, examine the stack:\n(Pdb) where\n c:\\abcxyzproject\\django\\core\\management\\commands\\runserver.py(60)inner_run()\n-> run(addr, int(port), handler)\n c:\\abcxyzproject\\django\\core\\servers\\basehttp.py(698)run()\n-> httpd.serve_forever()\n c:\\python25\\lib\\socketserver.py(201)serve_forever()\n-> self.handle_request()\n c:\\python25\\lib\\socketserver.py(222)handle_request()\n-> self.process_request(request, client_address)\n c:\\python25\\lib\\socketserver.py(241)process_request()\n-> self.finish_request(request, client_address)\n c:\\python25\\lib\\socketserver.py(254)finish_request()\n-> self.RequestHandlerClass(request, client_address, self)\n c:\\abcxyzproject\\django\\core\\servers\\basehttp.py(560)__init__()\n-> BaseHTTPRequestHandler.__init__(self, *args, **kwargs)\n c:\\python25\\lib\\socketserver.py(522)__init__()\n-> self.handle()\n c:\\abcxyzproject\\django\\core\\servers\\basehttp.py(605)handle()\n-> handler.run(self.server.get_app())\n c:\\abcxyzproject\\django\\core\\servers\\basehttp.py(279)run()\n-> self.result = application(self.environ, self.start_response)\n c:\\abcxyzproject\\django\\core\\servers\\basehttp.py(651)__call__()\n-> return self.application(environ, start_response)\n c:\\abcxyzproject\\django\\core\\handlers\\wsgi.py(241)__call__()\n-> response = self.get_response(request)\n c:\\abcxyzproject\\django\\core\\handlers\\base.py(92)get_response()\n-> response = callback(request, *callback_args, **callback_kwargs)\n> c:\\abcxyzproject\\abcxyz\\helpers\\views.py(118)time()\n-> return HttpResponse(\n(Pdb)\n\nNow you can see a summary of the path from the deepest part of the web server to your view function. Use the \"up\" command to move up the stack, and the \"list\" and \"print\" command to examine the code and variables at those stack frames.\n",
"I doubt there are technical manuals on the subject. It might take a bit of digging, but the API documentation and the source code are your best bets for reliable, up-to-date information.\n",
"The documentation often goes into detail when it has to in order to explain why things work the way they do. One of Django's design goals is to not rely on \"magic\" as much as possible. However, whenever Django does assume something (template locations within apps, for example) its clearly explained why in the documentation and it always occurs predictably.\nMost of your questions would be answered by implementing a single page.\n\nA request is made from the client for a particular url.\nThe url resolves what view to call based on the url pattern match.\nThe request is passed through the middleware.\nThe view is called and explicitly passed the request object.\nThe view explicitly calls the template you specify and passes it the context (variables) you specify.\nTemplate context processors, if there are any, then add their own variables to the context.\nThe context is passed to the template and it is rendered.\nThe rendered template is returned to the client.\n\nDjango Documentation\nDjango Book\n"
] |
[
13,
12,
10,
6,
1,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001286176_django_python.txt
|
Q:
OCR Playing Cards
I decided to do a project for fun where I want to take as input the image of a playing card and return its rank and suit. I figure that I only need look at the upper-left corner, since that has all the information. It should be robust - if I have a large image of an Ace of Diamonds, I should be able to scale it anywhere from 20 to 200% and still get the right answer.
First question - is there anything already written that does this? If so I'll find something else to OCR so I don't duplicate the efforts.
Second - what's the best way to go about doing this? Neural network? Something hand-coded? Can anyone give any pointers? (0xCAAF9452 is not an acceptable answer).
A:
I don't think there's something already written for what you are trying to accomplish (at least open source and in Python).
As for your second question, it depends on what you are trying to recognize. If the inputs can come from different sources -- e.g., different brands of playing cards with distinctive styles --, then you should probably use a machine learning-based algorithm (such as neural network or support vector machine [SVM]), in order to let it learn how to recognize unknown inputs. However, if the input is always the same in shape or style, then a simple image comparison algorithm will suffice (e.g., compare the pixels of the sliced upper-left corner with the pixels of each rank).
If you do decide to use a machine learning-based algorithm, I also think you don't need very complex features, as the suits and ranks don't really vary that much in shape or style, and you should be fine with using just the pixels of the upper left corner as features.
There's a toy OCR example here that you may find interesting. The lib that is used (LibSVM) also has a Python version, which I have used, and found very simple to work with.
Hope it helps.
A:
It's not as robust, but you can look at the colours of 3 or 4 locations on the card so that if they are white or if they are a color, you can determine which card and suit it is. Obviously this won't work if you don't always have the same cards.
A:
Personally I would go the machine learning route with this one.
A:
Given the limited sample size (4 suits, 13 different values) I'd just try to match a reference image of the suit and value with a new input image. First find the bounding box of the incoming suit / value (the smallest box enclosing all non-white pixels), scale your reference pictures to match the size of that bounding box, and find the best "match" through pixel-wise absolute difference. The colour of the picture (i.e. red or black) will make this even easier.
|
OCR Playing Cards
|
I decided to do a project for fun where I want to take as input the image of a playing card and return its rank and suit. I figure that I only need look at the upper-left corner, since that has all the information. It should be robust - if I have a large image of an Ace of Diamonds, I should be able to scale it anywhere from 20 to 200% and still get the right answer.
First question - is there anything already written that does this? If so I'll find something else to OCR so I don't duplicate the efforts.
Second - what's the best way to go about doing this? Neural network? Something hand-coded? Can anyone give any pointers? (0xCAAF9452 is not an acceptable answer).
|
[
"I don't think there's something already written for what you are trying to accomplish (at least open source and in Python).\nAs for your second question, it depends on what you are trying to recognize. If the inputs can come from different sources -- e.g., different brands of playing cards with distinctive styles --, then you should probably use a machine learning-based algorithm (such as neural network or support vector machine [SVM]), in order to let it learn how to recognize unknown inputs. However, if the input is always the same in shape or style, then a simple image comparison algorithm will suffice (e.g., compare the pixels of the sliced upper-left corner with the pixels of each rank).\nIf you do decide to use a machine learning-based algorithm, I also think you don't need very complex features, as the suits and ranks don't really vary that much in shape or style, and you should be fine with using just the pixels of the upper left corner as features.\nThere's a toy OCR example here that you may find interesting. The lib that is used (LibSVM) also has a Python version, which I have used, and found very simple to work with.\nHope it helps.\n",
"It's not as robust, but you can look at the colours of 3 or 4 locations on the card so that if they are white or if they are a color, you can determine which card and suit it is. Obviously this won't work if you don't always have the same cards.\n",
"Personally I would go the machine learning route with this one.\n",
"Given the limited sample size (4 suits, 13 different values) I'd just try to match a reference image of the suit and value with a new input image. First find the bounding box of the incoming suit / value (the smallest box enclosing all non-white pixels), scale your reference pictures to match the size of that bounding box, and find the best \"match\" through pixel-wise absolute difference. The colour of the picture (i.e. red or black) will make this even easier.\n"
] |
[
3,
1,
1,
1
] |
[] |
[] |
[
"artificial_intelligence",
"computer_vision",
"ocr",
"python"
] |
stackoverflow_0001279768_artificial_intelligence_computer_vision_ocr_python.txt
|
Q:
python scooping and recursion
I am struck in a small recursive code. I have printed output and it prints fine but when I try to put a counter to actually count my answers, it gives me scooping errors.
total = 0
def foo(me, t):
if t<0:
return
if t==0:
total = total+1
return
for i in range(1, me+1):
total = total+1
return foo(i, t-i)
it says local variable referenced before assignment, well, I am trying to refer total in the first line.... Its not about global variables, I have tried to use global as well but in vain.
This is pure scooping issue, any ideas?
A:
As mentioned by others, you need the global statement for total. Also, as noted by Svante, the for loop is unnecessary as coded since i is always 1. So, with an equivalent version of your code:
total = 0
def foo(me, t):
global total
if t < 0:
return
total = total + 1
if t == 0:
return
return foo(1, t-1)
foo(99, 100)
print total
It should be easier to see that foo(99, 100) will indeed be 101 since you're essentially counting down from 100 to 0. I'm not sure why you think otherwise?
A:
You forgot to make sure to set total as a global in your function. You said "I have tried to use global as well but in vain." but when I try it below it doesn't throw any error:
total = 0
def foo(me, t):
global total
if t<0:
return
if t==0:
total = total+1
return
for i in range(1, me+1):
total = total+1
return foo(i, t-i)
A:
I'm not sure you really know what you are trying to do...
(at least if you say that adding the global keyword gives incorrect results, but silences the errors)
you do need the statement "global total" if you are going to to try to reference total (which you are doing)
(what are you expecting when you execute foo(99, 100)?)
maybe your boundary conditions are wrong?
'cause with the arguments (99, 100)
foo will skip the two if statements
loop in the following loop:
for i in range(1, 100):
total += 1
return foo(i, 100-i)
which really is equivalent to
else:
total += 1
return foo(1, 99)
(like Svante was saying)
based on your two if conditions
foo(1,99) will correctly generate total += 100
(99 times it will execute your "else" statement bringing the total to 100, and then finally it will reach t == 0 and execute the last final if where it will push the total to your "incorrect" 101)
you should also use elif for your second case
A:
As a generic advice, recursion should always use return values, and not global variables. Recursion has already its own load of complexity, increasing it with side-effects and not clear interfaces will make it even worse.
You should try something in the lines of this:
def foo(me, t):
if t<0:
return 0
if t==0:
return foo(me, t+1)
return foo(me-1, t)
for i in range(1, me+1):
tot = foo(i, t-i)
return tot
Note: this code is wrong, it will not solve your problem and it will not even work on its own; I put it just to give a kind of idea of how to design a recursion that is easier to manage.
A:
Thanks people, thanks youarf, Liffredo.
I was wrong, I didnt notice that I was returning before adding-up things, loop was only running once, I have fixed the code. It is like this:
def foo(me, t):
if t<0:
return 0
if t==0:
return 1
toreturn = 0
for i in range(1, me+1):
toreturn = toreturn + foo(i, t-i)
return toreturn
This snippet is for this problem at http://projecteuler.net/index.php?section=problems&id=76
|
python scooping and recursion
|
I am struck in a small recursive code. I have printed output and it prints fine but when I try to put a counter to actually count my answers, it gives me scooping errors.
total = 0
def foo(me, t):
if t<0:
return
if t==0:
total = total+1
return
for i in range(1, me+1):
total = total+1
return foo(i, t-i)
it says local variable referenced before assignment, well, I am trying to refer total in the first line.... Its not about global variables, I have tried to use global as well but in vain.
This is pure scooping issue, any ideas?
|
[
"As mentioned by others, you need the global statement for total. Also, as noted by Svante, the for loop is unnecessary as coded since i is always 1. So, with an equivalent version of your code:\ntotal = 0\ndef foo(me, t):\n global total\n if t < 0:\n return\n total = total + 1\n if t == 0:\n return\n return foo(1, t-1)\n\nfoo(99, 100)\nprint total\n\nIt should be easier to see that foo(99, 100) will indeed be 101 since you're essentially counting down from 100 to 0. I'm not sure why you think otherwise?\n",
"You forgot to make sure to set total as a global in your function. You said \"I have tried to use global as well but in vain.\" but when I try it below it doesn't throw any error:\ntotal = 0\ndef foo(me, t):\n global total\n if t<0:\n return\n if t==0:\n total = total+1\n return\n for i in range(1, me+1):\n total = total+1\n return foo(i, t-i)\n\n",
"I'm not sure you really know what you are trying to do...\n(at least if you say that adding the global keyword gives incorrect results, but silences the errors)\nyou do need the statement \"global total\" if you are going to to try to reference total (which you are doing)\n(what are you expecting when you execute foo(99, 100)?)\nmaybe your boundary conditions are wrong?\n'cause with the arguments (99, 100)\n\nfoo will skip the two if statements\nloop in the following loop:\n\n\n for i in range(1, 100):\n total += 1\n return foo(i, 100-i)\n\nwhich really is equivalent to\n\n else:\n total += 1\n return foo(1, 99)\n\n(like Svante was saying)\nbased on your two if conditions\nfoo(1,99) will correctly generate total += 100\n(99 times it will execute your \"else\" statement bringing the total to 100, and then finally it will reach t == 0 and execute the last final if where it will push the total to your \"incorrect\" 101)\nyou should also use elif for your second case\n",
"As a generic advice, recursion should always use return values, and not global variables. Recursion has already its own load of complexity, increasing it with side-effects and not clear interfaces will make it even worse.\nYou should try something in the lines of this:\ndef foo(me, t):\n if t<0:\n return 0\n if t==0:\n return foo(me, t+1)\n return foo(me-1, t)\n for i in range(1, me+1):\n tot = foo(i, t-i)\n return tot\n\nNote: this code is wrong, it will not solve your problem and it will not even work on its own; I put it just to give a kind of idea of how to design a recursion that is easier to manage.\n",
"Thanks people, thanks youarf, Liffredo.\nI was wrong, I didnt notice that I was returning before adding-up things, loop was only running once, I have fixed the code. It is like this:\ndef foo(me, t): \n if t<0:\n return 0\n if t==0:\n return 1\n toreturn = 0\n for i in range(1, me+1):\n toreturn = toreturn + foo(i, t-i)\n return toreturn\n\nThis snippet is for this problem at http://projecteuler.net/index.php?section=problems&id=76\n"
] |
[
2,
1,
1,
1,
0
] |
[] |
[] |
[
"python",
"recursion",
"scope"
] |
stackoverflow_0001286626_python_recursion_scope.txt
|
Q:
Python vs. C# Twitter API libraries
I have experience with both .NET(5yrs) and Python(1yr) and I want to create a simple web project with Twitter as the backbone. I have experience with AppEngine, and have always wanted to try Azure. I'm going to make extensive use of sending and parsing tweets from lots of users at a time, and since I've set a short deadline for this I'd like to take the shortest path possible. So does anyone have any experience with both of these, or have any advice?
A quick look at the twitter API libraries(http://apiwiki.twitter.com/Libraries) gave me this for python:
python-twitter by DeWitt Clinton. This library provides a pure Python interface for the Twitter API.
python-twyt by Andrew Price. BSD licensed Twitter API interface library and command line client.
twitty-twister by Dustin Sallings. A Twisted interface to Twitter.
and this for C#:
Yedda Twitter Library by Yedda. Every Twitter API method has an equivalent .NET method in this wrapper library.
TwitterooCore API by Eric Willis/RareEdge Design Group. Binary .NET library that can be used in any .NET project.
Twitterizer originally by DigitallyBorn, but now open source. Written for .NET 2.0.
tweet# by Daniel Crenna. "100% coverage of the REST and Search APIs".
A:
The best advice is to use whatever language you are most comfortable with.
Myself and a colleague have recently re-written our Twitter web-app's entire back-end with a C# service, and the decision for us came down to which library best suited the purpose. A number of the libraries have varying 'features', some are more complete than others: we decided which to select based purely on trying them out, and seeing which were the best-optimised, and made our job easiest.
I would make a recommendation for a C# library, but the playing field changes so very quickly, and we've changed implementations a couple of times, as Twitter has deprecated various aspects of their API, and some have updated more quickly than others.
A:
I would put my vote in for this twitter library; http://code.google.com/p/python-twitter/
I've used it in 10+ projects that I can think of and its been very good. I've actually been using the dev version in a number of projects too and found it stable and has many more features.
A:
LINQ to Twitter is available too, covers the entire Twitter API, and works with VB, C#, and Delphi Prism.
Joe
A:
You can use both .NET and Python ... IronPython. IronPython will work with Yedda. 1
A:
I am using this python library for one of my project.
It's really easy to use and yet very powerful.
A:
python-twyt by Andrew Price. BSD licensed Twitter API interface library and command line client.
is my python library of choice. it's fairly straightforward.
A:
I have a bit of experience with the Twitter API (I'm Digitallyborn, author of Twitterizer).
I would say go with what is easiest to you. There are a lot of great libraries out there for every language.
|
Python vs. C# Twitter API libraries
|
I have experience with both .NET(5yrs) and Python(1yr) and I want to create a simple web project with Twitter as the backbone. I have experience with AppEngine, and have always wanted to try Azure. I'm going to make extensive use of sending and parsing tweets from lots of users at a time, and since I've set a short deadline for this I'd like to take the shortest path possible. So does anyone have any experience with both of these, or have any advice?
A quick look at the twitter API libraries(http://apiwiki.twitter.com/Libraries) gave me this for python:
python-twitter by DeWitt Clinton. This library provides a pure Python interface for the Twitter API.
python-twyt by Andrew Price. BSD licensed Twitter API interface library and command line client.
twitty-twister by Dustin Sallings. A Twisted interface to Twitter.
and this for C#:
Yedda Twitter Library by Yedda. Every Twitter API method has an equivalent .NET method in this wrapper library.
TwitterooCore API by Eric Willis/RareEdge Design Group. Binary .NET library that can be used in any .NET project.
Twitterizer originally by DigitallyBorn, but now open source. Written for .NET 2.0.
tweet# by Daniel Crenna. "100% coverage of the REST and Search APIs".
|
[
"The best advice is to use whatever language you are most comfortable with.\nMyself and a colleague have recently re-written our Twitter web-app's entire back-end with a C# service, and the decision for us came down to which library best suited the purpose. A number of the libraries have varying 'features', some are more complete than others: we decided which to select based purely on trying them out, and seeing which were the best-optimised, and made our job easiest.\nI would make a recommendation for a C# library, but the playing field changes so very quickly, and we've changed implementations a couple of times, as Twitter has deprecated various aspects of their API, and some have updated more quickly than others.\n",
"I would put my vote in for this twitter library; http://code.google.com/p/python-twitter/\nI've used it in 10+ projects that I can think of and its been very good. I've actually been using the dev version in a number of projects too and found it stable and has many more features.\n",
"LINQ to Twitter is available too, covers the entire Twitter API, and works with VB, C#, and Delphi Prism.\nJoe\n",
"You can use both .NET and Python ... IronPython. IronPython will work with Yedda. 1\n",
"I am using this python library for one of my project.\nIt's really easy to use and yet very powerful.\n",
"python-twyt by Andrew Price. BSD licensed Twitter API interface library and command line client.\nis my python library of choice. it's fairly straightforward. \n",
"I have a bit of experience with the Twitter API (I'm Digitallyborn, author of Twitterizer).\nI would say go with what is easiest to you. There are a lot of great libraries out there for every language.\n"
] |
[
4,
4,
3,
0,
0,
0,
0
] |
[] |
[] |
[
"api",
"c#",
"python",
"twitter"
] |
stackoverflow_0000872054_api_c#_python_twitter.txt
|
Q:
Any efficient way to read datas from large binary file?
I need to handle tens of Gigabytes data in one binary file. Each record in the data file is variable length.
So the file is like:
<len1><data1><len2><data2>..........<lenN><dataN>
The data contains integer, pointer, double value and so on.
I found python can not even handle this situation. There is no problem if I read the whole file in memory. It's fast. But it seems the struct package is not good at performance. It almost stuck on unpack the bytes.
Any help is appreciated.
Thanks.
A:
struct and array, which other answers recommend, are fine for the details of the implementation, and might be all you need if your needs are always to sequentially read all of the file or a prefix of it. Other options include buffer, mmap, even ctypes, depending on many details you don't mention regarding your exact needs. Maybe a little specialized Cython-coded helper can offer all the extra performance you need, if no suitable and accessible library (in C, C++, Fortran, ...) already exists that can be interfaced for the purpose of handling this humongous file as you need to.
But clearly there are peculiar issues here -- how can a data file contain pointers, for example, which are intrinsically a concept related to addressing memory? Are they maybe "offsets" instead, and, if so, how exactly are they based and coded? Are your needs at all more advanced than simply sequential reading (e.g., random access), and if so, can you do a first "indexing" pass to get all the offsets from start of file to start of record into a more usable, compact, handily-formatted auxiliary file? (That binary file of offsets would be a natural for array -- unless the offsets need to be longer than array supports on your machine!). What is the distribution of record lengths and compositions and number of records to make up the "tens of gigabytes"? Etc, etc.
You have a very large scale problem (and no doubt very large scale hardware to support it, since you mention that you can easily read all of the file into memory that means a 64bit box with many tens of GB of RAM -- wow!), so it's well worth the detailed care to optimize the handling thereof -- but we can't help much with such detailed care unless we know enough detail to do so!-).
A:
have a look at array module, specifically at array.fromfile method. This bit:
Each record in the data file is variable length.
is rather unfortunate. but you could handle it with a try-except clause.
A:
For a similar task, I defined a class like this:
class foo(Structure):
_fields_ = [("myint", c_uint32)]
created an instance
bar = foo()
and did,
block = file.read(sizeof(bar))
memmove(addressof(bar), block, sizeof(bar))
In the event of variable-size records, you can use a similar method for retrieving lenN, and then read the corresponding data entries. Seems trivial to implement. However, I have no idea of how fast this method is compared to using pack() and unpack(), perhaps someone else has profiled both methods.
A:
For help with parsing the file without reading it into memory you can use the bitstring module.
Internally this is using the struct module and a bytearray, but an immutable Bits object can be initialised with a filename so it won't read it all into memory.
For example:
from bitstring import Bits
s = Bits(filename='your_file')
while s.bytepos != s.length:
# Read a byte and interpret as an unsigned integer
length = s.read('uint:8')
# Read 'length' bytes and convert to a Python string
data = s.read(length*8).bytes
# Now do whatever you want with the data
Of course you can parse the data however you want.
You can also use slice notation to read the file contents, although note that the indices will be in bits rather than bytes so for example s[-800:] would be the final 100 bytes.
A:
What if you use dump the data file into sqlite3 in memory.
import sqlite3
sqlite3.Connection(":memory:")
You can then use sql to process the data.
Besides, you might want to look at generators (or here) and iterators (or here and here).
A:
PyTables is a very good library to handle HDF5, a binary format used in astronomy and meteorology to handle very big datasets:
PyTables
It works more or less like an hierarchical database, where you can store multiple tables, inside columns. Have a look at it.
|
Any efficient way to read datas from large binary file?
|
I need to handle tens of Gigabytes data in one binary file. Each record in the data file is variable length.
So the file is like:
<len1><data1><len2><data2>..........<lenN><dataN>
The data contains integer, pointer, double value and so on.
I found python can not even handle this situation. There is no problem if I read the whole file in memory. It's fast. But it seems the struct package is not good at performance. It almost stuck on unpack the bytes.
Any help is appreciated.
Thanks.
|
[
"struct and array, which other answers recommend, are fine for the details of the implementation, and might be all you need if your needs are always to sequentially read all of the file or a prefix of it. Other options include buffer, mmap, even ctypes, depending on many details you don't mention regarding your exact needs. Maybe a little specialized Cython-coded helper can offer all the extra performance you need, if no suitable and accessible library (in C, C++, Fortran, ...) already exists that can be interfaced for the purpose of handling this humongous file as you need to.\nBut clearly there are peculiar issues here -- how can a data file contain pointers, for example, which are intrinsically a concept related to addressing memory? Are they maybe \"offsets\" instead, and, if so, how exactly are they based and coded? Are your needs at all more advanced than simply sequential reading (e.g., random access), and if so, can you do a first \"indexing\" pass to get all the offsets from start of file to start of record into a more usable, compact, handily-formatted auxiliary file? (That binary file of offsets would be a natural for array -- unless the offsets need to be longer than array supports on your machine!). What is the distribution of record lengths and compositions and number of records to make up the \"tens of gigabytes\"? Etc, etc.\nYou have a very large scale problem (and no doubt very large scale hardware to support it, since you mention that you can easily read all of the file into memory that means a 64bit box with many tens of GB of RAM -- wow!), so it's well worth the detailed care to optimize the handling thereof -- but we can't help much with such detailed care unless we know enough detail to do so!-).\n",
"have a look at array module, specifically at array.fromfile method. This bit:\n\nEach record in the data file is variable length.\n\nis rather unfortunate. but you could handle it with a try-except clause.\n",
"For a similar task, I defined a class like this:\nclass foo(Structure):\n _fields_ = [(\"myint\", c_uint32)]\n\ncreated an instance\nbar = foo()\n\nand did,\nblock = file.read(sizeof(bar))\nmemmove(addressof(bar), block, sizeof(bar))\n\nIn the event of variable-size records, you can use a similar method for retrieving lenN, and then read the corresponding data entries. Seems trivial to implement. However, I have no idea of how fast this method is compared to using pack() and unpack(), perhaps someone else has profiled both methods.\n",
"For help with parsing the file without reading it into memory you can use the bitstring module.\nInternally this is using the struct module and a bytearray, but an immutable Bits object can be initialised with a filename so it won't read it all into memory.\nFor example:\nfrom bitstring import Bits\n\ns = Bits(filename='your_file')\nwhile s.bytepos != s.length:\n # Read a byte and interpret as an unsigned integer\n length = s.read('uint:8')\n # Read 'length' bytes and convert to a Python string\n data = s.read(length*8).bytes\n # Now do whatever you want with the data\n\nOf course you can parse the data however you want.\nYou can also use slice notation to read the file contents, although note that the indices will be in bits rather than bytes so for example s[-800:] would be the final 100 bytes.\n",
"What if you use dump the data file into sqlite3 in memory.\nimport sqlite3\nsqlite3.Connection(\":memory:\")\n\nYou can then use sql to process the data.\nBesides, you might want to look at generators (or here) and iterators (or here and here).\n",
"PyTables is a very good library to handle HDF5, a binary format used in astronomy and meteorology to handle very big datasets:\n\nPyTables\n\nIt works more or less like an hierarchical database, where you can store multiple tables, inside columns. Have a look at it.\n"
] |
[
5,
2,
2,
2,
1,
1
] |
[] |
[] |
[
"binary",
"file",
"python"
] |
stackoverflow_0001287747_binary_file_python.txt
|
Q:
Python equivalent of Jstack?
Is there a python equivalent of jstack? I've got a hung process and I really want to see what it's up to because I have yet to reproduce the defect in development.
A:
Python GDB
|
Python equivalent of Jstack?
|
Is there a python equivalent of jstack? I've got a hung process and I really want to see what it's up to because I have yet to reproduce the defect in development.
|
[
"Python GDB\n"
] |
[
5
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001289124_python.txt
|
Q:
Server Logging - in Database or Logfile?
I've been working on a server and I'm starting to implement logging. However, I'm not sure whether I should use the db for logging, or just a plaintext file.
I'm planning on logging some basic information for every request (what type of request, ip address of request, session tracking). For some requests there will be extended information present (details on what type of request was made), and if there are any errors I will log those, too.
On the one hand, putting the logs into the db means I could run queries on the logged data. On the other hand, I'm not sure if this would be putting unnecessary strain on the db. Of course, I could also use both the db and a log file for logging. What are people's thoughts on proper logging?
(If it makes a difference, I'm using mod_python on an Apache server with a MySQL db. So I'd either be using the logging library or just creating some logging tables in the db.)
A:
First, use a logging library like SLF4J/Logback that allows you to make this decision dynamically. Then you can tweak a configuration file and route some or all of your log messages to each of several different destinations.
Be very careful before logging to your application database, you can easily overwhelm it if you're logging a lot of stuff and volume starts to get high. And if your application is running close to full capacity or in a failure mode, the log messages may be inaccessible and you'll be flying blind. Probably the only messages that should go to your application database are high-level application-oriented events (a type of application data).
It's much better to "log to the file system" (which for a large production environment includes logging to a multicast address read by redundant log aggregation servers).
Log files can be read into special analytics databases where you could use eg, Hadoop to do map/reduce analyses of log data.
A:
Mix file.log + db would be the best.
Log into db information that you eventually might need to analyse, for example average number of users per day etc.
And use file.log to store some debug information.
A:
We've always logged data to a separate database.
This lets us query without impacting the application database. It also simplifies things if we realize that we need to disable logging or change the amount of what we log.
But most modern logging libraries support embedding the logging into your application and choosing the destination by configuration - file, database, whatever.
Logger gives you lots of ways to manage your logging, and although the default package doesn't have a database logger, it wouldn't be hard to write such an event handler.
A:
If you decide on a log file format that is parseable, then you can log to a file and then have an external process (perhaps run by cron) that processes your log files and inserts the details into your database. This can be arranged to happen at a time when your application and database load is low.
I always worry about what happens if the database becomes unavailable: would this prevent your application from running, or degrade it in any way? Logging to the filesystem avoids having to deal with that issue, but you'd still need to worry about disks filling up and log file rotation.
A:
Log to the DB only if it generates revenue.
For example, for one site, we logged all advertisements placed in a web site to a database. It generated revenue. No reason to be parsing log files for something that important.
Everything else goes to the file system.
Log to the file system for debugging. It's generally private stuff. Implementation details. Not to be shared.
Apache logs a mountain of stuff to the filesystem. Do not duplicate this.
Access control logs go to the file system. You'll rarely want to look at these in detail.
User activity may have to be summarized into a database. This is marketing and usability information that you'll want to study to improve your site. However, detailed activity information is too voluminous to record in the database. Put it on the file system and digest it to a marketing/product improvement/usability analysis database.
A:
Just in case you consider to tweak the standard Python logger to log to a database, this recipe might give you a head start: Logging to a Jabber account.
A:
I would primarily use filesystem logging, just as most other answers recommend. With Python's logging package, you can easily create a database handler, by adapting the suggestion made here. You can also create a custom Filter instance and attach it to your database handler - this will allow you to determine at run-time exactly which events you actually log to the database. In line with other answers, I would say it's only really worth logging some types of event to the database for later analysis.
I would concur with the recommendation to log to a separate database (on a separate server) if your main application is high-throughput.
A:
The type of logging depends upon what you're going to do with the data and how you are going to do it. Logging to db is advantageous if you are going to build a reporting system based upon this log db. Else you can log things in a specific format which you can parse later if you want to utilize the data for some analysis. For example, from the file log you can parse only the required information and generate CSVs as and when required. If you're planning to use a db logger, as already suggested, have it separately from your application db.
Secondly, you can consider having the logger independent of your main application. Either spawn a thread which does the logging, or run a logger at specific port/socket and pass on the log messages to it, or collect all logging messages together and flush it off into the log at the end of each cycle.
A:
We do both.
We log operational information/progress/etc. to the logfile. Standard logfile stuff.
In the database, we log statuses of operations. E.g. each item that's processed, so we can do queries on throughput/elapsed time/etc. This data is particularly useful when trending and detecting anomalies (system is "too quiet" etc.) that are potentially indicative of other issues.
A:
Indeed it seems important that you can later switch between DB/File logging. Database logging seems to be much slower than plain text file logging which may become important with high log traffic.
I've made a library (which can act standalone or as a handler) when I had the same requirement. It logs into database and/or files, and allows to archive critical messages (and the archive may, for example, be a database while everything goes into text files.)
It may save you from coding another one from scratch ...
See: The rrlog library
A:
It looks like many of you are logging some of the events to a database. I am doing the same, but its adding a bit of delay. Do any of you log to database through a message queue? If so, what do you use for queuing and what is your logging architecture like? I am using Java/J2EE.
|
Server Logging - in Database or Logfile?
|
I've been working on a server and I'm starting to implement logging. However, I'm not sure whether I should use the db for logging, or just a plaintext file.
I'm planning on logging some basic information for every request (what type of request, ip address of request, session tracking). For some requests there will be extended information present (details on what type of request was made), and if there are any errors I will log those, too.
On the one hand, putting the logs into the db means I could run queries on the logged data. On the other hand, I'm not sure if this would be putting unnecessary strain on the db. Of course, I could also use both the db and a log file for logging. What are people's thoughts on proper logging?
(If it makes a difference, I'm using mod_python on an Apache server with a MySQL db. So I'd either be using the logging library or just creating some logging tables in the db.)
|
[
"First, use a logging library like SLF4J/Logback that allows you to make this decision dynamically. Then you can tweak a configuration file and route some or all of your log messages to each of several different destinations.\nBe very careful before logging to your application database, you can easily overwhelm it if you're logging a lot of stuff and volume starts to get high. And if your application is running close to full capacity or in a failure mode, the log messages may be inaccessible and you'll be flying blind. Probably the only messages that should go to your application database are high-level application-oriented events (a type of application data).\nIt's much better to \"log to the file system\" (which for a large production environment includes logging to a multicast address read by redundant log aggregation servers). \nLog files can be read into special analytics databases where you could use eg, Hadoop to do map/reduce analyses of log data.\n",
"Mix file.log + db would be the best. \nLog into db information that you eventually might need to analyse, for example average number of users per day etc.\nAnd use file.log to store some debug information.\n",
"We've always logged data to a separate database.\nThis lets us query without impacting the application database. It also simplifies things if we realize that we need to disable logging or change the amount of what we log.\nBut most modern logging libraries support embedding the logging into your application and choosing the destination by configuration - file, database, whatever.\nLogger gives you lots of ways to manage your logging, and although the default package doesn't have a database logger, it wouldn't be hard to write such an event handler.\n",
"If you decide on a log file format that is parseable, then you can log to a file and then have an external process (perhaps run by cron) that processes your log files and inserts the details into your database. This can be arranged to happen at a time when your application and database load is low.\nI always worry about what happens if the database becomes unavailable: would this prevent your application from running, or degrade it in any way? Logging to the filesystem avoids having to deal with that issue, but you'd still need to worry about disks filling up and log file rotation.\n",
"Log to the DB only if it generates revenue.\nFor example, for one site, we logged all advertisements placed in a web site to a database. It generated revenue. No reason to be parsing log files for something that important.\nEverything else goes to the file system.\nLog to the file system for debugging. It's generally private stuff. Implementation details. Not to be shared.\nApache logs a mountain of stuff to the filesystem. Do not duplicate this. \nAccess control logs go to the file system. You'll rarely want to look at these in detail.\nUser activity may have to be summarized into a database. This is marketing and usability information that you'll want to study to improve your site. However, detailed activity information is too voluminous to record in the database. Put it on the file system and digest it to a marketing/product improvement/usability analysis database.\n",
"Just in case you consider to tweak the standard Python logger to log to a database, this recipe might give you a head start: Logging to a Jabber account.\n",
"I would primarily use filesystem logging, just as most other answers recommend. With Python's logging package, you can easily create a database handler, by adapting the suggestion made here. You can also create a custom Filter instance and attach it to your database handler - this will allow you to determine at run-time exactly which events you actually log to the database. In line with other answers, I would say it's only really worth logging some types of event to the database for later analysis.\nI would concur with the recommendation to log to a separate database (on a separate server) if your main application is high-throughput.\n",
"The type of logging depends upon what you're going to do with the data and how you are going to do it. Logging to db is advantageous if you are going to build a reporting system based upon this log db. Else you can log things in a specific format which you can parse later if you want to utilize the data for some analysis. For example, from the file log you can parse only the required information and generate CSVs as and when required. If you're planning to use a db logger, as already suggested, have it separately from your application db.\nSecondly, you can consider having the logger independent of your main application. Either spawn a thread which does the logging, or run a logger at specific port/socket and pass on the log messages to it, or collect all logging messages together and flush it off into the log at the end of each cycle.\n",
"We do both.\nWe log operational information/progress/etc. to the logfile. Standard logfile stuff.\nIn the database, we log statuses of operations. E.g. each item that's processed, so we can do queries on throughput/elapsed time/etc. This data is particularly useful when trending and detecting anomalies (system is \"too quiet\" etc.) that are potentially indicative of other issues.\n",
"Indeed it seems important that you can later switch between DB/File logging. Database logging seems to be much slower than plain text file logging which may become important with high log traffic.\nI've made a library (which can act standalone or as a handler) when I had the same requirement. It logs into database and/or files, and allows to archive critical messages (and the archive may, for example, be a database while everything goes into text files.)\nIt may save you from coding another one from scratch ...\nSee: The rrlog library\n",
"It looks like many of you are logging some of the events to a database. I am doing the same, but its adding a bit of delay. Do any of you log to database through a message queue? If so, what do you use for queuing and what is your logging architecture like? I am using Java/J2EE.\n"
] |
[
10,
2,
1,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"logging",
"python"
] |
stackoverflow_0001055917_logging_python.txt
|
Q:
how fast is python's slice
In order to save space and the complexity of having to maintain the consistency of data between different sources, I'm considering storing start/end indices for some substrings instead of storing the substrings themselves. The trick is that if I do so, it's possible I'll be creating slices ALL the time. Is this something to be avoided? Is the slice operator fast enough I don't need to worry? How about the new object creation/destruction overhead?
Okay, I learned my lesson. Don't optimize unless there's a real problem you're trying to fix. (Of course this doesn't mean to right needlessly bad code, but that's beside the point...) Also, test and profile before coming to stack overflow. =D Thanks everyone!
A:
Fast enough as opposed to what? How do you do it right now? What exactly are you storing, what exactly are you retrieving? The answer probably highly depends on this. Which brings us to ...
Measure! Don't discuss and analyze theoretically; try and measure what is the more performant way. Then decide whether the possible performance gain justifies refactoring your database.
Edit: I just ran a test measuring string slicing versus lookup in a dict keyed on (start, end) tuples. It suggests that there's not much of a difference. It's a pretty naive test, though, so take it with a pinch of salt.
A:
In a comment the OP mentions bloat "in the database" -- but no information regarding what database he's talking about; from the scant information in that comment it would seem that Python string slices aren't necessarily what's involved, rather, the "slicing" would be done by the DB engine upon retrieval.
If that's the actual situation then I would recommend on general principles against storing redundant information in the DB -- a "normal form" (maybe in a lax sense of the expression;-) whereby information is stored just once and derived information is recomputed (or cached charge of the DB engine, etc;-) should be the norm, and "denormalization" by deliberately storing derived information very much the exception and only when justified by specific, well measured retrieval-performance needs.
If the reference to "database" was a misdirection;-), or rather used in a lax sense as I did for "normal form" above;-), then another consideration may apply: since Python strings are immutable, it would seem to be natural to not have to do slices by copying, but rather have each slice reuse part of the memory space of the parent it's being sliced from (much as is done for numpy arrays' slices). However that's not currently part of the Python core. I did once try a patch to that purpose, but the problem of adding a reference to the big string and thus making it stay in memory just because a tiny substring thereof is still referenced loomed large for general-purpose adaptation. Still it would be possible to make a special purpose subclass of string (and one of unicode) for the case in which the big "parent" string needs to stay in memory anyway. Currently buffer does a tiny bit of that, but you can't call string methods on a buffer object (without explicitly copying it to a string object first), so it's only really useful for output and a few special cases... but there's no real conceptual block against adding string method (I doubt that would be adopted in the core, but it should be decently easy to maintain as a third party module anyway;-).
The worth of such an approach can hardly be solidly proven by measurement, one way or another -- speed would be very similar to the current implicitly-copying approach; the advantage would come entirely in terms of reducing memory footprint, which wouldn't so much make any given Python code faster, but rather allow a certain program to execute on a machine with a bit less RAM, or multi-task better when several instances are being used at the same time in separate processes. See rope for a similar but richer approach once experimented with in the context of C++ (but note it didn't make it into the standard;-).
A:
I haven't done any measurements either, but since it sounds like you're already taking a C approach to a problem in Python, you might want to take a look at Python's built-in mmap library:
Memory-mapped file objects behave like both strings and like file objects. Unlike normal string objects, however, these are mutable. You can use mmap objects in most places where strings are expected; for example, you can use the re module to search through a memory-mapped file. Since they’re mutable, you can change a single character by doing obj[index] = 'a', or change a substring by assigning to a slice: obj[i1:i2] = '...'. You can also read and write data starting at the current file position, and seek() through the file to different positions.
I'm not sure from your question if that's exactly what you're looking for. And it bears repeating that you need to take some measurements. Python's timeit library is the easy one to use, but there's also cProfile or hotshot, although hotshot is at risk of being removed from the standard library as I understand it.
A:
Would slices be ineffective because they create copies of the source string? This may or may not be an issue. If it turns out to be an issue, would it not be possible to simply implement a "String view"; an object that has a reference to the source string and has a start and end point.. Upon access/iteration, it just reads from the source string.
|
how fast is python's slice
|
In order to save space and the complexity of having to maintain the consistency of data between different sources, I'm considering storing start/end indices for some substrings instead of storing the substrings themselves. The trick is that if I do so, it's possible I'll be creating slices ALL the time. Is this something to be avoided? Is the slice operator fast enough I don't need to worry? How about the new object creation/destruction overhead?
Okay, I learned my lesson. Don't optimize unless there's a real problem you're trying to fix. (Of course this doesn't mean to right needlessly bad code, but that's beside the point...) Also, test and profile before coming to stack overflow. =D Thanks everyone!
|
[
"\nFast enough as opposed to what? How do you do it right now? What exactly are you storing, what exactly are you retrieving? The answer probably highly depends on this. Which brings us to ...\nMeasure! Don't discuss and analyze theoretically; try and measure what is the more performant way. Then decide whether the possible performance gain justifies refactoring your database.\n\nEdit: I just ran a test measuring string slicing versus lookup in a dict keyed on (start, end) tuples. It suggests that there's not much of a difference. It's a pretty naive test, though, so take it with a pinch of salt.\n",
"In a comment the OP mentions bloat \"in the database\" -- but no information regarding what database he's talking about; from the scant information in that comment it would seem that Python string slices aren't necessarily what's involved, rather, the \"slicing\" would be done by the DB engine upon retrieval.\nIf that's the actual situation then I would recommend on general principles against storing redundant information in the DB -- a \"normal form\" (maybe in a lax sense of the expression;-) whereby information is stored just once and derived information is recomputed (or cached charge of the DB engine, etc;-) should be the norm, and \"denormalization\" by deliberately storing derived information very much the exception and only when justified by specific, well measured retrieval-performance needs.\nIf the reference to \"database\" was a misdirection;-), or rather used in a lax sense as I did for \"normal form\" above;-), then another consideration may apply: since Python strings are immutable, it would seem to be natural to not have to do slices by copying, but rather have each slice reuse part of the memory space of the parent it's being sliced from (much as is done for numpy arrays' slices). However that's not currently part of the Python core. I did once try a patch to that purpose, but the problem of adding a reference to the big string and thus making it stay in memory just because a tiny substring thereof is still referenced loomed large for general-purpose adaptation. Still it would be possible to make a special purpose subclass of string (and one of unicode) for the case in which the big \"parent\" string needs to stay in memory anyway. Currently buffer does a tiny bit of that, but you can't call string methods on a buffer object (without explicitly copying it to a string object first), so it's only really useful for output and a few special cases... but there's no real conceptual block against adding string method (I doubt that would be adopted in the core, but it should be decently easy to maintain as a third party module anyway;-).\nThe worth of such an approach can hardly be solidly proven by measurement, one way or another -- speed would be very similar to the current implicitly-copying approach; the advantage would come entirely in terms of reducing memory footprint, which wouldn't so much make any given Python code faster, but rather allow a certain program to execute on a machine with a bit less RAM, or multi-task better when several instances are being used at the same time in separate processes. See rope for a similar but richer approach once experimented with in the context of C++ (but note it didn't make it into the standard;-).\n",
"I haven't done any measurements either, but since it sounds like you're already taking a C approach to a problem in Python, you might want to take a look at Python's built-in mmap library:\n\nMemory-mapped file objects behave like both strings and like file objects. Unlike normal string objects, however, these are mutable. You can use mmap objects in most places where strings are expected; for example, you can use the re module to search through a memory-mapped file. Since they’re mutable, you can change a single character by doing obj[index] = 'a', or change a substring by assigning to a slice: obj[i1:i2] = '...'. You can also read and write data starting at the current file position, and seek() through the file to different positions.\n\nI'm not sure from your question if that's exactly what you're looking for. And it bears repeating that you need to take some measurements. Python's timeit library is the easy one to use, but there's also cProfile or hotshot, although hotshot is at risk of being removed from the standard library as I understand it.\n",
"Would slices be ineffective because they create copies of the source string? This may or may not be an issue. If it turns out to be an issue, would it not be possible to simply implement a \"String view\"; an object that has a reference to the source string and has a start and end point.. Upon access/iteration, it just reads from the source string.\n"
] |
[
9,
3,
1,
1
] |
[
"premature optimization is the rool of all evil.\nProve to yourself that you really have a need to optimize code, then act.\n"
] |
[
-2
] |
[
"optimization",
"python"
] |
stackoverflow_0001286757_optimization_python.txt
|
Q:
How do I split different applications across multiple tcp ports on one site?
I have a series of applications which use one model and are all under one site. Essentially a mix of the main website, and public and private api's. Is there a way to make different DJango apps use a different tcp port? I have not been able to find anything in the documentation about it.
A:
Django docs. Optionally, use Apache to setup a subdomain for each application so you don't have to remember all the ports.
|
How do I split different applications across multiple tcp ports on one site?
|
I have a series of applications which use one model and are all under one site. Essentially a mix of the main website, and public and private api's. Is there a way to make different DJango apps use a different tcp port? I have not been able to find anything in the documentation about it.
|
[
"Django docs. Optionally, use Apache to setup a subdomain for each application so you don't have to remember all the ports.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001289953_django_python.txt
|
Q:
Paypal NVP API with Django
I am looking into using the paypal NVP API to allow users to pay on my website for a recurring subscription.
I have a few questions about the requirements. Will my site have to meet the "PCI Compliance" stuff. I guess I will have to get an SSL certificate and is there anything else that is required or that I need to know about?
A:
There is nothing forcing you to meet PCI Compliance and use SSL, but you should anyway to limit your liability and inspire a little customer trust.
I thought I read something on the Satchmo Developer's Google group about a person implementing PayPal NVP and having a patch.
A:
I know this question is a bit out of date, but I wanted to add a note that I've recently released an open source Python API to the PayPal NVP interface.
|
Paypal NVP API with Django
|
I am looking into using the paypal NVP API to allow users to pay on my website for a recurring subscription.
I have a few questions about the requirements. Will my site have to meet the "PCI Compliance" stuff. I guess I will have to get an SSL certificate and is there anything else that is required or that I need to know about?
|
[
"There is nothing forcing you to meet PCI Compliance and use SSL, but you should anyway to limit your liability and inspire a little customer trust. \nI thought I read something on the Satchmo Developer's Google group about a person implementing PayPal NVP and having a patch.\n",
"I know this question is a bit out of date, but I wanted to add a note that I've recently released an open source Python API to the PayPal NVP interface.\n"
] |
[
0,
0
] |
[] |
[] |
[
"django",
"paypal",
"python"
] |
stackoverflow_0000717911_django_paypal_python.txt
|
Q:
converting django ForeignKey to a usable directory name
I'm working on a django app where the user will be able to upload documents of various kinds. The relevant part of my models.py is this:
class Materials(models.Model):
id = models.AutoField(primary_key=True)
id_presentations = models.ForeignKey(Presentations, db_column='id_Presentations', related_name = "materials_id_presentations") # Field name made lowercase.
materialpathname = 'documents/'
materialpathname += str(id_presentations)
document = models.FileField(db_column='Document', upload_to = materialpathname) # Field name made lowercase.
docname = models.CharField(max_length=40, db_column='DocName') # Field name made lowercase.
class Meta:
db_table = u'Materials'
My intention is for it to save the documents associated with a given presentation, in a subdirectory with the id number for that presentation (so if "Very Important Presentation" is on the database with id 3, it should store the associated materials at the location settings.MEDIA_ROOT/documents/3/whateverdocname.txt ).
However, while the above code "works", it creates a subdirectory that, instead of being named "3", is named <django.db.models.fields.related.ForeignKey object at 0x8e358ec>, or that kind of thing. I've tried using "id_presentations.name", "id_presentations.value", etc. but these attributes don't seem to exist. I can't seem to find a place where it gives a way to get at the integer value of the ForeignKey field, so that I can convert it to a string and use it as a subdirectory name.
Any help is greatly appreciated.
A:
As of Django 1.0, the upload_to argument to FileFields can be a callable. If I'm understanding your intentions correctly, something like this should do the trick:
def material_path(instance, filename):
return 'documents/%d' % instance.id_presentations.id
class Materials(models.Model):
id_presentations = models.ForeignKey(Presentations)
document = models.FileField(upload_to=material_path)
docname = models.CharField(max_length=40)
That model has been simplified a little bit, but hopefully it illustrates the solution. If upload_to is a callable, then every time a file is uploaded Django will call the function, passing it two arguments: the instance to which the file was uploaded is attached and its original filename. You can generate the file path you want by pulling the ID of the presentation off of the instance in question.
More info:
http://docs.djangoproject.com/en/dev/ref/models/fields/#filefield
A:
Provided that the "name" property is defined on your Presentation model, if you're working with a specific instance of the model then what you want should work. Like this:
from models import Materials
obj = Materials.objects.get([some criteria here])
name = obj.id_presentation.name
If you wanted to abstract that to a method on your model, you could do this:
class Materials(models.Model):
def id_presentation_name(self):
return self.id_presentation.name
If you want the database id of the object, you can access either object.id or object.pk.
A:
Note: you can always find out what attributes and methods are available on an object in python by calling dir(object).
|
converting django ForeignKey to a usable directory name
|
I'm working on a django app where the user will be able to upload documents of various kinds. The relevant part of my models.py is this:
class Materials(models.Model):
id = models.AutoField(primary_key=True)
id_presentations = models.ForeignKey(Presentations, db_column='id_Presentations', related_name = "materials_id_presentations") # Field name made lowercase.
materialpathname = 'documents/'
materialpathname += str(id_presentations)
document = models.FileField(db_column='Document', upload_to = materialpathname) # Field name made lowercase.
docname = models.CharField(max_length=40, db_column='DocName') # Field name made lowercase.
class Meta:
db_table = u'Materials'
My intention is for it to save the documents associated with a given presentation, in a subdirectory with the id number for that presentation (so if "Very Important Presentation" is on the database with id 3, it should store the associated materials at the location settings.MEDIA_ROOT/documents/3/whateverdocname.txt ).
However, while the above code "works", it creates a subdirectory that, instead of being named "3", is named <django.db.models.fields.related.ForeignKey object at 0x8e358ec>, or that kind of thing. I've tried using "id_presentations.name", "id_presentations.value", etc. but these attributes don't seem to exist. I can't seem to find a place where it gives a way to get at the integer value of the ForeignKey field, so that I can convert it to a string and use it as a subdirectory name.
Any help is greatly appreciated.
|
[
"As of Django 1.0, the upload_to argument to FileFields can be a callable. If I'm understanding your intentions correctly, something like this should do the trick:\ndef material_path(instance, filename):\n return 'documents/%d' % instance.id_presentations.id\n\nclass Materials(models.Model):\n id_presentations = models.ForeignKey(Presentations)\n document = models.FileField(upload_to=material_path)\n docname = models.CharField(max_length=40)\n\nThat model has been simplified a little bit, but hopefully it illustrates the solution. If upload_to is a callable, then every time a file is uploaded Django will call the function, passing it two arguments: the instance to which the file was uploaded is attached and its original filename. You can generate the file path you want by pulling the ID of the presentation off of the instance in question.\nMore info:\nhttp://docs.djangoproject.com/en/dev/ref/models/fields/#filefield\n",
"Provided that the \"name\" property is defined on your Presentation model, if you're working with a specific instance of the model then what you want should work. Like this:\nfrom models import Materials\nobj = Materials.objects.get([some criteria here])\nname = obj.id_presentation.name\n\nIf you wanted to abstract that to a method on your model, you could do this:\nclass Materials(models.Model):\n def id_presentation_name(self):\n return self.id_presentation.name\n\nIf you want the database id of the object, you can access either object.id or object.pk.\n",
"Note: you can always find out what attributes and methods are available on an object in python by calling dir(object).\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001290202_django_django_models_python.txt
|
Q:
Python Win32 - equivalent function to DriveInfo.IsReady
I'm trying to find an equivalent Python function to the Windows function DriveInfo.IsReady. I've spent a while searching through the functions provided by win32api and win32file but I can't find anything (though perhaps that's because I didn't manage to find much useful documentation online, so was simply searching through the listing of functions).
Any help would be gratefully received.
A:
I've used GetVolumeInformation in the past to determine this. For example, something like:
def is_drive_ready(drive_name):
try:
win32api.GetVolumeInformation(drive_name)
return True
except:
return False
print 'ready:', is_drive_ready('c:\\') # true
print 'ready:', is_drive_ready('d:\\') # false (on my system)
You'll need the win32api module.
|
Python Win32 - equivalent function to DriveInfo.IsReady
|
I'm trying to find an equivalent Python function to the Windows function DriveInfo.IsReady. I've spent a while searching through the functions provided by win32api and win32file but I can't find anything (though perhaps that's because I didn't manage to find much useful documentation online, so was simply searching through the listing of functions).
Any help would be gratefully received.
|
[
"I've used GetVolumeInformation in the past to determine this. For example, something like:\ndef is_drive_ready(drive_name):\n try:\n win32api.GetVolumeInformation(drive_name)\n return True\n except:\n return False\n\nprint 'ready:', is_drive_ready('c:\\\\') # true\nprint 'ready:', is_drive_ready('d:\\\\') # false (on my system)\n\nYou'll need the win32api module.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"winapi"
] |
stackoverflow_0001290515_python_winapi.txt
|
Q:
list of duplicate dictionaries copy single entry to another list
newbie question again.
Let's say i have a list of nested dictionaries.
a = [{"value1": 1234, "value2": 23423423421, "value3": norway, "value4": charlie},
{"value1": 1398, "value2": 23423412221, "value3": england, "value4": alpha},
{"value1": 1234, "value2": 23234231221, "value3": norway, "value4": charlie},
{"value1": 1398, "value2": 23423213121, "value3": england, "value4": alpha}]
What i want is to move a singularis entry of each duplicate where value1, value3 and value4 matches. The result should be looking like this:
b = [{"value1": 1398, "value2": 23423412221, "value3": england, "value4": alpha},
{"value1": 1234, "value2": 23234231221, "value3": norway, "value4": charlie}]
The orginal list, a, should remain in it's orginal state.
A:
There was a similar question on this recently. Try this entry.
In fact, you asked that question: "Let's say there exists multiple entries where value3 and value4 are identical to other nested dictionaries. How can i quick and easy find and remove those duplicate dictionaries."
It sounds like the same thing, right?
Edit: liberally stealing Alex's code, it looks something like this:
import itertools
import pprint
import operator
alpha, charlie, norway, england = range(4)
a = [{"value1": 1234, "value2": 23423423421, "value3": norway, "value4": charlie},
{"value1": 1398, "value2": 23423412221, "value3": england, "value4": alpha},
{"value1": 1234, "value2": 23234231221, "value3": norway, "value4": charlie},
{"value1": 1398, "value2": 23423213121, "value3": england, "value4": alpha}]
getvals = operator.itemgetter('value1', 'value3', 'value4')
a.sort(key=getvals)
b = [g.next() for _, g in itertools.groupby(a, getvals)]
pprint.pprint(b)
And the result is:
[{'value1': 1234, 'value2': 23423423421L, 'value3': 2, 'value4': 1},
{'value1': 1398, 'value2': 23423412221L, 'value3': 3, 'value4': 0}]
|
list of duplicate dictionaries copy single entry to another list
|
newbie question again.
Let's say i have a list of nested dictionaries.
a = [{"value1": 1234, "value2": 23423423421, "value3": norway, "value4": charlie},
{"value1": 1398, "value2": 23423412221, "value3": england, "value4": alpha},
{"value1": 1234, "value2": 23234231221, "value3": norway, "value4": charlie},
{"value1": 1398, "value2": 23423213121, "value3": england, "value4": alpha}]
What i want is to move a singularis entry of each duplicate where value1, value3 and value4 matches. The result should be looking like this:
b = [{"value1": 1398, "value2": 23423412221, "value3": england, "value4": alpha},
{"value1": 1234, "value2": 23234231221, "value3": norway, "value4": charlie}]
The orginal list, a, should remain in it's orginal state.
|
[
"There was a similar question on this recently. Try this entry.\nIn fact, you asked that question: \"Let's say there exists multiple entries where value3 and value4 are identical to other nested dictionaries. How can i quick and easy find and remove those duplicate dictionaries.\"\nIt sounds like the same thing, right?\nEdit: liberally stealing Alex's code, it looks something like this:\nimport itertools\nimport pprint\nimport operator\n\nalpha, charlie, norway, england = range(4)\n\na = [{\"value1\": 1234, \"value2\": 23423423421, \"value3\": norway, \"value4\": charlie},\n {\"value1\": 1398, \"value2\": 23423412221, \"value3\": england, \"value4\": alpha}, \n {\"value1\": 1234, \"value2\": 23234231221, \"value3\": norway, \"value4\": charlie}, \n {\"value1\": 1398, \"value2\": 23423213121, \"value3\": england, \"value4\": alpha}]\n\n\ngetvals = operator.itemgetter('value1', 'value3', 'value4')\n\na.sort(key=getvals)\n\nb = [g.next() for _, g in itertools.groupby(a, getvals)]\npprint.pprint(b)\n\nAnd the result is:\n[{'value1': 1234, 'value2': 23423423421L, 'value3': 2, 'value4': 1},\n {'value1': 1398, 'value2': 23423412221L, 'value3': 3, 'value4': 0}]\n\n"
] |
[
2
] |
[] |
[] |
[
"dictionary",
"list",
"python"
] |
stackoverflow_0001290717_dictionary_list_python.txt
|
Q:
How can I capture the error output from the ipython shell?
I'm writing an ipython macro that processes the output of a program. The thing is, the program can sometimes write to stderr , so if I do something like this :
out = !my_program
the out variable will not contain the output. I think it will contain the exit code ( correct me if I'm wrong ).
How can I capture both stdout and stderr streams?
A:
foo 2>&1 means redirect all of the output, including handle 2 (that is, STDERR), from the foo command to handle 1 (that is, STDOUT)
so here out = !foo 2>&1 maybe good enough. below is the demo:
egg.py:
#!/usr/bin/env python
# -*- coding: utf8 -*-
def main():
print 'hello'
print 3/0
if __name__ == "__main__":
main()
IPython 0.10
In [5]: out = !egg.py
Traceback (most recent call last):
File "D:\python\note\egg.py", line 7, in <module>
main()
File "D:\python\note\egg.py", line 5, in main
print 3/0
ZeroDivisionError: integer division or modulo by zero
In [6]: out
Out[6]: SList (.p, .n, .l, .s, .grep(), .fields(), sort() available):
0: hello
In [7]: out = !egg.py 2>&1
In [8]: out
Out[8]: SList (.p, .n, .l, .s, .grep(), .fields(), sort() available):
0: hello
1: Traceback (most recent call last):
2: File "D:\python\note\egg.py", line 7, in <module>
3: main()
4: File "D:\python\note\egg.py", line 5, in main
5: print 3/0
6: ZeroDivisionError: integer division or modulo by zero
Hope this helps
|
How can I capture the error output from the ipython shell?
|
I'm writing an ipython macro that processes the output of a program. The thing is, the program can sometimes write to stderr , so if I do something like this :
out = !my_program
the out variable will not contain the output. I think it will contain the exit code ( correct me if I'm wrong ).
How can I capture both stdout and stderr streams?
|
[
"foo 2>&1 means redirect all of the output, including handle 2 (that is, STDERR), from the foo command to handle 1 (that is, STDOUT)\nso here out = !foo 2>&1 maybe good enough. below is the demo:\negg.py: \n#!/usr/bin/env python\n# -*- coding: utf8 -*-\ndef main():\n print 'hello'\n print 3/0\nif __name__ == \"__main__\":\n main()\n\nIPython 0.10 \nIn [5]: out = !egg.py\nTraceback (most recent call last):\n File \"D:\\python\\note\\egg.py\", line 7, in <module>\n main()\n File \"D:\\python\\note\\egg.py\", line 5, in main\n print 3/0\nZeroDivisionError: integer division or modulo by zero\n\nIn [6]: out\nOut[6]: SList (.p, .n, .l, .s, .grep(), .fields(), sort() available):\n0: hello\n\nIn [7]: out = !egg.py 2>&1\n\nIn [8]: out\nOut[8]: SList (.p, .n, .l, .s, .grep(), .fields(), sort() available):\n0: hello\n1: Traceback (most recent call last):\n2: File \"D:\\python\\note\\egg.py\", line 7, in <module>\n3: main()\n4: File \"D:\\python\\note\\egg.py\", line 5, in main\n5: print 3/0\n6: ZeroDivisionError: integer division or modulo by zero\n\nHope this helps\n"
] |
[
4
] |
[] |
[] |
[
"ipython",
"python"
] |
stackoverflow_0001289971_ipython_python.txt
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.