Q:
Creating modelformset from a modelform
I have a model MyModel whose PK, locid, is an AutoField.
I want to construct a model formset from this, with two caveats:

- The queryset for the formset should be a custom one (say, order_by('field')) rather than all().
- Since locid is an AutoField and thus hidden by default, I want to be able to show it to the user.

I'm not sure how to do this. I've tried multiple approaches:
MyModelFormSet = modelformset_factory(MyModel, fields=('locid', 'name', 'dupof'))
The above gives me the 3 fields, but locid is hidden.
class MyModelForm(ModelForm):
    def __init__(self, *args, **kwargs):
        super(MyModelForm, self).__init__(*args, **kwargs)
        self.fields['locid'].widget.attrs["type"] = 'visible'

    locid = forms.IntegerField(min_value=1, required=True)

    class Meta:
        model = MyModel
        fields = ('locid', 'name', 'dupof')
The above gives me a ManyToMany error.
Has anyone done something like this before?
Edit 2
I can now use a custom query when I instantiate the formset, but I still need to show the locid field to the user, because the id is important for the application's use. How would I do this? Is there a way to override the default behavior of hiding a PK if it's an AutoField?
A:
It makes no sense to show an autofield to the user, as it's an autoincremented primary key -- the user cannot change it, and it will not be available before saving the record to the database (where the DBMS selects the next available id).
This is how you set a custom queryset for a formset:
from django.forms.models import BaseModelFormSet

class OrderedFormSet(BaseModelFormSet):
    def __init__(self, *args, **kwargs):
        self.queryset = MyModel.objects.order_by("field")
        super(OrderedFormSet, self).__init__(*args, **kwargs)
and then you use that formset in the factory function:
MyModelFormSet = modelformset_factory(MyModel, formset=OrderedFormSet)
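For completeness, a minimal usage sketch of the resulting formset in a view; the view function, template name, and saving behaviour here are hypothetical, in the style of Django 1.0:

from django.shortcuts import render_to_response

def edit_locations(request):
    if request.method == 'POST':
        formset = MyModelFormSet(request.POST)
        if formset.is_valid():
            formset.save()
    else:
        formset = MyModelFormSet()  # picks up OrderedFormSet's custom queryset
    return render_to_response('edit_locations.html', {'formset': formset})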
A:
I ended up using a template side variable to do this, as I mentioned here:
How to show hidden autofield in django formset
A:
If you like cheap workarounds, why not mangle the locid into the __unicode__ method? The user is guaranteed to see it, and no special knowledge of django-admin is required.
But, to be fair, all my answers to django-admin related questions tend along the lines of "don't strain too hard to make django-admin into an all-purpose CRUD interface".
Q:
mod_wsgi 2.5 on Ubuntu 9.04 with Python 2.6.2 installation
Has anybody succeeded with mod_wsgi 2.5 on Ubuntu 9.04 with default Python installation (2.6.2)?
I got compilation errors:
mod_wsgi.c:119:2: error: #error Sorry, mod_wsgi requires at least Python 2.3.0.
mod_wsgi.c:123:2: error: #error Sorry, mod_wsgi requires that Python supporting thread.
which python gives /usr/bin/python, and /usr/bin/python -V returns Python 2.6.2, so I'm not sure what triggers the first error; honestly, I don't know how to check the options used to compile the default Python on Ubuntu.
There are a lot of other errors, but those two look the most relevant.
What else could possibly be wrong?
A:
From your errors I see that you're having to compile python extensions. If you haven't already, I suggest you install the python-dev package because it's usually required for compiling python extensions and it's not part of the default installation.
Installing the package is as easy as running:
sudo apt-get install python-dev
from a command line.
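A quick way to confirm the headers landed where mod_wsgi's build will look for them; a hedged sketch that asks Python itself:

import os.path
from distutils import sysconfig

# Print the include directory and whether Python.h is present in it
include_dir = sysconfig.get_python_inc()
print include_dir, os.path.exists(os.path.join(include_dir, 'Python.h'))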
A:
Perhaps the user that the server is running as does not have /usr/bin on its path, and there is another version of python somewhere else on the path that is < 2.3
Try:
which -a python
to find all of the pythons on your path. Perhaps one of these is what the server is running.
Q:
Is it possible to install python 3 and 2.6 on same PC?
How would I do this? The reason being that I wanted to try some pygame out, but I have Python 3 installed currently and have been learning with that. I'm also interested in trying out wxPython or something like that, but I haven't looked at their compatibilities yet.
EDIT: I'm on Windows Vista 64-bit.
A:
If you are on Windows, then just install another version of Python using the installer. It would be installed into another directory.
Then if you install other packages using the installer, it would ask you for which python installation to apply. If you use installation from source or easy_install, then just make sure that when you install, you are using the one of the proper version.
If you have many packages installed in your current python-3, then just make a zip backup of your current installation just in case.
A:
Erm... yes. I just installed Python 3.0 on this computer to test it. You haven't specified your operating system, but I'm running Ubuntu 9.04 and I can explicitly specify the version of Python I want to run by typing python2.5 myscript.py or python3.0 myscript.py, depending on my needs.
A:
Typically python is installed with a name like python2.6, so you can have more than one. There may be a symlink from python to one of the numbered files. Quite workable.
A:
Yes, it is possible.
I maintain 3 Python installations (2.5, 2.6, 3.0). The only issue that could be confusing is figuring out which Python version takes precedence in the PATH variable (if any). To execute a script with a specific version, you would go into the Python directory for that version
C:\Python25\ , C:\Python26\, C:\Python30\, etc.
Drop the file in there, and run "python.exe file.py" from command-line.
You could even rename each python.exe to python25.exe python26.exe python30.exe and have each directory in PATH so it would be easy to execute any script on any version.
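Once several versions share the PATH, a two-line check inside any script tells you which interpreter actually ran it (works unchanged on 2.x and 3.0):

import sys
print(sys.version)     # version string of the running interpreter
print(sys.executable)  # full path to the python binary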
A:
I would assume it'd be the same as running two versions of 2.x; as long as they're each in their own directory you should be OK.
A:
You certainly can. On MacPorts, there's a tool called python_select that lets you switch among Python versions; if nothing like it exists on Windows (momentary googling didn't reveal one), it could certainly be written.
A:
You can set up virtual python environments using virtualenv.
Q:
Python3.0: tokenize & BytesIO
When attempting to tokenize a string in python3.0, why do I get a leading 'utf-8' before the tokens start?
From the python3 docs, tokenize should now be used as follows:
g = tokenize(BytesIO(s.encode('utf-8')).readline)
However, when attempting this at the terminal, the following happens:
>>> from tokenize import tokenize
>>> from io import BytesIO
>>> g = tokenize(BytesIO('foo'.encode()).readline)
>>> next(g)
(57, 'utf-8', (0, 0), (0, 0), '')
>>> next(g)
(1, 'foo', (1, 0), (1, 3), 'foo')
>>> next(g)
(0, '', (2, 0), (2, 0), '')
>>> next(g)
What's with the utf-8 token that precedes the others? Is this supposed to happen? If so, then should I just always skip the first token?
[edit]
I have found that token type 57 is tokenize.ENCODING, which can easily be filtered out of the token stream if need be.
A:
That's the coding cookie of the source. You can specify one explicitly:
# -*- coding: utf-8 -*-
do_it()
Otherwise Python assumes the default encoding, utf-8 in Python 3.
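Tying this back to the question's edit, a hedged sketch that simply drops the ENCODING token before further processing:

from tokenize import tokenize, ENCODING
from io import BytesIO

g = tokenize(BytesIO('foo'.encode()).readline)
tokens = [tok for tok in g if tok[0] != ENCODING]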
Q:
How do I pass lots of variables to and from a function in Python?
I do scientific programming, and often want to show users prompts and variable pairs, let them edit the variables, and then do the calculations with the new variables. I do this so often that I wrote a wxPython class to move this code out of the main program. You set up a list for each variable with the type of the variable (string, float, int), the prompt, and the variable's current value. You then place all of these lists in one big list, and my utility creates a neatly formatted wxPython panel with prompts and the current values, which can be edited.
When I started, I only had a few variables, so I would write out each variable.
s='this is a string'; i=1; f=3.14
my_list = [ ['s','your string here',s], ['i','your int here',i], ['f','your float here',f], ]
input_panel = Input(my_list)
# the rest of the window is created, the input_panel is added to the window, the user is
# allowed to make choices, and control returns when the user hits the calculate button
s,i,f = input_panel.results() # the .results() function returns the values in a list
Now I want to use this routine for a lot of variables (10-30), and this approach is breaking down. I can create the input list to the function over multiple lines using the list.append() statements. When the code returns from the function, though, I get this huge list that needs to be unpacked into the right variables. This is difficult to manage, and it looks like it will be easy to get the input list and output list out of sync. And worse than that, it looks kludgy.
What is the best way to pass lots of variables to a function in Python with extra information so that they can be edited, and then get the variables back so that I can use them in the rest of the program?
If I could pass the variables by reference into the function, then users could change them or not, and I would use the values once the program returned from the function. I would only need to build the input list over multiple lines, and there wouldn't be any possibility of the input list getting out of sync with the output list. But Python doesn't allow this.
Should I break the big lists into smaller lists that then get combined into big lists for passing into and out of the functions? Or does this just add more places to make errors?
A:
The simplest thing to do would be to create a class. Instead of dealing with a list of variables, the class will have attributes. Then you just use a single instance of the class.
A:
There are two decent options that come to mind.
The first is to use a dictionary to gather all the variables in one place:
d = {}
d['var1'] = [1,2,3]
d['var2'] = 'asdf'
foo(d)
The second is to use a class to bundle all the arguments. This could be something as simple as:
class Foo(object):
    pass

f = Foo()
f.var1 = [1,2,3]
f.var2 = 'asdf'
foo(f)
In this case I would prefer the class over the dictionary, simply because you could eventually provide a definition for the class to make its use clearer or to provide methods that handle some of the packing and unpacking work.
A:
To me, the ideal solution is to use a class like this:
>>> class Vars(object):
...     def __init__(self, **argd):
...         self.__dict__.update(argd)
...
>>> x = Vars(x=1, y=2)
>>> x.x
1
>>> x.y
2
You can also build a dictionary and pass it like this:
>>> some_dict = {'x' : 1, 'y' : 2}
>>> #the two stars below mean to pass the dict as keyword arguments
>>> x = Vars(**some_dict)
>>> x.x
1
>>> x.y
2
You may then get data or alter it as need be when passing it to a function:
>>> def foo(some_vars):
...     some_vars.z = 3  # note that we're creating the member z
...
>>> foo(x)
>>> x.z
3
A:
If I could pass the variables by reference into the function, then users could change them or not, and I would use the values once the program returned from the function.
You can obtain much the same effect as "pass by reference" by passing a dict (or for syntactic convenience a Bunch, see http://code.activestate.com/recipes/52308/).
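The Bunch recipe referenced there is only a few lines; a hedged paraphrase:

class Bunch(object):
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

b = Bunch(s='this is a string', i=1, f=3.14)
b.i += 1   # mutations made inside a callee are visible to the caller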
A:
If you have a finite set of these cases, you could write specific wrapper functions for each one. Each wrapper would do the work of building and unpacking the lists that are passed to the internal function.
A:
I would recommend:

- using a dictionary or a class to accumulate all details about your variables:
  - value
  - prompt text
- a list to store the order in which you want them to be displayed
- then good old iteration to prepare input and collect output

This way you will only be modifying a small, manageable section of the code time and again.
Of course, you should encapsulate all this into a class if you're comfortable working with classes.
"""Store all variables
"""
vars = {}
"""Store the order of display
"""
order = []
"""Define a function that will store details and order of the variable definitions
"""
def makeVar(parent, order, name, value, prompt):
parent[name] = dict(zip(('value', 'prompt'), (value, prompt)))
order.append(name)
"""Create your variable definitions in order
"""
makeVar(vars, order, 's', 'this is a string', 'your string here')
makeVar(vars, order, 'i', 1, 'your int here')
makeVar(vars, order, 'f', 3.14, 'your float here')
"""Use a list comprehension to prepare your input
"""
my_list = [[name, vars[name]['prompt'], vars[name]['value']] for name in order]
input_panel = Input(my_list)
out_list = input_panel.results();
"""Collect your output
"""
for i in range(0, len(order)):
vars[order[i]]['value'] = out_list[i];
Q:
Decoding html encoded strings in python
I have the following string...
"Scam, hoax, or the real deal, he’s gonna work his way to the bottom of the sordid tale, and hopefully end up with an arcade game in the process."
I need to turn it into this string...
Scam, hoax, or the real deal,
he’s gonna work his way to the
bottom of the sordid tale, and
hopefully end up with an arcade game
in the process.
This is pretty standard HTML encoding and I can't for the life of me figure out how to convert it in python.
I found this:
GitHub
And it's very close to working; however, it does not output an apostrophe but instead some odd unicode character.
Here is an example of the output from the GitHub script...
Scam, hoax, or the real deal, heâs
gonna work his way to the bottom of
the sordid tale, and hopefully end up
with an arcade game in the process.
A:
What you're trying to do is called "HTML entity decoding" and it's covered in a number of past Stack Overflow questions, for example:
How to unescape apostrophes and such in Python?
Decoding HTML Entities With Python
Here's a code snippet using the Beautiful Soup HTML parsing library to decode your example:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from BeautifulSoup import BeautifulSoup
string = "Scam, hoax, or the real deal, he’s gonna work his way to the bottom of the sordid tale, and hopefully end up with an arcade game in the process."
s = BeautifulSoup(string,convertEntities=BeautifulSoup.HTML_ENTITIES).contents[0]
print s
Here's the output:
Scam, hoax, or the real deal, he’s
gonna work his way to the bottom of
the sordid tale, and hopefully end up
with an arcade game in the process.
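If you would rather avoid the dependency, the standard library's htmlentitydefs table is enough for a small decoder. A hedged sketch (Python 2; handles decimal numeric references and named entities only):

import re, htmlentitydefs

def decode_entities(text):
    def substitute(match):
        ent = match.group(1)
        if ent.startswith('#'):
            return unichr(int(ent[1:]))          # decimal numeric reference
        if ent in htmlentitydefs.name2codepoint:
            return unichr(htmlentitydefs.name2codepoint[ent])
        return match.group(0)                    # leave unknown references alone
    return re.sub(r'&(#?\w+);', substitute, text)

print decode_entities('he&#8217;s')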
Q:
On my local Windows machine, how do I write a script to download a comic strip every day and email it to myself?
On my local Windows machine, how do I write a script to download a comic strip every day and email it to myself?
such as
http://comics.com/peanuts/
Update: I know how to download the image as a file. The hard part is how to email it from my local Windows machine.
A:
This depends how precise you want to be. Downloading the entire web page wouldn't be too challenging - using wget, as Earwicker mentions above.
If you want the actual image file of the comic downloaded, you would need a bit more in your arsenal. In Python - because that's what I know best - I would imagine you'd need to use urllib to access the page, and then a regular expression to identify the correct part of the page. Therefore you will need to know the exact layout of the page and the absolute URL of the image.
For XKCD, for example, the following works:
#!/usr/bin/env python

import re, urllib

root_url = 'http://xkcd.com/'
img_url = r'http://imgs.xkcd.com/comics/'

dl_dir = '/path/to/download/directory/'

# Open the page URL and identify the comic image URL
# (search, not match, since the URL appears mid-page)
page = urllib.urlopen(root_url).read()
comic = re.search(r'%s[\w]+?\.(png|jpg)' % img_url, page).group(0)

# Generate the filename
fname = re.sub(img_url, '', comic)

# Download the image to the specified download directory
try:
    image = urllib.urlretrieve(comic, '%s%s' % (dl_dir, fname))
except urllib.ContentTooShortError:
    print 'Download interrupted.'
else:
    print 'Download successful.'
You can then email it however you feel comfortable.
A:
A quick look on google reveals two command-line programs that you should be able to lash together in a batch file or using the scripting language of your choice.
http://www.gnu.org/software/wget/ - to do the download
http://www.beyondlogic.org/solutions/cmdlinemail/cmdlinemail.htm - to send the email
You can use the Windows Task Scheduler in control panel to make it run daily.
If you are using Python there are surely going to be convenient libraries to do the downloading/emailing parts - browse the official Python site.
A:
Configure feedburner on the RSS feed, subscribe yourself to the email alerts?
A:
Here is perhaps the shortest distance to your goal.
It's not simple... you will need to work out how to parse out the image, and the peanuts example seems to be an unpredictable URI, so it might be more difficult than it looks to get the image itself. Your best bet will be to read the HTML of the remote webpage, write a regex to parse out the image url. Then the mail function will work fine, send an HTML email by setting the headers in the mail() function to something like:
$headers = "MIME-Version: 1.0\r\n";
$headers .= "Content-type: text/html;";
$headers .= " charset=iso-8859-1\r\n";
With the image tags in the mail. This will let you receive emails with all your comic strips placed one after another. Your email software will do the HTTP requests to download the images for you, so you can avoid having to attach the images directly.
A:
Emailing it is easy. Pick a library in your favorite language and read the documentation. Send it through your regular email account, or create a new free GMail account for it.
http://docs.python.org/library/email
http://am.rubyonrails.org/
http://us.php.net/manual/en/function.mail.php
Sometimes attachments can indeed be tricky, though. If nothing else, give it a good whirl with whatever library you like most, and post another specific question about any problems you encounter.
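For the emailing half in Python specifically, a hedged standard-library sketch (the SMTP server, addresses, and file name are placeholders):

import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage

msg = MIMEMultipart()
msg['Subject'] = 'Daily comic'
msg['From'] = 'me@example.com'
msg['To'] = 'me@example.com'
msg.attach(MIMEImage(open('comic.png', 'rb').read()))

server = smtplib.SMTP('smtp.example.com')
server.sendmail(msg['From'], [msg['To']], msg.as_string())
server.quit()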
A:
It's pretty simple if you already know how to download the file. Once it's downloaded, create a cron job that emails it to yourself.
Using something like phpmailer would be the easiest way to email it
http://phpmailer.codeworxtech.com/index.php?pg=examplebmail
Q:
Using poll on file-like object returned by urllib2.urlopen()?
I've run into the bug described at http://bugs.python.org/issue1327971 while trying to poll a file-like object returned by urllib2.urlopen().
Unfortunately, being relatively new to Python, I can't actually determine from the responses how to get around the issue as they seem mostly geared towards fixing the bug, rather than hacking the code that triggers it to work.
Here is a distilled version of my code that throws the error:
import urllib2, select

if __name__ == "__main__":
    p = select.poll()
    url = "http://localhost/"
    fd = urllib2.urlopen(url)
    p.register(fd, select.POLLIN | select.POLLERR | select.POLLHUP | select.POLLNVAL)
    result = p.poll()
    for fd, event in result:
        if event == select.POLLIN:
            while 1:
                buf = fd.read(4096)
                if not buf:
                    break
                print buf
And the error which is raised when I run it on python 2.6:
Traceback (most recent call last):
  File "/home/shab/py/test.py", line 9, in <module>
    p.register(fd, select.POLLIN | select.POLLERR | select.POLLHUP | select.POLLNVAL)
  File "/usr/lib/python2.6/socket.py", line 287, in fileno
    return self._sock.fileno()
AttributeError: HTTPResponse instance has no attribute 'fileno'
Update: I do not want to modify the system libraries.
A:
If you don't want to modify your system libraries, you can also patch httplib on the fly to match the patch in the bug report:
import httplib

@property
def http_fileno(self):
    return self.fp.fileno

@http_fileno.setter
def http_fileno(self, value):
    self.fp.fileno = value

httplib.HTTPResponse.fileno = http_fileno

# and now on with the previous code
# ...
You then get an error on fd.read(4096) because the fd returned by poll is a raw file descriptor value, not a file-like object. You probably need to use the original file object to read the data, not the value returned by poll.
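To act on that last point, a hedged sketch that keeps a map from descriptor numbers back to the original file-like objects (assuming fileno() now returns an int, as the patch intends):

# Remember which object owns which descriptor before polling
fd_map = {fd.fileno(): fd}
for desc, event in p.poll():
    if event & select.POLLIN:
        buf = fd_map[desc].read(4096)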
A:
It looks like you want to modify urllib with this patch. Keep in mind, there's a reason this code hasn't been released. It hasn't been completely reviewed.
EDIT: Actually, I think you want to modify httplib with the other patch.
Q:
Python script embedded in Windows Registry
We all know that Windows has a feature where you can right-click on a file and a menu of options is shown, and you can add entries to this menu. I followed this guide: jfitz.com/tips/rclick_custom.html
Basically I have a script that runs when I right click on a certain file type.
Alright, so everything was going flawlessly; however, when I finished and tried to test it out, Windows told me that it only accepts .exe's.
Is there a way around this without having to use py2exe or something similar?
EDIT: OK, so if my script takes the first argument as its parameter, how would I put this into the registry?
"C:\Python26\pythonw.exe C:\Users\daved\Documents\Python\backup.py "%1""
?
A:
Yes: call pythonw.exe and pass the script path as a parameter:
"C:\Python26\pythonw.exe" "C:\Users\daved\Documents\Python\backup.py" "%1"
It's also recommended (but not required) to use the extension .pyw when your script doesn't run in a console.
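If you want to script the registry change itself, a hedged sketch using the standard _winreg module (the ProgID MyFileType and the menu name Backup are examples only):

import _winreg

command = r'"C:\Python26\pythonw.exe" "C:\Users\daved\Documents\Python\backup.py" "%1"'

# SetValue creates intermediate keys as needed and sets the key's default value
_winreg.SetValue(_winreg.HKEY_CLASSES_ROOT,
                 r'MyFileType\shell\Backup\command',
                 _winreg.REG_SZ, command)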
Q:
Class attributes with a "calculated" name
When defining class attributes through "calculated" names, as in:
class C(object):
    for name in (....):
        exec("%s = ..." % (name,...))
is there a different way of handling the numerous attribute definitions than by using an exec? getattr(C, name) does not work because C is not yet defined during class construction...
A:
How about:
class C(object):
    blah blah

for name in (...):
    setattr(C, name, "....")
That is, do the attribute setting after the definition.
A:
class C (object):
    pass

c = C()
c.__dict__['foo'] = 42
c.foo  # returns 42
A:
If your entire class is "calculated", then may I suggest the type callable. This is especially useful if your original container was a dict:
d = dict(('member-%d' % k, k*100) for k in range(10))
C = type('C', (), d)
This would give you the same results as
class C(object):
    member-0 = 0
    member-1 = 100
    ...
If your needs are really complex, consider metaclasses. (In fact, type is a metaclass =)
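Note that names like member-0 are not valid Python identifiers, which is exactly why the type route works where a literal class statement would not; you read such attributes back with getattr:

C = type('C', (), {'member-0': 0})
print getattr(C, 'member-0')   # prints 0; dotted access would parse as subtraction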
A:
What about using metaclasses for this purpose?
Check out Question 100003: What is a metaclass in Python?
Q:
Python metaclasses
I've been hacking classes in Python like this:
def hack(f, aClass):
    class MyClass(aClass):
        def f(self):
            f()
    return MyClass

A = hack(afunc, A)
This looks pretty clean to me: it takes a class A, creates a new class derived from it that has an extra method calling f, and then reassigns the new class to A.
How does this differ from metaclass hacking in Python? What are the advantages of using a metaclass over this?
A:
The definition of a class in Python is an instance of type (or an instance of a subclass of type). In other words, the class definition itself is an object. With metaclasses, you have the ability to control the type instance that becomes the class definition.
When a metaclass is invoked, you have the ability to completely re-write the class definition. You have access to all the proposed attributes of the class, its ancestors, etc. More than just injecting a method or removing a method, you can radically alter the inheritance tree, the type, and pretty much any other aspect. You can also chain metaclasses together for a very dynamic and totally convoluted experience.
I suppose the real benefit, though, is that the class's type remains the class's type. In your example, typing:
a_inst = A()
type(a_inst)
will show that it is an instance of MyClass. Yes, isinstance(a_inst, aClass) would return True, but you've introduced a subclass, rather than a dynamically re-defined class. The distinction there is probably the key.
As rjh points out, the anonymous inner class also has performance and extensibility implications. A metaclass is processed only once, at the moment the class is defined, and never again. Users of your API can also extend your metaclass because it is not enclosed within a function, so you gain a certain degree of extensibility.
This slightly old article actually has a good explanation that compares exactly the "function decoration" approach you used in the example with metaclasses, and shows the history of the Python metaclass evolution in that context: http://www.ibm.com/developerworks/linux/library/l-pymeta.html
A:
A metaclass is the class of a class. IMO, the bloke here covered it quite serviceably, including some use-cases. See Stack Overflow question "MetaClass", "new", "cls" and "super" - what is the mechanism exactly?.
A:
You can use the type callable as well.
def hack(f, aClass):
    newfunc = lambda self: f()
    return type('MyClass', (aClass,), {'f': newfunc})
I find using type the easiest way to get into the metaclass world.
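For contrast, a minimal sketch of the same method injection done with an actual metaclass (Python 2 syntax; afunc as in the question):

def make_meta(f):
    class Meta(type):
        def __new__(mcs, name, bases, attrs):
            attrs['f'] = lambda self: f()   # inject the extra method
            return type.__new__(mcs, name, bases, attrs)
    return Meta

class A(object):
    __metaclass__ = make_meta(afunc)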
Q:
How to save the indentation format of a file in Python
I am saving all the words from a file like so:
sentence = " "
fileName = sys.argv[1]
fileIn = open(sys.argv[1],"r")
for line in open(sys.argv[1]):
for word in line.split(" "):
sentence += word
Everything works okay when outputting it except the formatting.
I am moving source code; is there any way I can preserve the indentation?
A:
Since you state that you want to move source code files, why not just copy/move them?
import shutil
shutil.move(src, dest)
If you read source file,
fh = open("yourfilename", "r")
content = fh.read()
should load your file as it is (with indentation), or not?
A:
When you invoke line.split(), you remove all leading spaces.
What's wrong with just reading the file into a single string?
textWithIndentation = open(sys.argv[1], "r").read()
A:
Split removes all spaces:
>>> a=" a b c"
>>> a.split(" ")
['', '', '', 'a', 'b', '', '', 'c']
As you can see, the resulting array doesn't contain any spaces anymore. But you can see these strange empty strings (''). They denote that there has been a space. To revert the effect of split, use join(" "):
>>> l = a.split(" ")
>>> " ".join(l)
'   a b   c'
or in your code:
sentence += " " + word
Or you can use a regular expression to get all spaces at the start of the line:
>>> import re
>>> re.match(r'^\s*', "   a b   c").group(0)
'   '
Q:
best practice for user preferences in $HOME in Python
For some small programs in Python, I would like to set, store and retrieve user preferences in a file in a portable (multi-platform) way.
I am thinking about a very simple ConfigParser file like "~/.program" or "~/.program/program.cfg".
Is os.path.expanduser() the best way for achieving this or is there something more easy/straightforward?
A:
os.path.expanduser("~")
is more portable than
os.environ['HOME']
so it should be ok to use the first.
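Putting it together with the ConfigParser idea from the question, a hedged sketch (the section, option, and file names are just examples):

import os
import ConfigParser

cfg_dir = os.path.join(os.path.expanduser("~"), ".program")
cfg_path = os.path.join(cfg_dir, "program.cfg")

config = ConfigParser.ConfigParser()
config.read(cfg_path)                 # missing files are silently ignored

if not config.has_section("ui"):
    config.add_section("ui")
config.set("ui", "theme", "dark")

if not os.path.isdir(cfg_dir):
    os.makedirs(cfg_dir)
fh = open(cfg_path, "w")
config.write(fh)
fh.close()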
A:
You can use os.environ:
import os
print os.environ["HOME"]
Q:
Does PyS60 have reliable garbage collection?
I have heard many times that garbage collection in PyS60 is not up to the mark. This imposes a lot of limits on writing cleaner code. Can I at least rely on non-cyclic references being cleaned up after a function exits?
A:
PyS60 as of version 1.9.0 uses Python 2.5.1 core and has no problems with garbage collection.
A:
Mostly you can, but occasionally PyS60 needs a little "help". Unbind keys, always cancel timers, might have to manually delete some classes etc. Nothing too bad.
Btw current 1.9.x branch uses python core 2.5.4. In my opinion 1.9.5 is the first version, which might be better that 1.4.5. Worth taking a look, especially if you want to play with 5800 XpressMusic :)
Q:
Does the stack limit of Symbian also apply to PyS60?
Symbian has a stack limit of 8kB. Does this also apply to the function calling in PyS60 apps?
A:
Yes, PyS60 is based on CPython, thus uses the C stack.
A:
Increasing the Symbian stack size is done through a parameter in the mmp file.
This is valid when you create a native application that the toolchain will turn into an exe file.
If you were to upgrade the Python runtime on your phone, with a version you built yourself, you could increase the stack size of the runtime process itself.
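If I recall the toolchain correctly, the relevant mmp keywords are EPOCSTACKSIZE and (for the heap) EPOCHEAPSIZE; the values below are only an example, not recommendations:
TARGET        myapp.exe
TARGETTYPE    exe
EPOCSTACKSIZE 0x5000             // 20 kB stack instead of the 8 kB default
EPOCHEAPSIZE  0x20000 0x400000   // 128 kB minimum, 4 MB maximum heap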
A:
There is a difference between the Python runtime and Python apps. Also, from a PyS60 app developer's point of view, it's the heap size that's more interesting...
Version 1.9.5 comes by default with a heap size of 100k min and 4M max. Of course you can define those yourself when creating the SIS package to release and distribute your application.
Sorry if I answered the right question with the wrong answer (stack vs heap).
The stack is usually "enough", but with deep enough recursion you can run out of it. Have done it - and fixed some endless loops :) Never had any real stack problems. Usually it's the heap that runs out, especially with graphics manipulation.
A:
I would assume that PyS60 should be doing the memory management for you, as your program will probably be constrained by the resources of PyS60.
|
Does the stack limit of Symbian also apply to PyS60?
|
Symbian has a stack limit of 8kB. Does this also apply to the function calling in PyS60 apps?
|
[
"Yes, PyS60 is based on CPython, thus uses the C stack.\n",
"Increasing the Symbian stack size is done through a parameter in the mmp file.\nThis is valid when you create a native application that the toolchain will turn into an exe file.\nIf you were to upgrade the Python runtime on your phone, with a version you built yourself, you could increase the stack size of the runtime process itself.\n",
"There is a difference between python runtime and python apps. Also from PyS60 app developer point of view, it's the heapsize that's more interesting...\nVersion 1.9.5 comes by default with heapsize 100k min and 4M max. Of course you can define those by yourself when creating the SIS package to release and distribute your application.\nSorry if I answered right question with wrong answer (stack vs heap).\nStack is usually \"enough\", but with deep enough recursion you can run out of it. Have done it - and fixed some endless loops :) Never had any real stack problems. Usually it's the heap that runs out, esp with graphics manipulation.\n",
"I would assume that PyS60 should be doing the memory management for you, as your program will probably be constrained by the resources of PyS60.\n"
] |
[
3,
1,
1,
0
] |
[] |
[] |
[
"nokia",
"pys60",
"python",
"symbian"
] |
stackoverflow_0000595296_nokia_pys60_python_symbian.txt
|
Q:
About the optional argument in Canvas in PyS60
In Python for Symbian60 blit() is defined as:
blit(image [,target=(0,0), source=((0,0),image.size), mask=None, scale=0 ])
In the optional parameter source, what is the significance of image.size?
A:
My guess is that blit() will automatically use the result of image.size when you don't specify anything else (and thus blitting the whole image from (0,0) to (width,height)).
If you want only a smaller part of the image copied, you can use the source parameter to define a different rectangle to copy.
A:
Think of the (0,0) in source as the top-left corner and image.size as the bottom-right corner: you blit whatever lies between those two points.
The same applies to target, by the way.
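A small hedged example of both forms; the image path and the sizes are made up:
import appuifw
import graphics

canvas = appuifw.Canvas()
appuifw.app.body = canvas

img = graphics.Image.open(u"e:\\images\\photo.jpg")

# default source is ((0,0), img.size), i.e. the whole image is copied:
canvas.blit(img, target=(0, 0))

# explicit source rectangle: only the 50x50 top-left corner is drawn
canvas.blit(img, target=(0, 60), source=((0, 0), (50, 50)))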
|
About the optional argument in Canvas in PyS60
|
In Python for Symbian60 blit() is defined as:
blit(image [,target=(0,0), source=((0,0),image.size), mask=None, scale=0 ])
In the optional parameter source, what is the significance of image.size?
|
[
"My guess is that blit() will automatically use the result of image.size when you don't specify anything else (and thus blitting the whole image from (0,0) to (width,height)).\nIf you want only a smaller part of the image copied, you can use the source parameter to define a different rectangle to copy.\n",
"Think that source=((0,0)) is the top left corner and image.size is the bottom right corner. You blit whatever is between those two points.\nSimilar for target, btw.\n"
] |
[
0,
0
] |
[] |
[] |
[
"nokia",
"pys60",
"python"
] |
stackoverflow_0000585957_nokia_pys60_python.txt
|
Q:
Migrating from python 2.4 to python 2.6
I'm migrating a legacy codebase at work from python 2.4 to python 2.6. This is being done as part of a push to remove the 'legacy' tag and make a maintainable, extensible foundation for active development, so I'm getting a chance to "do things right", including refactoring to use new 2.6 features if that leads to cleaner, more robust code. (I'm already in raptures over the 'with' statement :)). Any good tips for the migration? Best practices, design patterns, etc? I'm mostly a ruby programmer; I've learnt some python 2.4 while working with this code but know nothing about modern python design principles, so feel free to suggest things that you might think are obvious.
A:
Read the Python 3.0 changes. The point of 2.6 is to aim for 3.0.
From 2.4 to 2.6 you gained a lot of things. These are the most important. I'm making this answer community wiki so other folks can edit it.
Generator functions and the yield statement.
More consistent use of various types like list and dict -- they can be extended directly.
from __future__ import with_statement (see the sketch after this list)
from __future__ import print_function
Exceptions are new style classes, and there's more consistent exception handling. String exceptions have been removed. Attempting to use them raises a TypeError
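A quick sketch of the with-statement item above; on 2.5 the __future__ import is required, on 2.6 the statement is built in (the filename is made up):
from __future__ import with_statement

with open("data.txt") as f:
    for line in f:
        print line.rstrip()
# f is closed here even if an exception was raised inside the block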
A:
I guess you have already found them, but reference and for others, here are the lists of new features in those two versions:
http://docs.python.org/whatsnew/2.5.html
http://docs.python.org/whatsnew/2.6.html
Apart from picking features from those documents, I suggest using the opportunity (if needed) to make the code conform to the standard Python code style in PEP 8.
There are some automated tools that can help you getting the Python style right: pep8.py implements the PEP 8 checks and pylint gives a larger report that also includes things like undefined variables, unused imports, etc. pyflakes is a smaller and faster pylint.
|
Migrating from python 2.4 to python 2.6
|
I'm migrating a legacy codebase at work from python 2.4 to python 2.6. This is being done as part of a push to remove the 'legacy' tag and make a maintainable, extensible foundation for active development, so I'm getting a chance to "do things right", including refactoring to use new 2.6 features if that leads to cleaner, more robust code. (I'm already in raptures over the 'with' statement :)). Any good tips for the migration? Best practices, design patterns, etc? I'm mostly a ruby programmer; I've learnt some python 2.4 while working with this code but know nothing about modern python design principles, so feel free to suggest things that you might think are obvious.
|
[
"Read the Python 3.0 changes. The point of 2.6 is to aim for 3.0.\nFrom 2.4 to 2.6 you gained a lot of things. These are the the most important. I'm making this answer community wiki so other folks can edit it.\n\nGenerator functions and the yield statement.\nMore consistent use of various types like list and dict -- they can be extended directly.\nfrom __future__ import with_statement\nfrom __future__ import print_function\nExceptions are new style classes, and there's more consistent exception handling. String exceptions have been removed. Attempting to use them raises a TypeError\n\n",
"I guess you have already found them, but reference and for others, here are the lists of new features in those two versions:\n\nhttp://docs.python.org/whatsnew/2.5.html\nhttp://docs.python.org/whatsnew/2.6.html\n\nApart from picking features from those documents, I suggest using the opportunity (if needed) to make the code conform to the standard Python code style in PEP 8.\nThere are some automated tools that can help you getting the Python style right: pep8.py implements the PEP 8 checks and pylint gives a larger report that also includes things like undefined variables, unused imports, etc. pyflakes is a smaller and faster pylint.\n"
] |
[
5,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000915135_python.txt
|
Q:
Clashing guidelines
While coding in Python it's better to code by following the guidelines of PEP8.
And while coding for Symbian it's better to follow its coding standards.
But when I code for PyS60 which guidelines should I follow? Till now I have been following PEP8, but this code shows the opposite. Do I need to rework my code?
A:
"Do I need to rework my code?"
Does it add value to rework your code?
How many folks will help you develop code who
A) don't know PEP 8
B) only know PyS60 coding standards because that's the only code they've ever seen.
and
C) cannot be taught anything different than the PyS60 coding standards?
List all the people who you'll work with that meet all three criteria. Then decide which is cheaper: rework your code or fire them.
A:
I don't see anything in your code sample that is obviously bogus. It's not the style I'd use, but neither is it hard to read, and it's not so far from PEP8 that I'd call it “the opposite”.
PEP8 shouldn't be seen as hard-and-fast law to which all code must conform, character by rigid character. It is a baseline for readable Python. When you go a little bit Java-programmer and get that antsiness about making the spacing around every operator consistent, go back and read the start of PEP8 again. The bit with the hobgoblin.
Don't get hung up on lengthy ‘reworking’ of code that is functional, readable, and at least in the same general vicinity as PEP8.
A:
Use the style of the API(s) you're interfacing the most. That's a simple rule that works in most places (where you can see the code, i.e. Java/C# is a bit hard(er).. :)
A:
I'd say use PEP8, but as mentioned above, don't get too hung up on it. When coding in Symbian C++ you should use Symbian coding standards, but not necessarily if your program is merely running on the platform. Don't get confused between Symbian the OS and Symbian C++ the (pseudo) language.
A:
Your example code is just that person's personal style. It's NOT following an official PyS60 coding convention; there is no such thing! Write whatever style gives you the best results.
Having said that, I would recommend using PEP8, but only if you plan to use pylint to give you some additional confidence in your project.
I've done nothing but PyS60 stuff, never real Python. I used pylint to speed up development time and to automatically point me to some potential defects before I ran into them in real life.
|
Clashing guidelines
|
While coding in Python it's better to code by following the guidelines of PEP8.
And while coding for Symbian it's better to follow its coding standards.
But when I code for PyS60 which guidelines should I follow? Till now I have been following PEP8, but this code shows the opposite. Do I need to rework my code?
|
[
"\"Do I need to rework my code?\"\nDoes it add value to rework you code?\nHow many folks will help you develop code who \nA) don't know PEP 8\nB) only know PyS60 coding standards because that's the only code they've ever seen.\nand\nC) cannot be taught anything different than the PyS60 coding standards?\nList all the people who you'll work with that meet all three criteria. Then decide which is cheaper: rework your code or fire them.\n",
"I don't see anything in your code sample that is obviously bogus. It's not the style I'd use, but neither is it hard to read, and it's not so far from PEP8 that I'd call it “the opposite”.\nPEP8 shouldn't be seen as hard-and-fast law to which all code must conform, character by rigid character. It is a baseline for readable Python. When you go a little bit Java-programmer and get that antsiness about making the spacing around every operator consistent, go back and read the start of PEP8 again. The bit with the hobgoblin.\nDon't get hung up on lengthy ‘reworking’ of code that is functional, readable, and at least in the same general vicinity as PEP8.\n",
"Use the style of the API(s) you're interfacing the most. That's a simple rule that works in most places (where you can see the code, i.e. Java/C# is a bit hard(er).. :)\n",
"i'd say use PEP8, but as mentioned above, don't get too hung up on it. when coding IN symbian c++ you should use symbian coding standards, but not necessarily if your program is merely running on the platform. don't get confused between symbian the OS and symbian c++ the (psuedo) language.\n",
"Your example code is just that person's personal style. It's NOT following official PyS60 coding convension, there is no such a thing! Write whatever style gives you the best results.\nHaving said that I would recommend using PEP8, but only if you plan to use pylint to give you some additional confidence in your project.\nI've done nothing but PyS60 stuff, never real python. Used pylint to speedup development time and to automatically point me some potential defects before I run into them in real life.\n"
] |
[
2,
2,
1,
0,
0
] |
[] |
[] |
[
"coding_style",
"pep8",
"pys60",
"python",
"symbian"
] |
stackoverflow_0000648299_coding_style_pep8_pys60_python_symbian.txt
|
Q:
How to convert specific character sequences in a string to upper case using Python?
I am looking to accomplish the following and am wondering if anyone has a suggestion as to how best go about it.
I have a string, say 'this-is,-toronto.-and-this-is,-boston', and I would like to convert all occurrences of ',-[a-z]' to ',-[A-Z]'. In this case the result of the conversion would be 'this-is,-Toronto.-and-this-is,-Boston'.
I've been trying to get something working with re.sub(), but as yet haven't figured out how.
testString = 'this-is,-toronto.-and-this-is,-boston'
re.sub(r',_([a-z])', r',_??', testString)
Thanks!
A:
re.sub can take a function which returns the replacement string:
import re
s = 'this-is,-toronto.-and-this-is,-boston'
t = re.sub(',-[a-z]', lambda x: x.group(0).upper(), s)
print t
prints
this-is,-Toronto.-and-this-is,-Boston
|
How to convert specific character sequences in a string to upper case using Python?
|
I am looking to accomplish the following and am wondering if anyone has a suggestion as to how best go about it.
I have a string, say 'this-is,-toronto.-and-this-is,-boston', and I would like to convert all occurrences of ',-[a-z]' to ',-[A-Z]'. In this case the result of the conversion would be 'this-is,-Toronto.-and-this-is,-Boston'.
I've been trying to get something working with re.sub(), but as yet haven't figured out how.
testString = 'this-is,-toronto.-and-this-is,-boston'
re.sub(r',_([a-z])', r',_??', testString)
Thanks!
|
[
"re.sub can take a function which returns the replacement string:\nimport re\n\ns = 'this-is,-toronto.-and-this-is,-boston'\nt = re.sub(',-[a-z]', lambda x: x.group(0).upper(), s)\nprint t\n\nprints\nthis-is,-Toronto.-and-this-is,-Boston\n\n"
] |
[
11
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000915391_python.txt
|
Q:
Regular Expression for Stripping Strings from Source Code
I'm looking for a regular expression that will replace strings in an input source code with some constant string value such as "string", and that will also take into account escaping the string-start character that is denoted by a double string-start character (e.g. "he said ""hello""").
To clarify, I will provide some examples of input and expected output:
input: print("hello world, how are you?")
output: print("string")
input: print("hello" + "world")
output: print("string" + "string")
# here's the tricky part:
input: print("He told her ""how you doin?"", and she said ""I'm fine, thanks""")
output: print("string")
I'm working in Python, but I guess this is language agnostic.
EDIT: According to one of the answers, this requirement may not be fit for a regular expression. I'm not sure that's true but I'm not an expert. If I try to phrase my requirement in words, what I'm looking for is to find sets of characters that are between double quotes, wherein even-length groups of adjacent double quotes should be disregarded, and that sounds to me like it can be recognized by a DFA.
Thanks.
A:
If you're parsing Python code, save yourself the hassle and let the standard library's parser module do the heavy lifting.
If you're writing your own parser for some custom language, it's awfully tempting to start out by just hacking together a bunch of regexes, but don't do it. You'll dig yourself into an unmaintainable mess. Read up on parsing techniques and do it right (wikipedia can help).
This regex does the trick for all three of your examples:
re.sub(r'"(?:""|[^"])+"', '"string"', original)
A:
Maybe:
re.sub(r"[^\"]\"[^\"].*[^\"]\"[^\"]",'"string"',input)
EDIT:
No that won't work for the final example.
I don't think your requirements are regular: they can't be matched by a regular expression. This is because at the heart of the matter, you need to match any odd number of " grouped together, as that is your delimiter.
I think you'll have to do it manually, counting "s.
A:
There's a very good string-matching regular expression over at ActiveState. If it doesn't work straight out for your last example it should be a fairly trivial repeat to group adjacent quoted strings together.
|
Regular Expression for Stripping Strings from Source Code
|
I'm looking for a regular expression that will replace strings in an input source code with some constant string value such as "string", and that will also take into account escaping the string-start character that is denoted by a double string-start character (e.g. "he said ""hello""").
To clarify, I will provide some examples of input and expected output:
input: print("hello world, how are you?")
output: print("string")
input: print("hello" + "world")
output: print("string" + "string")
# here's the tricky part:
input: print("He told her ""how you doin?"", and she said ""I'm fine, thanks""")
output: print("string")
I'm working in Python, but I guess this is language agnostic.
EDIT: According to one of the answers, this requirement may not be fit for a regular expression. I'm not sure that's true but I'm not an expert. If I try to phrase my requirement in words, what I'm looking for is to find sets of characters that are between double quotes, wherein even-length groups of adjacent double quotes should be disregarded, and that sounds to me like it can be recognized by a DFA.
Thanks.
|
[
"If you're parsing Python code, save yourself the hassle and let the standard library's parser module do the heavy lifting.\nIf you're writing your own parser for some custom language, it's awfully tempting to start out by just hacking together a bunch of regexes, but don't do it. You'll dig yourself into an unmaintainable mess. Read up on parsing techniques and do it right (wikipedia can help).\nThis regex does the trick for all three of your examples:\nre.sub(r'\"(?:\"\"|[^\"])+\"', '\"string\"', original)\n\n",
"Maybe:\nre.sub(r\"[^\\\"]\\\"[^\\\"].*[^\\\"]\\\"[^\\\"]\",'\"string\"',input)\n\nEDIT:\nNo that won't work for the final example.\nI don't think your requirements are regular: they can't be matched by a regular expression. This is because at the heart of the matter, you need to match any odd number of \" grouped together, as that is your delimiter.\nI think you'll have to do it manually, counting \"s.\n",
"There's a very good string-matching regular expression over at ActiveState. If it doesn't work straight out for your last example it should be a fairly trivial repeat to group adjacent quoted strings together.\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"python",
"regex",
"string"
] |
stackoverflow_0000914913_python_regex_string.txt
|
Q:
Project Euler Problem 245
I'm onto problem 245 now but have hit some problems. I've done some work on it already but don't feel I've made any real steps towards solving it. Here's what I've got so far:
We need to find n=ab with a and b positive integers. We can also assume gcd(a, b) = 1 without loss of generality and thus phi(n) = phi(ab) = phi(a)phi(b).
We are trying to solve:
Hence:
At this point I figured it would be a good idea to actually see how these numbers were distributed. I hacked together a brute-force program that I used to find all (composite) solutions up to 10^5:
15, 85, 255, 259, 391, 589, 1111, 3193, 4171, 4369, 12361, 17473, 21845, 25429, 28243, 47989, 52537, 65535, 65641, 68377, 83767, 91759
Importantly it looks like there won't be too many less than the 10^11 limit the problem asks for. The most interesting/useful bit I discovered was that k was quite small even for the large values of n. In fact the largest k was only 138. (Additionally, it seems k is always even.)
Considering this, I would guess it is possible to consider every value of k and find what value(s) n can be with that value of k.
Returning to the original equation, note that it can be rewritten as:
Since we know k:
And that's about as far as I have got; I'm still pursuing some of my routes but I wonder if I'm missing the point! With a brute force approach I have found the sum up to 10^8, which is 5699973227 (only 237 solutions for n).
I'm pretty much out of ideas; can anyone give away some hints?
Update: A lot of work has been done by many people and together we've been able to prove several things. Here's a list:
n is always odd and k is always even. k <= 10^5.5. n must be squarefree.
I have found every solution for when n=pq (2 prime factors) with p>q. I used the fact that for 2 primes q = k+factor(k^2-k+1) and p = k+[k^2-k+1]/factor(k^2-k+1). We also know for 2 primes k < q < 2k.
For n with 2 or more prime factors, all of n's primes are greater than k.
A:
Project Euler isn't fond of discussing problems on public forums like StackOverflow. All tasks are made to be done solo, if you encounter problems you may ask help for a specific mathematical or programming concept, but you can't just decide to ask how to solve the problem at hand - takes away the point of project Euler.
Point is to learn and come up with solutions yourself, and learn new concepts.
A:
Let me continue what jug started, but try a somewhat different approach. The goal again is to just find the numbers that have two distinct factors n=pq. As you already pointed out we are looking for the numbers such that n-phi(n) divides n-1. I.e., if n=pq then that means we are looking for p,q such that
p+q-1 divides pq-1
Assume we fix p and are looking for all primes q satisfying the equation above. The equation above doesn't look very easy to solve, hence the next step is to eliminate q as much as possible. In particular, we use that if a divides b then a also divides b + ka for any integer k. Hence
p+q-1 divides pq - 1 - p(p+q-1)
and simplifying this leads to the condition
p+q-1 divides p^2 - p + 1.
We may assume that p is the smaller prime factor of n. Then p is smaller than the square root of 10^11. Hence it is possible to find all numbers with two factors by iterating through all primes p below the square root of 10^11, then find the divisors of p^2-p+1, solve for q and check if q is prime and pq is a solution of the problem.
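A minimal sketch of that two-prime search; the trial-division primality test and the small demo limit are my own simplifications (a proper sieve would be needed to reach 10^11 in reasonable time):
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def two_factor_solutions(limit):
    solutions = set()
    p = 3  # n is always odd, so p = 2 can be skipped
    while p * p <= limit:
        if is_prime(p):
            m = p * p - p + 1
            d = 1
            while d * d <= m:
                if m % d == 0:
                    for div in (d, m // d):
                        q = div - p + 1  # from p + q - 1 = div
                        n = p * q
                        if q > p and n <= limit and is_prime(q):
                            # guaranteed by the derivation, but sanity-check
                            # that n - phi(n) divides n - 1 anyway:
                            if (n - 1) % (p + q - 1) == 0:
                                solutions.add(n)
                d += 1
        p += 2
    return sorted(solutions)

print two_factor_solutions(10 ** 4)
# [15, 85, 259, 391, 589, 1111, 3193, 4171, 4369]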
This of course, still leaves the integers with more than two prime factors. A somewhat similar approach works here too, but is more involved and needs further optimizations.
One question I can't answer is why this problem is formulated in such a complicated way. Couldn't the authors just have asked for the sum of composite integers where n-phi(n) divides n-1? So maybe I'm missing a big hint there.
Now, that the solutions with two prime factors are known, I'll try to find a potential algorithm for finding solutions with more than 2 prime factors. The goal is to find an algorithm that given a composite integer m finds all primes q such that mq is a solution. I.e., q must be such that
mq - phi(mq) divides mq - 1.
Let
F = mq - phi(mq).
Then of course
F = (m-phi(m)) q + phi(m).
As in the case of two prime factors it is possible to find a condition for F, by eliminating q from the left hand side of the equation above. Since F divides mq-1 it also divides
(m-phi(m))(mq - 1)
and hence also
m F - (m-phi(m))(mq - 1) = m phi(m) + m - phi(m).
Thus by finding all the divisors F of m phi(m) + m - phi(m) and by checking if
(F - phi(m))/ (m - phi(m)) is prime it is possible to find all solutions mq for a given m.
Since only the divisors F that satisfy
F == phi(m) (mod m - phi(m))
can lead to new solutions, this fact can sometimes be used to optimize the factorization of
m phi(m) + m - phi(m).
A:
Multiply primes. What I did, is first check every 2-prime product; store the ones that are successes. Then using the stored products, check those with more primes (every 3-prime product shown in your brute force has a 2-prime subset that works). Use these stored products, and try again with 4 primes, 5 primes etc.
The only downside is that you need a good sieve or list of primes.
Here is a list of the ones for N<=(10^7):
2 primes
15,85,259,391,589,1111,3193,4171,4369,12361,17473,25429,28243,47989,52537,65641,
68377,83767,91759,100777,120019,144097,186367,268321,286357,291919,316171,327937,
346063,353029,360301,404797,406867,524851,531721,558013,563767,633727,705667,738607,
910489,970141,1013539,1080769,1093987,1184233,1185421,1223869,1233823,1261807,1264693,
1455889,1487371,1529641,1574383,1612381,1617379,1657531,1793689,2016379,2095087,2130871,
2214031,2299459,2500681,2553709,2609689,2617963,2763697,3047521,3146677,3397651,3514603,
3539017,3820909,3961219,4078927,4186993,4197901,4499707,4552411,4935883,4975687,5103841,
5299351,5729257,5829877,5864581,6017299,6236401,6802531,6856609,8759011,9059233,9203377,
9301603,9305311,9526747,9536899,9583279,9782347,9900217
3 primes
255,21845,335923,3817309
4 primes
65535
5 primes
83623935
A:
In order not to give too much away, I'd suggest two things:
Analyze the sequence of numbers you've produced though brute-force: they all share a common characteristic. If you find what it is, you may then have a shot at brute forcing your way to a solution.
Find a more sophisticated factoring algorithm. Or even better: rather than finding the factors from the numbers, build the numbers from the factors...
EDIT: The patterns you will find will only add to your understanding, and hopefully show you how you could have achieved the same amount of knowledge by an adequate manipulation of the analytical expression. Without knowing that pattern, I'm afraid that there is no path to a solution. Plus, this is probably among the hardest Project Euler problems, so don't expect to find the solution without a lot of sweat and toil...
A:
no direct help for this problem, but maybe interesting for future math projects: instead of using WolframAlpha to analyze the sequence, I'd recommend "The On-Line Encyclopedia of Integer Sequences" on research.att.com.
Have fun solving all Euler problems!
A:
I haven't found a full solution, but I would like to share my thoughts. Perhaps someone could help.
I believe that one should try to reduce the problem to
[equation image not preserved]
complexity. The following facts can be used to make the search more effective:
Any solution must be an odd number
Any solution must be a multiple of distinct primes (no square number factors are allowed)
Others have pointed out about these and it is easy to prove them using only the basic properties of the totient function.
I'll start by analyzing all prime and composite numbers up to sqrt(10^11). This is not a big task and the time required should be well below the 1 minute requirement. All solutions above the square root are of the form:
a*b, where at least one of a,b < sqrt(10^11)
While iterating the range 0..sqrt(10^11), I will search for multiples of the number in iteration that are solutions. I will only cover the case of multiplying a number below the square root with a single prime. The solution set I will get this way will be a superset of the two-prime factors solution set. It will still not be the complete solution set, as solutions of the form p1p2p3, where p1p2,p2p3,p1p3>sqrt(10^11) will not be found.
Let b be the number below the square root and a the prime to multiply it.
[equation image not preserved]
We have:
[equation image not preserved]
Based on the facts that
phi(a) = a - 1 and phi(a)*phi(b) = phi(a*b) if a, b coprime
we have
[equation image not preserved]
The 'modulo' part on the right can be written as:
[equation image not preserved]
Let me temporarily accept that
[equation image not preserved]
Then I could solve the above equation for a (m=1), verify that the result is prime and then I will have the only solution that is a multiple of b. If the m is not within the limits to be the actual modulo, then I need to either solve the equation for different values of k:
[equation image not preserved]
(k values must be somehow limited) or prove that in that case there will be a higher b < sqrt(10^11) to cover for this.
There is a special case for b prime or b composite and mb = 0. In that case:
[equation image not preserved]
This can be calculated. For b a prime number:
[equation image not preserved]
I need to find a prime a that satisfies the equation:
[equation image not preserved]
For example, let b=3, phi(b)=2.
I need to solve (a + 2) | (3a - 1), since n - phi(n) = 3a - 2(a - 1) = a + 2 and n - 1 = 3a - 1.
Because 3a - 1 = 3(a + 2) - 7, a + 2 must divide 7, so a + 2 = 7 and a = 5, a prime (solution: n = 15).
No other value of a can satisfy the equation.
|
Project Euler Problem 245
|
I'm onto problem 245 now but have hit some problems. I've done some work on it already but don't feel I've made any real steps towards solving it. Here's what I've got so far:
We need to find n=ab with a and b positive integers. We can also assume gcd(a, b) = 1 without loss of generality and thus phi(n) = phi(ab) = phi(a)phi(b).
We are trying to solve:
Hence:
At this point I figured it would be a good idea to actually see how these numbers were distributed. I hacked together a brute-force program that I used to find all (composite) solutions up to 10^5:
15, 85, 255, 259, 391, 589, 1111, 3193, 4171, 4369, 12361, 17473, 21845, 25429, 28243, 47989, 52537, 65535, 65641, 68377, 83767, 91759
Importantly it looks like there won't be too many less than the 10^11 limit the problem asks for. The most interesting/useful bit I discovered was that k was quite small even for the large values of n. In fact the largest k was only 138. (Additionally, it seems k is always even.)
Considering this, I would guess it is possible to consider every value of k and find what value(s) n can be with that value of k.
Returning to the original equation, note that it can be rewritten as:
Since we know k:
And that's about as far as I have got; I'm still pursuing some of my routes but I wonder if I'm missing the point! With a brute force approach I have found the sum up to 10^8, which is 5699973227 (only 237 solutions for n).
I'm pretty much out of ideas; can anyone give away some hints?
Update: A lot of work has been done by many people and together we've been able to prove several things. Here's a list:
n is always odd and k is always even. k <= 10^5.5. n must be squarefree.
I have found every solution for when n=pq (2 prime factors) with p>q. I used the fact that for 2 primes q = k+factor(k^2-k+1) and p = k+[k^2-k+1]/factor(k^2-k+1). We also know for 2 primes k < q < 2k.
For n with 2 or more prime factors, all of n's primes are greater than k.
|
[
"Project Euler isn't fond of discussing problems on public forums like StackOverflow. All tasks are made to be done solo, if you encounter problems you may ask help for a specific mathematical or programming concept, but you can't just decide to ask how to solve the problem at hand - takes away the point of project Euler.\nPoint is to learn and come up with solutions yourself, and learn new concepts.\n",
"Let me continue what jug started, but try a somewhat different approach. The goal again is to just find the numbers that have two distinct factors n=pq. As you already pointed out we are looking for the numbers such that n-phi(n) divides n-1. I.e., if n=pq then that means we are looking for p,q such that\n p+q-1 divides pq-1\n\nAssume we fix p and are looking for all primes q satisfying the equation above. The equation above doesn't look very easy to solve, hence the next step is to eliminate q as much as possible. In particular, we use that if a divides b then a also divides b + ka for any integer k. Hence \n p+q-1 divides pq - 1 - p(p+q-1)\n\nand simplifying this leads to the condition \n p+q-1 divides p^2 - p + 1.\n\nWe may assume that p is the smaller prime factor of n. Then p is smaller than the square root of 1011. Hence it is possible to find all numbers with two factors by iterating through all primes p below the square root of 1011, then find the divisors of p^2-p+1, solve for q and check if q is prime and pq is a solution of the problem.\nThis of course, still leaves the integers with more than two prime factors. A somewhat similar approach works here too, but is more involved and needs further optimizations.\nOne question I can't answer is why is this problem formulated so complicated. Couldn't the authors just have asked for the sum of composite integers where n-phi(n) divides n-1. So maybe I'm missing a big hint there.\n\nNow, that the solutions with two prime factors are known, I'll try to find a potential algorithm for finding solutions with more than 2 prime factors. The goal is to find an algorithm that given a composite integer m finds all primes q such that mq is a solution. I.e., q must be such that\n mq - phi(mq) divides mq - 1.\n\nLet \n F = mq - phi(mq).\n\nThen of course \n F = (m-phi(m)) q + phi(m).\n\nAs in the case of two prime factors it is possible to find a condition for F, by eliminating q from the left hand side of the equation above. Since F divides mq-1 it also divides \n (m-phi(m))(mq - 1) \n\nand hence also \n m F - (m-phi(m))(mq - 1) = m phi(m) + m - phi(m).\n\nThus by finding all the divisors F of m phi(m) + m - phi(m) and by checking if \n(F - phi(m))/ (m - phi(m)) is prime it is possible to find all solutions mq for a given m.\nSince only the divisors F that satisfy\n F == phi(m) (mod m - phi(m))\n\ncan lead to new solutions, this fact can somtimes be used to optimze the factorization of\nm phi(m) + m - phi(m).\n",
"Multiply primes. What I did, is first check every 2-prime product; store the ones that are successes. Then using the stored products, check those with more primes (every 3-prime product shown in your brute force has a 2-prime subset that works). Use these stored products, and try again with 4 primes, 5 primes etc.\nThe only downside is that you need a good sieve or list of primes.\nHere is a list of the ones for N<=(10^7):\n2 primes\n15,85,259,391,589,1111,3193,4171,4369,12361,17473,25429,28243,47989,52537,65641,\n68377,83767,91759,100777,120019,144097,186367,268321,286357,291919,316171,327937\n,346063,353029,360301,404797,406867,524851,531721,558013,563767,633727,705667,73\n8607,910489,970141,1013539,1080769,1093987,1184233,1185421,1223869,1233823,12618\n07,1264693,1455889,1487371,1529641,1574383,1612381,1617379,1657531,1793689,20163\n79,2095087,2130871,2214031,2299459,2500681,2553709,2609689,2617963,2763697,30475\n21,3146677,3397651,3514603,3539017,3820909,3961219,4078927,4186993,4197901,44997\n07,4552411,4935883,4975687,5103841,5299351,5729257,5829877,5864581,6017299,62364\n01,6802531,6856609,8759011,9059233,9203377,9301603,9305311,9526747,9536899,95832\n79,9782347,9900217\n3 primes\n255,21845,335923,3817309\n4 primes\n65535\n5 primes\n83623935\n",
"In order not to give too much away, I'd suggest two things:\n\nAnalyze the sequence of numbers you've produced though brute-force: they all share a common characteristic. If you find what it is, you may then have a shot at brute forcing your way to a solution.\nFind a more sophisticated factoring algorithm. Or even better: rather than finding the factors from the numbers, build the numbers from the factors...\n\n\nEDIT: The patterns you wll find will only add to your understading, and hopefully show you how you could have achieved the same amount of knowledge by an adequate manipulation of the analytical expression. Without knowing that pattern, I'm afraid that there is no path to a solution. Plus, this is probably among the hardest Project Euler problems, so you need not worry about finding the solution without a lot of sweat and toil...\n",
"no direct help for this problem, but maybe interesting for future math projects: instead of using WolframAlpha to analyze the sequence, I'd recommend \"The On-Line Encyclopedia of Integer Sequences\" on research.att.com.\nHave fun solving all Euler problems!\n",
"I haven't found a full solution, but I would like to share my thoughts. Perhaps someone could help.\nI believe that one should try to reduce the problem to\n\n(source: texify.com)\ncomplexity.The following facts can be used to make the search more effective:\n\nAny solution must be an odd number\nAny solution must be a multiple of distinct primes (no square number factors are allowed)\n\nOthers have pointed out about these and it is easy to prove them using only the basic properties of the totient function.\nI'll start by analyzing all prime and composite numbers up to sqrt(10^11). This is not a big task and the time required should be well below the 1 minute requirement. All solutions above the square root are of the form:\na*b, where at least one of a,b < sqrt(10^11)\n\nWhile iterating the range 0..sqrt(10^11), I will search for multiples of the number in iteration that are solutions. I will only cover the case of multiplying a number below the square root with a single prime. The solution set I will get this way will be a superset of the two-prime factors solution set. It will still not be the complete solution set, as solutions of the form p1p2p3, where p1p2,p2p3,p1p3>sqrt(10^11) will not be found.\nLet b be the number below the square root and a the prime to multiply it.\n\n(source: texify.com)\nWe have:\n\n(source: texify.com)\nBased on the facts that\nphi(a) = a - 1 and phi(a)*phi(b) = phi(a*b) if a, b coprime\n\nwe have\n\n(source: texify.com)\nThe 'modulo' part on the right can be written as:\n\n(source: texify.com)\nLet me temporarily accept that\n\n(source: texify.com)\nThen I could solve the above equation for a (m=1), verify that the result is prime and then I will have the only solution that is a multiple of b. If the m is not within the limits to be the actual modulo, then I need to either solve the equation for different values of k:\n\n(source: texify.com)\n(k values must be somehow limited) or prove that in that case the will be a higher b < sqrt(10^11) to cover for this.\nThere is a special case for b prime or b composite and mb = 0. In that case:\n\n(source: texify.com)\nThis can be calculated. For b a prime number:\n\n(source: texify.com)\nI need to find a prime a that satisfies the equation:\n\n(source: texify.com)\nFor example, let b=3, phi(b)=2.\nI need to solve:\nk[3a-2(a-1)] - 6 = 1 => k(a + 2) = 5\nFor k=1, a=7, a prime (solution)\nFor all other values of k, the above equation can't be satisfied.\n"
] |
[
9,
4,
3,
1,
1,
0
] |
[] |
[] |
[
"algorithm",
"math",
"python"
] |
stackoverflow_0000875027_algorithm_math_python.txt
|
Q:
Python Best Practices: Abstract Syntax Trees
Modifying Abstract Syntax Trees
I would like to be able to build and modify an ast and then optionally write it out as python byte code for execution later without overhead.
I have been hacking around with the ast docs for python3.0 and python2.6, but I can't seem to find any good sources on best practices for this type of code.
Question
What are some best practices and guidelines for modifying abstract syntax trees in python?
[edit]
Unknown states that byteplay is a good example of such a library.
Also, benford cites GeniuSQL which uses abstract syntax trees to transform python code to SQL.
A:
Other than the manual and the source code, you are on your own. This subject and python bytecode are very undocumented.
Alternatively you could try using this Python bytecode library, which I have heard good things about but haven't tried yet:
http://code.google.com/p/byteplay/
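For what it's worth, here is a minimal hedged sketch of the 2.6 stdlib ast route mentioned in the question: parse source, rewrite the tree, then compile it to a code object for later execution (the transformer is a toy example):
import ast

class SwapAddToMult(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()  # turn every + into *
        return node

tree = ast.parse("result = 3 + 4")
tree = SwapAddToMult().visit(tree)
ast.fix_missing_locations(tree)

code = compile(tree, "<ast>", "exec")
ns = {}
exec code in ns
print ns["result"]  # 12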
A:
I think geniusql is doing something along those lines to translate an ast into sql... There was a talk on it but I can't find it - and I'm not allowed to link anyway :-(
|
Python Best Practices: Abstract Syntax Trees
|
Modifying Abstract Syntax Trees
I would like to be able to build and modify an ast and then optionally write it out as python byte code for execution later without overhead.
I have been hacking around with the ast docs for python3.0 and python2.6, but I can't seem to find any good sources on best practices for this type of code.
Question
What are some best practices and guidelines for modifying abstract syntax trees in python?
[edit]
Unknown states that byteplay is a good example of such a library.
Also, benford cites GeniuSQL which uses abstract syntax trees to transform python code to SQL.
|
[
"Other than the manual and the source code, you are on your own. This subject and python bytecode are very undocumented.\nAlternatively you could try using this python bytecode library which I have heard good thing about but haven't tried it yet:\nhttp://code.google.com/p/byteplay/\n",
"I think geniusql is doing something along those lines to translate an ast into sql... There was a talk on it but I can't find it - and I'm not allowed to link anyway :-(\n"
] |
[
6,
2
] |
[] |
[] |
[
"abstract_syntax_tree",
"python"
] |
stackoverflow_0000911930_abstract_syntax_tree_python.txt
|
Q:
How to debug PYGTK program
When python raise an exception in the middle of a pygtk signal handling callback, the exception is catched by the gtk main loop, its value printed and the main loop just continue, ignoring it.
If you want to debug, with something like pdb (python -m pdb myscript.py), you want that when the exception occure PDB jump on it and you can start debuging. Because of that it's not possible.
How can i debug pygtk program then ?
A:
You can't make pdb jump to the exception, since the exception is caught and silenced by gtk's main loop.
One of the alternatives is using pdb.set_trace():
import pdb
pdb.set_trace()
See pdb documentation.
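Another hedged option is to wrap each signal handler so that an uncaught exception drops you into a post-mortem pdb session; the decorator name here is my own:
import pdb
import sys
import traceback

def debuggable(handler):
    def wrapper(*args, **kwargs):
        try:
            return handler(*args, **kwargs)
        except Exception:
            traceback.print_exc()
            pdb.post_mortem(sys.exc_info()[2])
    return wrapper

# usage with a hypothetical button and handler:
#   button.connect("clicked", debuggable(on_clicked))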
Alternatively you can just use Winpdb:
It is a platform independent graphical GPL Python debugger with support for remote debugging over a network, multiple threads, namespace modification, embedded debugging, encrypted communication and is up to 20 times faster than pdb.
Features:
GPL license. Winpdb is Free Software.
Compatible with CPython 2.3 through 2.6 and Python 3000
Compatible with wxPython 2.6 through 2.8
Platform independent, and tested on Ubuntu Gutsy and Windows XP.
User Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later.
[Winpdb screenshot, source: winpdb.org]
|
How to debug PYGTK program
|
When Python raises an exception in the middle of a PyGTK signal-handling callback, the exception is caught by the GTK main loop, its value is printed, and the main loop just continues, ignoring it.
If you want to debug with something like pdb (python -m pdb myscript.py), you want PDB to jump in when the exception occurs so you can start debugging. Because of the behaviour above, that's not possible.
How can I debug a PyGTK program then?
|
[
"You can't make pdb jump to the exception, since the exception is caught and silenced by gtk's main loop.\nOne of the alternatives is using pdb.set_trace():\nimport pdb\npdb.set_trace()\n\nSee pdb documentation.\nAlternatively you can just use Winpdb:\nIt is a platform independent graphical GPL Python debugger with support for remote debugging over a network, multiple threads, namespace modification, embedded debugging, encrypted communication and is up to 20 times faster than pdb.\nFeatures:\n\nGPL license. Winpdb is Free Software.\nCompatible with CPython 2.3 through 2.6 and Python 3000\nCompatible with wxPython 2.6 through 2.8\nPlatform independent, and tested on Ubuntu Gutsy and Windows XP.\nUser Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later.\n\n\n(source: winpdb.org) \n"
] |
[
5
] |
[] |
[] |
[
"debugging",
"gtk",
"pygtk",
"python"
] |
stackoverflow_0000916674_debugging_gtk_pygtk_python.txt
|
Q:
Set up svnperms pre-commit hook
I'm trying to implement svnperms into a repository, but am having difficulty with a few things:
pre-commit has the execute permissions:
-rwxrwxr-x 1 svnadm svn 3018 May 27 10:11 pre-commit
This is my call to svnperms within pre-commit:
# Check that the author of this commit has the rights to perform
# the commit on the files and directories being modified.
SVNPERMS=/usr/local/svn/scripts/svnperms.py
$SVNPERMS -r $REPOS -t $TXN || exit 1
I've got svnperms.py installed in the location specified:
# ls -l /usr/local/svn/scripts
total 24
-rwxrwxr-x 1 svnadm svn 11840 May 25 07:48 svnperms.py
svnperms.py is in UNIX format - no ^M line endings.
TortoiseSVN comes back with:
Command: Commit
Modified: C:\projects\Sandbox\Trunk\Test.txt
Sending content: C:\projects\Sandbox\Trunk\Test.txt
Error: Commit failed (details follow):
Error: 'pre-commit' hook failed with error output:
Error: No such file or directory: python
Calling svnperms with no parameters shows:
/usr/local/svn/scripts/svnperms.py
missing required option(s): repository, either transaction or a revision
Usage: svnperms.py OPTIONS
Options:
-r PATH Use repository at PATH to check transactions
-t TXN Query transaction TXN for commit information
-f PATH Use PATH as configuration file (default is repository
path + /conf/svnperms.conf)
-s NAME Use section NAME as permission section (default is
repository name, extracted from repository path)
-R REV Query revision REV for commit information (for tests)
-A AUTHOR Check commit as if AUTHOR had commited it (for tests)
-h Show this message
The UNIX admin has recently installed python, but wasn't sure if all the dependencies are there. If that's the issue, I'm not sure how to determine that, since python seems to be working based on the output above.
What am I missing?
A:
My guess is that the location of the python binary is not in $PATH for the svn server. The shebang line of svnperms.py reads:
#!/usr/bin/env python
But that assumes that the executable lies in the $PATH of the caller. If you don't have permissions to modify the runtime environment of your subversion server, try replacing python in the shebang line with the path given by which python when you run it interactively.
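For example, if which python prints /usr/local/bin/python (an illustrative path), the first line of svnperms.py would become:
#!/usr/local/bin/python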
|
Set up svnperms pre-commit hook
|
I'm trying to implement svnperms into a repository, but am having difficulty with a few things:
pre-commit has the execute permissions:
-rwxrwxr-x 1 svnadm svn 3018 May 27 10:11 pre-commit
This is my call to svnperms within pre-commit:
# Check that the author of this commit has the rights to perform
# the commit on the files and directories being modified.
SVNPERMS=/usr/local/svn/scripts/svnperms.py
$SVNPERMS -r $REPOS -t $TXN || exit 1
I've got svnperms.py installed in the location specified:
# ls -l /usr/local/svn/scripts
total 24
-rwxrwxr-x 1 svnadm svn 11840 May 25 07:48 svnperms.py
svnperms.py is in UNIX format - no ^M line endings.
TortoiseSVN comes back with:
Command: Commit
Modified: C:\projects\Sandbox\Trunk\Test.txt
Sending content: C:\projects\Sandbox\Trunk\Test.txt
Error: Commit failed (details follow):
Error: 'pre-commit' hook failed with error output:
Error: No such file or directory: python
Calling svnperms with no parameters shows:
/usr/local/svn/scripts/svnperms.py
missing required option(s): repository, either transaction or a revision
Usage: svnperms.py OPTIONS
Options:
-r PATH Use repository at PATH to check transactions
-t TXN Query transaction TXN for commit information
-f PATH Use PATH as configuration file (default is repository
path + /conf/svnperms.conf)
-s NAME Use section NAME as permission section (default is
repository name, extracted from repository path)
-R REV Query revision REV for commit information (for tests)
-A AUTHOR Check commit as if AUTHOR had commited it (for tests)
-h Show this message
The UNIX admin has recently installed python, but wasn't sure if all the dependencies are there. If that's the issue, I'm not sure how to determine that, since python seems to be working based on the output above.
What am I missing?
|
[
"My guess is that the location of the python binary is not in $PATH for the svn server. The shabang line of svnperms.py reads:\n#!/usr/bin/env python\n\nBut that assumes that the executable lies in the $PATH of the caller. If you don't have permissions to modify the runtime environment of your subversion server, try replacing python in the shabang line with the path given by which python when you run it interactively.\n"
] |
[
6
] |
[] |
[] |
[
"pre_commit",
"python",
"svn",
"unix"
] |
stackoverflow_0000916758_pre_commit_python_svn_unix.txt
|
Q:
Is there a list of Python packages that are not 64 bit compatible somewhere?
I am going to move to a 64 bit machine and a 64 bit OS (Windows) and am trying to figure out if any of the extensions/packages I am using are going to be lost when I make the move. I can't seem to find whether someone has built a list of known issues as flagged on the Python 2.5 release page. I have been using 2.5 but will at this time move to 2.6. I see that the potential conflicts will arise because of a module relying on a C extension module that would not be compatible in a 64 bit environment. But I don't know how to anticipate them. I want to move to a 64 bit system because my IT guys told me that is the only way to make a meaningful move up the memory ladder.
A:
We're running 2.5 on a 64-bit Red Hat Enterprise Linux server.
Everything appears to be working.
I would suggest you do what we did.
Get a VM.
Load up the app.
Test it.
It was easier than trying to do research.
A:
Perhaps you should figure out what "make a meaningful move up the memory ladder" means. Do you currently need to address more than 4GB of RAM? If not then you don't need a 64-bit system.
A:
It really depends on the specific modules you are using. I am running several 64-bit Linux systems and I have yet to come across problems with any of the C modules that I use.
Most C modules can be built from source, so you should read about the Python distribution utility distutils to see how you can build these modules if you cannot find 64-bit binaries.
Whether a specific module will work in a 64-bit environment depends on how the code was written. Many modules work correctly when compiled for 64 bits; however, there is a chance that some won't. Many popular modules, such as those from SciPy, work just fine.
In short you will either need to just try the module on a 64-bit system or you will have to find the developer/project page and determine if there is a 64-bit build or if there are known bugs.
A:
It seems like you already know this, but it's worth pointing out for the sake of completeness. With that said, remember that you shouldn't have any problems with pure Python packages.
Secondly, you also don't necessarily have to install the 64-bit version of Python unless you're planning on running a program that will take up greater than 4 GB of memory. The 32-bit version of Python should work perfectly fine on 64-bit windows.
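If it helps, here is a quick hedged way to check which flavour of interpreter you actually ended up with:
import platform
import struct

print platform.architecture()[0]  # '32bit' or '64bit'
print struct.calcsize("P") * 8    # pointer width in bits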
|
Is there a list of Python packages that are not 64 bit compatible somewhere?
|
I am going to move to a 64 bit machine and a 64 bit OS (Windows) and am trying to figure out if any of the extensions/packages I am using are going to be lost when I make the move. I can't seem to find whether someone has built a list of known issues as flagged on the Python 2.5 release page. I have been using 2.5 but will at this time move to 2.6. I see that the potential conflicts will arise because of a module relying on a C extension module that would not be compatible in a 64 bit environment. But I don't know how to anticipate them. I want to move to a 64 bit system because my IT guys told me that is the only way to make a meaningful move up the memory ladder.
|
[
"We're running 2.5 on a 64-bit Red Hat Enterprise Linux server.\nEverything appears to be working.\nI would suggest you do what we did. \n\nGet a VM.\nLoad up the app.\nTest it.\n\nIt was easier than trying to do research.\n",
"Perhaps you should figure out what \"make a meaningful move up the memory ladder\" means. Do you currently need to address more than 4GB of RAM? If not then you don't need a 64-bit system.\n",
"It really depends on the specific modules you are using. I am running several 64-bit Linux systems and I have yet to come across problems with any of the C modules that I use.\nMost C modules can be built from source, so you should read about the Python distribution utility distutils to see how you can build these modules if you cannot find 64-bit binaries. \nWhether a specific module will work in a 64-bit environment depends on how the code was written. Many modules work correctly when compiled for 64-bits, however there is a chance that it won't. Many popular modules such those from SciPy work just fine.\nIn short you will either need to just try the module on a 64-bit system or you will have to find the developer/project page and determine if there is a 64-bit build or if there are known bugs. \n",
"It seems like you already know this, but it's worth pointing out for the sake of completeness. With that said, remember that you shouldn't have any problems with pure Python packages. \nSecondly, you also don't necessarily have to install the 64-bit version of Python unless you're planning on running a program that will take up greater than 4 GB of memory. The 32-bit version of Python should work perfectly fine on 64-bit windows.\n"
] |
[
4,
3,
1,
1
] |
[] |
[] |
[
"64_bit",
"package",
"python"
] |
stackoverflow_0000916952_64_bit_package_python.txt
|
Q:
How does Python OOP compare to PHP OOP?
I'm basically wondering if Python has any OOP shortcomings like PHP does. PHP has been developing their OOP practices for the last few versions. It's getting better in PHP but it's still not perfect. I'm new to Python and I'm just wondering if Python's OOP support is better or just comparable.
If there are some issues in Python OOP which don't follow proper OOP practices I would definitely like to know those. PHP for instance, doesn't allow for multiple inheritance as far as I'm aware.
Thanks Everyone!
Edit:
How about support for public and private members, or support for variable types? I think these are important when building OOP software.
A:
I would say that Python's OOP support is much better given the fact that it was introduced into the language in its infancy as opposed to PHP which bolted OOP onto an existing procedural model.
A:
Python's OOP support is very strong; it does allow multiple inheritance, and everything is manipulable as a first-class object (including classes, methods, etc).
Polymorphism is expressed through duck typing. For example, you can iterate over a list, a tuple, a dictionary, a file, a web resource, and more all in the same way.
There are a lot of little pedantic things that are debatably not OO, like getting the length of a sequence with len(list) rather than list.len(), but it's best not to worry about them.
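For instance, here is a tiny sketch of multiple inheritance and duck typing together; the class names are made up for illustration:
class Reader:
    def read(self):
        return "some data"

class Logger:
    def log(self, msg):
        print "LOG:", msg

class LoggingReader(Reader, Logger):  # inherits from both base classes
    pass

lr = LoggingReader()
lr.log(lr.read())

# duck typing: anything iterable works, no interface declaration needed
for source in (["a", "b"], ("c", "d"), {"e": 1}):
    for item in source:
        print item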
A:
One aspect of Python's OOP model that is unusual is its encapsulation mechanism. Basically, Python assumes that programmers don't do bad things, and so it doesn't go out of its way to any extent to protect private member variables or methods.
It works by mangling the names of members that begin with two underscores and end with fewer than two. Such identifiers are changed everywhere so that they have the class name prepended, with an additional underscore before that. Thus:
class foo:
def public(self):
return self.__private()
def __private(self):
return 5
print foo().public()
print foo()._foo__private()
Names beginning and ending with two (or more) underscores are not mangled, so __init__, the method Python uses for constructing new instances, is left alone.
Here's a link explaining it in more detail.
A:
I think they're comparable at this point. As a simple test, I doubt there's any pattern in Design Patterns or Patterns of Enterprise Application Architecture, arguably the two most influential books in OOP, that is impossible to implement in either language.
Both languages have come along by leaps and bounds since their infancies.
As far as multiple inheritance goes, it often creates more problems than it solves, and is, these days, commonly left out of languages as an intentional design decision.
A:
Also: Python has native operator overloading, unlike PHP (although an extension does exist). Love it or hate it, it's there.
A:
If you are looking for "more pure" OOP, you should be looking at SmallTalk and/or Ruby.
PHP has grown considerably with its support for OOP, but because of the way it works (reloads everything every time), things can get really slow if OOP best practices are followed. Which is one of the reasons you don't hear about PHP on Rails much.
|
How does Python OOP compare to PHP OOP?
|
I'm basically wondering if Python has any OOP shortcomings like PHP does. PHP has been developing their OOP practices for the last few versions. It's getting better in PHP but it's still not perfect. I'm new to Python and I'm just wondering if Python's OOP support is better or just comparable.
If there are some issues in Python OOP which don't follow proper OOP practices I would definitely like to know those. PHP for instance, doesn't allow for multiple inheritance as far as I'm aware.
Thanks Everyone!
Edit:
How about support for Public and Private? or support of variable types. I think these are important regarding building OOP software.
|
[
"I would say that Python's OOP support is much better given the fact that it was introduced into the language in its infancy as opposed to PHP which bolted OOP onto an existing procedural model.\n",
"Python's OOP support is very strong; it does allow multiple inheritance, and everything is manipulable as a first-class object (including classes, methods, etc). \nPolymorphism is expressed through duck typing. For example, you can iterate over a list, a tuple, a dictionary, a file, a web resource, and more all in the same way.\nThere are a lot of little pedantic things that are debatably not OO, like getting the length of a sequence with len(list) rather than list.len(), but it's best not to worry about them.\n",
"One aspect of Python's OOP model that is unusual is its encapsulation mechanism. Basically, Python assumes that programmers don't do bad things, and so it doesn't go out of its way to any extent to protect private member variables or methods. \nIt works by mangling names of members that begin with a two underscores and ending with fewer than two. Such identifiers are everywhere changed so that they have the class name prepended, with an additional underscore before that. thus: \nclass foo:\n def public(self):\n return self.__private()\n def __private(self):\n return 5\n\nprint foo().public()\nprint foo()._foo__private()\n\nnames beginning and ending with two (or more) underscores are not mangled, so __init__ the method python uses for constructing new instances, is left alone. \nHere's a link explaining it in more detail.\n",
"I think they're comparable at this point. As a simple test, I doubt there's any pattern in Design Patterns or Patterns of Enterprise Application Architecture, arguably the two most influential books in OOP, that is impossible to implement in either language. \nBoth languages have come along by leaps and bounds since their infancies.\nAs far as multiple inheritance, it often creates more problems than it solves, and is, these days, commonly left out of languages as an intentional design decision.\n",
"Also: Python has native operator overloading, unlike PHP (although it does exist an extension). Love it or hate it, it's there.\n",
"If you are looking for \"more pure\" OOP, you should be looking at SmallTalk and/or Ruby.\nPHP has grown considerably with it's support for OOP, but because of the way it works (reloads everything every time), things can get really slow if OOP best practices are followed. Which is one of the reasons you don't hear about PHP on Rails much.\n"
] |
[
20,
8,
8,
3,
3,
1
] |
[] |
[] |
[
"comparison",
"oop",
"php",
"python"
] |
stackoverflow_0000916962_comparison_oop_php_python.txt
|
Q:
Does Ruby have an equivalent of Python's twisted framework as a networking abstraction layer?
From my understanding, Python's twisted framework provides a higher-level abstraction for networking communications (?).
I am looking for a Ruby equivalent of twisted to use in a Rails application.
A:
Take a look at EventMachine. It's not as extensive as Twisted but it's built around the same concepts of event-driven network programming.
|
Does Ruby have an equivalent of Python's twisted framework as a networking abstraction layer?
|
From my understanding, Python's twisted framework provides a higher-level abstraction for networking communications (?).
I am looking for a Ruby equivalent of twisted to use in a Rails application.
|
[
"Take a look at EventMachine. It's not as extensive as Twisted but it's built around the same concepts of event-driven network programming.\n"
] |
[
7
] |
[] |
[] |
[
"python",
"ruby",
"ruby_on_rails",
"twisted"
] |
stackoverflow_0000917369_python_ruby_ruby_on_rails_twisted.txt
|
Q:
Sorting a tuple that contains lists
I have a similar question to this one but instead my tuple contains lists, as follows:
mytuple = (
["tomato", 3],
["say", 2],
["say", 5],
["I", 4],
["you", 1],
["tomato", 6],
)
What's the most efficient way of sorting this?
A:
You can get a sorted tuple easy enough:
>>> sorted(mytuple)
[['I', 4], ['say', 2], ['say', 5], ['tomato', 3], ['tomato', 6], ['you', 1]]
This will sort based on the items in the lists. If the first elements match, it compares the second elements, and so on.
If you have a different criteria, you can provide a comparison function.
Updated: As a commenter noted, this returns a list. You can get another tuple like so:
>>> tuple(sorted(mytuple))
(['I', 4], ['say', 2], ['say', 5], ['tomato', 3], ['tomato', 6], ['you', 1])
A:
You cannot sort a tuple.
What you can do is use sorted() which will not sort the tuple, but will create a sorted list from your tuple. If you really need a sorted tuple, you can then cast the return from sorted as a tuple:
mytuple = tuple(sorted(mytuple, key=lambda row: row[1]))
This can be a waste of memory since you are creating a list and then discarding it (and also discarding the original tuple). Chances are you don't need a tuple. Much more efficient would be to start with a list and sort that.
A:
You will have to instantiate a new tuple, unfortunately: something like
mytuple = sorted(mytuple)
should do the trick. sorted won't return a tuple, though; wrap the call in tuple() if you need that. This could potentially be costly if the data set is long.
If you need to sort on the second element in the sublists, you can use the key parameter to the sorted function. You'll need a helper function for that:
mytuple = sorted(mytuple, key=lambda row: row[1])
A:
The technique used in the accepted answer to that question (sorted(..., key=itemgetter(...))) should work with any iterable of this kind. Based on the data you present here, I think the exact solution presented there is what you want.
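For the record, here is a sketch of that with the data from the question; itemgetter(1) sorts on the second element of each inner list:
from operator import itemgetter

mytuple = (
    ["tomato", 3],
    ["say", 2],
    ["say", 5],
    ["I", 4],
    ["you", 1],
    ["tomato", 6],
)

print tuple(sorted(mytuple, key=itemgetter(1)))
# (['you', 1], ['say', 2], ['tomato', 3], ['I', 4], ['say', 5], ['tomato', 6])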
|
Sorting a tuple that contains lists
|
I have a similar question to this one but instead my tuple contains lists, as follows:
mytuple = (
["tomato", 3],
["say", 2],
["say", 5],
["I", 4],
["you", 1],
["tomato", 6],
)
What's the most efficient way of sorting this?
|
[
"You can get a sorted tuple easy enough:\n>>> sorted(mytuple)\n[['I', 4], ['say', 2], ['say', 5], ['tomato', 3], ['tomato', 6], ['you', 1]]\n\nThis will sort based on the items in the list. If the first two match, it compares the second, etc.\nIf you have a different criteria, you can provide a comparison function.\nUpdated: As a commenter noted, this returns a list. You can get another tuple like so:\n>>> tuple(sorted(mytuple))\n(['I', 4], ['say', 2], ['say', 5], ['tomato', 3], ['tomato', 6], ['you', 1])\n\n",
"You cannot sort a tuple. \nWhat you can do is use sorted() which will not sort the tuple, but will create a sorted list from your tuple. If you really need a sorted tuple, you can then cast the return from sorted as a tuple:\nmytuple = tuple(sorted(mytuple, key=lambda row: row[1]))\n\nThis can be a waste of memory since you are creating a list and then discarding it (and also discarding the original tuple). Chances are you don't need a tuple. Much more efficient would be to start with a list and sort that. \n",
"You will have to instantiate a new tuple, unfortunately: something like\nmytuple = sorted(mytuple)\n\nshould do the trick. sorted won't return a tuple, though. wrap the call in tuple() if you need that. This could potentially be costly if the data set is long.\nIf you need to set on the second element in the sublists, you can use the key parameter to the sorted function. You'll need a helper function for that: \nmytuple = sorted(mytuple, key=lambda row: row[1])\n\n",
"The technique used in the accepted answer to that question (sorted(..., key=itemgetter(...))) should work with any iterable of this kind. Based on the data you present here, I think the exact solution presented there is what you want.\n"
] |
[
7,
5,
1,
1
] |
[] |
[] |
[
"list",
"python",
"sorting",
"tuples"
] |
stackoverflow_0000917202_list_python_sorting_tuples.txt
|
Q:
How can I handle a mouseMiddleDrag event in PythonCard?
I would like to use the middle mouse button to drag an image in an application written in Python and using PythonCard/wxPython for the GUI.
The latest version of PythonCard only implements a "left mouse button drag" event and I am trying to modify PythonCard to handle a "middle mouse button drag" as well.
Here is the relevant code from Lib\site-packages\PythonCard\event.py :
class MouseMoveEvent(MouseEvent, InsteadOfTypeEvent):
name = 'mouseMove'
binding = wx.EVT_MOTION
id = wx.wxEVT_MOTION
def translateEventType(self, aWxEvent):
if aWxEvent.Dragging():
return MouseDragEvent.id
else:
return self.id
class MouseDragEvent(MouseMoveEvent):
name = 'mouseDrag'
id = wx.NewEventType()
class MouseMiddleDragEvent(MouseMoveEvent): #My addition
name = 'mouseMiddleDrag'
id = wx.NewEventType()
My addition does not work. What can I do instead? Is there a specific wxPython method that I could use to bypass PythonCard?
A:
It turns out that the mouseDrag event is active regardless of which button on the mouse is pressed. To filter the middle mouse button, you need to call the MiddleIsDown() method from the MouseEvent.
def on_mouseDrag( self, event ):
do_stuff()
if event.MiddleIsDown():
do_other_stuff()
|
How can I handle a mouseMiddleDrag event in PythonCard?
|
I would like to use the middle mouse button to drag an image in an application written in Python and using PythonCard/wxPython for the GUI.
The latest version of PythonCard only implements a "left mouse button drag" event and I am trying to modify PythonCard to handle a "middle mouse button drag" as well.
Here is the relevant code from Lib\site-packages\PythonCard\event.py :
class MouseMoveEvent(MouseEvent, InsteadOfTypeEvent):
name = 'mouseMove'
binding = wx.EVT_MOTION
id = wx.wxEVT_MOTION
def translateEventType(self, aWxEvent):
if aWxEvent.Dragging():
return MouseDragEvent.id
else:
return self.id
class MouseDragEvent(MouseMoveEvent):
name = 'mouseDrag'
id = wx.NewEventType()
class MouseMiddleDragEvent(MouseMoveEvent): #My addition
name = 'mouseMiddleDrag'
id = wx.NewEventType()
My addition does not work. What can I do instead? Is there a specific wxPython method that I could use to bypass PythonCard?
|
[
"It turns out the the mouseDrag event is active regardless of which button on the mouse is pressed. To filter the middle mouse button, you need to call the MiddleIsDown() method from the MouseEvent.\ndef on_mouseDrag( self, event ): \n do_stuff()\n\n if event.MiddleIsDown():\n do_other_stuff()\n\n"
] |
[
1
] |
[] |
[] |
[
"mouse",
"python",
"pythoncard",
"user_interface",
"wxpython"
] |
stackoverflow_0000916435_mouse_python_pythoncard_user_interface_wxpython.txt
|
Q:
Reinstall /Library/Python on OS X Leopard
I accidentally removed /Library/Python on OS X Leopard. How can I reinstall that?
A:
If you'd like, I'll create a tarball from a pristine installation. I'm using Mac OS X 10.5.7, and it's only 12K.
A:
I'm using 10.4, but unless the installation changed dramatically in 10.5, /Library/Python is just a place to install local (user-installed) packages; the actual Python install is under /System. On 10.4, I have the following structure:
/Library/
Python/
2.3/
README
site-packages/
README
So just re-creating that structure may suffice. (But instead of 2.3, use the version of Python installed on 10.5.)
A:
/Library/Python contains your python site-packages, which is the local software you've installed using commands like python setup.py install. The pieces here are third-party packages, not items installed by Apple - your actual Python installation is still safe in /System/Library/etc...
In other words, the default OS leaves these directories mostly blank... nothing in there is critical (just a readme and a path file).
In this case, you'll have to:
Recreate the directory structure.
Re-install your third-party libraries.
The directory structure on a default OS X install is:
/Library/Python/2.3/site-packages
/Library/Python/2.5/site-packages
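A minimal sketch of recreating that layout from Python, assuming the 2.5 paths above (writing under /Library normally requires root):
import os

# Leopard ships Python 2.5; adjust the version directory if yours differs
site_packages = '/Library/Python/2.5/site-packages'
if not os.path.isdir(site_packages):
    os.makedirs(site_packages)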
|
Reinstall /Library/Python on OS X Leopard
|
I accidentally removed /Library/Python on OS X Leopard. How can I reinstall that?
|
[
"If you'd like, I'll create a tarball from a pristine installation. I'm using MacOSX 10.5.7, and only 12K.\n",
"I'm using 10.4, but unless the installation changed dramatically in 10.5, /Library/Python is just a place to install local (user-installed) packages; the actual Python install is under /System. On 10.4, I have the following structure:\n/Library/\n Python/\n 2.3/\n README\n site-packages/\n README\n\nSo just re-creating that structure may suffice. (But instead of 2.3, use the version of Python installed on 10.5.)\n",
"/Library/Python contains your python site-packages, which is the local software you've installed using commands like python setup.py install. The pieces here are third-party packages, not items installed by Apple - your actual Python installation is still safe in /System/Library/etc... \nIn other words, the default OS leaves these directories mostly blank... nothing in there is critical (just a readme and a path file).\nIn this case, you'll have to :\n\nRecreate the directory structure:\nRe-install your third-party libraries.\n\nThe directory structure on a default OS X install is:\n\n/Library/Python/2.3/site-packages\n /Library/Python/2.5/site-packages\n\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"macos",
"osx_leopard",
"python",
"reinstall"
] |
stackoverflow_0000917876_macos_osx_leopard_python_reinstall.txt
|
Q:
When Should I Start Thinking About Moving to Python 3?
Possible Duplicate:
Why won't you switch to Python 3.x?
I see there are already a lot of duplicate questions asking whether or not new Python programmers should learn 2 or 3. I am not asking that question.
I am already a Python 2 programmer. I started tinkering with it some years ago. I started using it almost exclusively for my personal projects about a year ago. I even recently switched from a PHP job to a Python job. However, all this has been with Python 2.
Python 3 is out now, and I know that it is not backwards compatible with 2, although it is similar. I don't think I'm going to have any problem learning Python 3. However, I am going to have a problem transitioning old code, if it becomes necessary. Also, if development efforts move from Python 2 to 3, I can't be stuck developing on a deprecated platform.
It seems that for the moment, Python 2 is still going strong, and there isn't really any push to transition to 3. That can't last forever, though. When should I start to make a move?
A:
The best answer I can give you is change when you need to. If you have no need for Python 3, then don't switch. If you aren't sure if you need to switch, chances are that you don't.
That said, once Python 3 becomes the more widely used version (in a few years, not anytime soon), you'll probably want to switch just because it will be more supported (more libraries, etc).
If you don't have any Python 2-specific libraries, you could write new projects in Python 3 just to ease the transition, but you don't need to at this point.
A:
If you can switch now, you might as well. Learning the newest will always help in the future.
Being that you have been using 2, then there is no concern that you won't know how to use that.
|
When Should I Start Thinking About Moving to Python 3?
|
Possible Duplicate:
Why won't you switch to Python 3.x?
I see there are already a lot of duplicate questions asking whether or not new Python programmers should learn 2 or 3. I am not asking that question.
I am already a Python 2 programmer. I started tinkering with it some years ago. I started using it almost exclusively for my personal projects about a year ago. I even recently switched from a PHP job to a Python job. However, all this has been with Python 2.
Python 3 is out now, and I know that it is not backwards compatible with 2, although it is similar. I don't think I'm going to have any problem learning Python 3. However, I am going to have a problem transitioning old code, if it becomes necessary. Also, if development efforts move from Python 2 to 3, I can't be stuck developing on a deprecated platform.
It seems that for the moment, Python 2 is still going strong, and there isn't really any push to transition to 3. That can't last forever, though. When should I start to make a move?
|
[
"The best answer I can give you is change when you need to. If you have no need for Python 3, then don't switch. If you aren't sure if you need to switch, chances are that you don't.\nThat said, once Python 3 becomes the more widely used version (in a few years, not anytime soon), you'll probably want to switch just because it will be more supported (more libraries, etc).\nIf you don't have any Python 2-specific libraries, you could write new projects in Python 3 just to ease the transition, but you don't need to at this point.\n",
"If you can switch now, you might as well. Learning the newest will always help in the future.\nBeing that you have been using 2, then there is no concern that you won't know how to use that.\n"
] |
[
3,
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0000917987_python_python_3.x.txt
|
Q:
Is there a way in Python to index a list of containers (tuples, lists, dictionaries) by an element of a container?
I have been poking around for a recipe / example to index a list of tuples without resorting to a variation of the decorate, sort, undecorate approach.
For example:
l=[(a,b,c),(x,c,b),(z,c,b),(z,c,d),(a,d,d),(x,d,c) . . .]
The approach I have been using is to build a dictionary using defaultdict of the second element
from collections import defaultdict
tdict=defaultdict(int)
for myTuple in l:
tdict[myTuple[1]]+=1
Then I have to build a list consisting of only the second item in the tuple for each item in the list. While there are a number of ways to get there, a simple approach is:
tempList=[myTuple[1] for myTuple in l]
and then generate an index of each item in tdict
indexDict=defaultdict(dict)
for key in tdict:
indexDict[key]['index']=tempList.index(key)
Clearly this does not seem very Pythonic. I have been trying to find examples or insights thinking that I should be able to use something magical to get the index directly. No such luck so far.
Note, I understand that I can take my approach a little more directly and not generate tdict.
output could be a dictionary with the index
indexDict={'b':{'index':0},'c':{'index':1},'d':{'index':4},. . .}
After learning a lot from Nadia's responses I think the answer is no.
While her response works I think it is more complicated than needed. I would simply
def build_index(someList):
indexDict={}
for item in enumerate(someList):
if item[1][1] not in indexDict:
indexDict[item[1][1]]=item[0]
return indexDict
A:
This will generate the result you want
dict((myTuple[1], index) for index, myTuple in enumerate(l))
>>> l = [(1, 2, 3), (4, 5, 6), (1, 4, 6)]
>>> dict((myTuple[1], index) for index, myTuple in enumerate(l))
{2: 0, 4: 2, 5: 1}
And if you insist on using a dictionary to represent the index:
dict((myTuple[1], {'index': index}) for index, myTuple in enumerate(l))
The result will be:
{2: {'index': 0}, 4: {'index': 2}, 5: {'index': 1}}
EDIT
If you want to handle key collision then you'll have to extend the solution like this:
def build_index(l):
indexes = [(myTuple[1], index) for index, myTuple in enumerate(l)]
d = {}
for e, index in indexes:
d[e] = min(index, d.get(e, index))
return d
>>> l = [(1, 2, 3), (4, 5, 6), (1, 4, 6), (2, 4, 6)]
>>> build_index(l)
{2: 0, 4: 2, 5: 1}
EDIT 2
And a more generalized and compact solution (in a similar definition to sorted)
def index(l, key):
d = {}
for index, myTuple in enumerate(l):
d[key(myTuple)] = min(index, d.get(key(myTuple), index))
return d
>>> index(l, lambda a: a[1])
{2: 0, 4: 2, 5: 1}
So the answer to your question is yes: There is a way in Python to index a list of containers (tuples, lists, dictionaries) by an element of a container without preprocessing. But your request of storing the result in a dictionary makes it impossible to be a one liner. But there is no preprocessing here. The list is iterated only once.
A:
I think this is what you're asking...
l = ['asd', 'asdxzc']
d = {}
for i, x in enumerate(l):
d[x] = {'index': i}
|
Is there a way in Python to index a list of containers (tuples, lists, dictionaries) by an element of a container?
|
I have been poking around for a recipe / example to index a list of tuples without taking a modification of the decorate, sort, undecorate approach.
For example:
l=[(a,b,c),(x,c,b),(z,c,b),(z,c,d),(a,d,d),(x,d,c) . . .]
The approach I have been using is to build a dictionary using defaultdict of the second element
from collections import defaultdict
tdict=defaultdict(int)
for myTuple in l:
tdict[myTuple[1]]+=1
Then I have to build a list consisting of only the second item in the tuple for each item in the list. While there are a number of ways to get there a simple approach is to:
tempList=[myTuple[1] for myTuple in l]
and then generate an index of each item in tdict
indexDict=defaultdict(dict)
for key in tdict:
indexDict[key]['index']=tempList.index(key)
Clearly this does not seem very Pythonic. I have been trying to find examples or insights thinking that I should be able to use something magical to get the index directly. No such luck so far.
Note, I understand that I can take my approach a little more directly and not generating tdict.
output could be a dictionary with the index
indexDict={'b':{'index':0},'c':{'index':1},'d':{'index':4},. . .}
After learning a lot from Nadia's responses I think the answer is no.
While her response works I think it is more complicated than needed. I would simply
def build_index(someList):
indexDict={}
for item in enumerate(someList):
if item[1][1] not in indexDict:
indexDict[item[1][1]]=item[0]
return indexDict
|
[
"This will generate the result you want\ndict((myTuple[1], index) for index, myTuple in enumerate(l))\n\n>>> l = [(1, 2, 3), (4, 5, 6), (1, 4, 6)]\n>>> dict((myTuple[1], index) for index, myTuple in enumerate(l))\n{2: 0, 4: 2, 5: 1}\n\nAnd if you insist on using a dictionary to represent the index:\ndict((myTuple[1], {'index': index}) for index, myTuple in enumerate(l))\n\nThe result will be:\n{2: {'index': 0}, 4: {'index': 2}, 5: {'index': 1}}\n\n\nEDIT\nIf you want to handle key collision then you'll have to extend the solution like this:\ndef build_index(l):\n indexes = [(myTuple[1], index) for index, myTuple in enumerate(l)]\n d = {}\n for e, index in indexes:\n d[e] = min(index, d.get(e, index))\n return d\n\n>>> l = [(1, 2, 3), (4, 5, 6), (1, 4, 6), (2, 4, 6)]\n>>> build_index(l)\n{2: 0, 4: 2, 5: 1}\n\n\nEDIT 2\nAnd a more generalized and compact solution (in a similar definition to sorted)\ndef index(l, key):\n d = {}\n for index, myTuple in enumerate(l):\n d[key(myTuple)] = min(index, d.get(key(myTuple), index))\n return d\n\n>>> index(l, lambda a: a[1])\n{2: 0, 4: 2, 5: 1}\n\nSo the answer to your question is yes: There is a way in Python to index a list of containers (tuples, lists, dictionaries) by an element of a container without preprocessing. But your request of storing the result in a dictionary makes it impossible to be a one liner. But there is no preprocessing here. The list is iterated only once.\n",
"If i think this is what you're asking...\nl = ['asd', 'asdxzc']\nd = {}\n\nfor i, x in enumerate(l):\n d[x] = {'index': i}\n\n"
] |
[
5,
0
] |
[] |
[] |
[
"containers",
"indexing",
"list",
"python"
] |
stackoverflow_0000918076_containers_indexing_list_python.txt
|
Q:
Python: Getting an IPv6 socket to receive packets destined for the Subnet-Routers Anycast address
How do you get a socket to receive packets destined for the IPv6 Subnet-Routers Anycast address?
I haven't been able to find any information on how to do this.
In a fit of desperation, I've tried using socket.setsockopt as you would to join a multicast group:
# 7 is the interface number
s = socket(AF_INET6, SOCK_DGRAM)
packed_iface_num = struct.pack("I", 7)
group = inet_pton(AF_INET6, 'fd36:d00d:d00d:47cb::') + packed_iface_num
# socket.error: (22, 'Invalid argument')
s.setsockopt(IPPROTO_IPV6, IPV6_JOIN_GROUP, group)
And also using bind
# socket.error: (99, 'Cannot assign requested address')
s.bind(('fd36:773e:6b4c:47cb::', 9876))
As expected, neither of these worked. Is there a way to do this?
A:
Instead of IPV6_JOIN_GROUP, try passing IPV6_JOIN_ANYCAST to your s.setsockopt() code. Unfortunately the Python socket module doesn't define it but you should be able to pass the integer equivalent instead. In Linux IPV6_JOIN_ANYCAST is 27 and IPV6_LEAVE_ANYCAST is 28. (defined in /usr/include/linux/in6.h)
The best documentation I could find is from this lkml e-mail describing the anycast patch to the Linux kernel:
The application interface for joining and leaving anycast groups is 2
new setsockopt() calls: IPV6_JOIN_ANYCAST and IPV6_LEAVE_ANYCAST. The arguments
are the same as the corresponding multicast operations.
May the dancing kame be with you!
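Putting that together, a sketch of the call (this assumes Linux, where 27 matches the kernel headers; the address and interface number are taken from the question):
import socket, struct

IPV6_JOIN_ANYCAST = 27   # from /usr/include/linux/in6.h -- Linux only

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
packed_iface_num = struct.pack("I", 7)   # interface number
group = socket.inet_pton(socket.AF_INET6, 'fd36:d00d:d00d:47cb::') + packed_iface_num
s.setsockopt(socket.IPPROTO_IPV6, IPV6_JOIN_ANYCAST, group)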
A:
The IPV6_JOIN_ANYCAST and IPV6_LEAVE_ANYCAST socket options are non-standard Linux-isms.
If you'd like your code to be portable, then you should probably do it the standard way, i.e. assign the subnet routers anycast address to the appropriate interface, then bind your socket to the wildcard address and discard everything not sent to the subnet router anycast address. Remember, you're not supposed to send packets with the anycast address in the source field, and you can't open a read-only socket in the standard sockets API.
Assigning an interface address should be a privileged operation on any reasonable operating system, and that's the part that isn't going to be standard whatever you do. If you must do that programmatically, then it will mean (on BSD for example) using something like the SIOCAIFADDR_IN6 code and the ioctl() system call. Make sure to set the IN6_IFF_ANYCAST flag in the ifra_flags field of the interface alias request structure.
|
Python: Getting an IPv6 socket to receive packets destined for the Subnet-Routers Anycast address
|
How do you get a socket to receive packets destined for the IPv6 Subnet-Routers Anycast address?
I haven't been able to find any information on how to do this.
In a fit of desperation, I've tried using socket.setsockopt as you would to join a multicast group:
# 7 is the interface number
s = socket(AF_INET6, SOCK_DGRAM)
packed_iface_num = struct.pack("I", 7)
group = inet_pton(AF_INET6, 'fd36:d00d:d00d:47cb::') + packed_iface_num
# socket.error: (22, 'Invalid argument')
s.setsockopt(IPPROTO_IPV6, IPV6_JOIN_GROUP, group)
And also using bind
# socket.error: (99, 'Cannot assign requested address')
s.bind(('fd36:773e:6b4c:47cb::', 9876))
As expected, neither of these worked. Is there a way to do this?
|
[
"Instead of IPV6_JOIN_GROUP, try passing IPV6_JOIN_ANYCAST to your s.setsockopt() code. Unfortunately the Python socket module doesn't define it but you should be able to pass the integer equivalent instead. In Linux IPV6_JOIN_ANYCAST is 27 and IPV6_LEAVE_ANYCAST is 28. (defined in /usr/include/linux/in6.h)\nThe best documentation I could find is from this lkml e-mail describing the anycast patch to the Linux kernel:\n\nThe application interface for joining and leaving anycast groups is 2\nnew setsockopt() calls: IPV6_JOIN_ANYCAST and IPV6_LEAVE_ANYCAST. The arguments\nare the same as the corresponding multicast operations.\n\nMay the dancing kame be with you!\n",
"The IPV6_JOIN_ANYCAST and IPV6_LEAVE_ANYCAST socket options are non-standard Linux-isms.\nIf you'd like your code to be portable, then you should probably do it the standard way, i.e. assign the subnet routers anycast address to the appropriate interface, then bind your socket to the wildcard address and discard everything not sent to the subnet router anycast address. Remember, you're not supposed to send packets with the anycast address in the source field, and you can't open a read-only socket in the standard sockets API.\nAssigning an interface address should be a privileged operation on any reasonable operating system, and that's the part that isn't going to be standard whatever you do. If you must do that programmatically, then it will mean (on BSD for example) using something like the SIOCAIFADDR_IN6 code and the ioctl() system call. Make sure to set the IN6_IFF_ANYCAST flag in the ifra_flags field of the interface alias request structure.\n"
] |
[
2,
0
] |
[] |
[] |
[
"ipv6",
"networking",
"python"
] |
stackoverflow_0000597225_ipv6_networking_python.txt
|
Q:
Are asynchronous Django model queries possible?
I'm new to Django, but the application that I have in mind might end up having URLs that look like this:
http://mysite/compare/id_1/id_2
Where "id_1" and "id_2" are identifiers of two distinct Model objects. In the handler for "compare" I'd like to asynchronously, and in parallel, query and retrieve objects id_1 and id_2.
Is there any way to do this using a standard Django syntax? I'm hoping for pseudocode that ends up looking something like this:
import django.async
# Issue the model query, but set it up asynchronously.
# The next 2 lines don't actually touch my database
o1 = Object(id=id_1).async_fetch()
o2 = Object(id=id_2).async_fetch()
# Now that I know what I want to query, fire off a fetch to do them all
# in parallel, and wait for all queries to finish before proceeding.
async.Execute((o2,o2))
# Now the code can use data from o1 and o2 below...
A:
There aren't strictly asynchronous operations as you've described, but I think you can achieve the same effect by using django's in_bulk query operator, which takes a list of ids to query.
Something like this for the urls.py:
urlpatterns = patterns('',
(r'^compare/(\d+)/(\d+)/$', 'my.compareview'),
)
And this for the view:
def compareview(request, id1, id2):
# in_bulk returns a dict: { obj_id1: <MyModel instance>,
# obj_id2: <MyModel instance> }
# the SQL pulls all at once, rather than sequentially... arguably
# better than async as it pulls in one DB hit, rather than two
# happening at the same time
comparables = MyModel.objects.in_bulk([id1, id2])
o1, o2 = (comparables.get(id1), comparables.get(id2))
|
Are asynchronous Django model queries possible?
|
I'm new to Django, but the application that I have in mind might end up having URLs that look like this:
http://mysite/compare/id_1/id_2
Where "id_1" and "id_2" are identifiers of two distinct Model objects. In the handler for "compare" I'd like to asynchronously, and in parallel, query and retrieve objects id_1 and id_2.
Is there any way to do this using a standard Django syntax? I'm hoping for pseudocode that ends up looking something like this:
import django.async
# Issue the model query, but set it up asynchronously.
# The next 2 lines don't actually touch my database
o1 = Object(id=id_1).async_fetch()
o2 = Object(id=id_2).async_fetch()
# Now that I know what I want to query, fire off a fetch to do them all
# in parallel, and wait for all queries to finish before proceeding.
async.Execute((o2,o2))
# Now the code can use data from o1 and o2 below...
|
[
"There aren't strictly asynchronous operations as you've described, but I think you can achieve the same effect by using django's in_bulk query operator, which takes a list of ids to query.\nSomething like this for the urls.py:\nurlpatterns = patterns('',\n (r'^compare/(\\d+)/(\\d+)/$', 'my.compareview'),\n)\n\nAnd this for the view:\ndef compareview(request, id1, id2):\n # in_bulk returns a dict: { obj_id1: <MyModel instance>, \n # obj_id2: <MyModel instance> }\n # the SQL pulls all at once, rather than sequentially... arguably\n # better than async as it pulls in one DB hit, rather than two\n # happening at the same time\n comparables = MyModel.objects.in_bulk([id1, id2])\n o1, o2 = (comparables.get(id1), comparables.get(id2)) \n\n"
] |
[
11
] |
[] |
[] |
[
"django",
"django_models",
"multithreading",
"mysql",
"python"
] |
stackoverflow_0000918298_django_django_models_multithreading_mysql_python.txt
|
Q:
Using mocking to test derived classes in Python
I have code that looks like this:
import xmlrpclib
class Base(object):
def __init__(self, server):
self.conn = xmlrpclib.ServerProxy(server)
def foo(self):
return self.conn.do_something()
class Derived(Base):
def foo(self):
if Base.foo():
return self.conn.do_something_else()
How should I use mocking to test the behavior of the Derived class? I don't want to assume that whatever the XML-RPC connection talks to will actually exist, but I feel like mocking the xmlrpclib module requires too much knowledge of the implementation of the Base class (which I have other tests for).
Or, I guess, should I even use mocking to test this? If not, how would you test it?
A:
You could create a fake ServerProxy class, and substitute that for testing.
Something like this:
class FakeServerProxy(object):
def __init__(self, server):
self.server = server
def do_something(self):
pass
def do_something_else(self):
pass
def test_derived():
xmlrpclib.ServerProxy = FakeServerProxy
derived = Derived(None)
derived.foo()
A:
With some trivial refactoring (call to do_something_else is extracted), you can test Derived.foo logic without needing to "know" about XMLRPC.
import xmlrpclib
class Base(object):
def __init__(self, server):
self.conn = xmlrpclib.ServerProxy(server)
def foo(self):
return self.conn.do_something()
class Derived(Base):
def foo(self):
if Base.foo(self):
return self._bar()
def _bar(self):
        # moved to its own method so that
# you can stub it out to avoid any XMLRPCs
# actually taking place.
return self.conn.do_something_else()
import mox
d = Derived('http://deep-thought/unanswered/tagged/life+universe+everything')
m = mox.Mox()
m.StubOutWithMock(Base, 'foo')
m.StubOutWithMock(d, '_bar')
Base.foo(d).AndReturn(True)
d._bar() # Will be called because Base.foo returns True.
m.ReplayAll()
d.foo()
m.UnsetStubs()
m.VerifyAll()
Alternatively or even preferably, you may extract the call to do_something_else into a bar method on Base. Which makes sense, if we agree that Base encapsulates all your XMLRPC actions.
The example uses pymox mocking library, but the gist of it stays the same regardless of your mocking style.
|
Using mocking to test derived classes in Python
|
I have code that looks like this:
import xmlrpclib
class Base(object):
def __init__(self, server):
self.conn = xmlrpclib.ServerProxy(server)
def foo(self):
return self.conn.do_something()
class Derived(Base):
def foo(self):
if Base.foo():
return self.conn.do_something_else()
How should I use mocking to test the behavior of the Derived class? I don't want to assume that whatever the XML-RPC connection talks to will actually exist, but I feel like mocking the xmlrpclib module requires too much knowledge of the implementation of the Base class (which I have other tests for).
Or, I guess, should I even use mocking to test this? If not, how would you test it?
|
[
"You could create a fake ServerProxy class, and substitute that for testing.\nSomething like this:\nclass FakeServerProxy(object):\n def __init__(self, server):\n self.server = server\n def do_something(self):\n pass\n def do_something_else(self):\n pass\n\ndef test_derived():\n xmlrpclib.ServerProxy = FakeServerProxy\n derived = Derived(None)\n derived.foo()\n\n",
"With some trivial refactoring (call to do_something_else is extracted), you can test Derived.foo logic without needing to \"know\" about XMLRPC.\nimport xmlrpclib\n\nclass Base(object):\n def __init__(self, server):\n self.conn = xmlrpclib.ServerProxy(server)\n\n def foo(self):\n return self.conn.do_something()\n\nclass Derived(Base):\n def foo(self):\n if Base.foo(self):\n return self._bar()\n def _bar(self):\n # moved to its own method so that you\n # you can stub it out to avoid any XMLRPCs\n # actually taking place.\n return self.conn.do_something_else()\n\nimport mox\n\nd = Derived('http://deep-thought/unanswered/tagged/life+universe+everything')\n\nm = mox.Mox()\nm.StubOutWithMock(Base, 'foo')\nm.StubOutWithMock(d, '_bar')\nBase.foo(d).AndReturn(True)\nd._bar() # Will be called becase Boo.foo returns True.\nm.ReplayAll()\n\nd.foo()\n\nm.UnsetStubs()\nm.VerifyAll()\n\nAlternatively or even preferably, you may extract the call to do_something_else into bar method on Base. Which makes sense, if we agree that Base encapsulates all your XMLRPC actions.\nThe example uses pymox mocking library, but the gist of it stays the same regardless of your mocking style.\n"
] |
[
4,
3
] |
[] |
[] |
[
"mocking",
"python"
] |
stackoverflow_0000412472_mocking_python.txt
|
Q:
Appengine - Possible to get an entity using only key string without model name?
I want to be able to have a view that will act upon a number of different types of objects
all the view will get is the key string eg:
agpwb2xsdGhyZWFkchULEg9wb2xsY29yZV9hbnN3ZXIYAww
without knowing the model type, is it possible to retrieve the entity from just that key string?
thanks
A:
No superclassing required, just use db.get():
from google.appengine.ext import db
key_str = 'agpwb2xsdGhyZWFkchULEg9wb2xsY29yZV9hbnN3ZXIYAww'
entity = db.get(key_str)
A:
If you design your models so they all use a common superclass it should be possible to retrieve your objects by using something like:
entity = CommonSuperclass.get('agpwb2xsdGhyZWFkchULEg9wb2xsY29yZV9hbnN3ZXIYAww')
|
Appengine - Possible to get an entity using only key string without model name?
|
I want to be able to have a view that will act upon a number of different types of objects
all the view will get is the key string eg:
agpwb2xsdGhyZWFkchULEg9wb2xsY29yZV9hbnN3ZXIYAww
without knowing the model type, is it possible to retrieve the entity from just that key string?
thanks
|
[
"No superclassing required, just use db.get():\nfrom google.appengine.ext import db\nkey_str = 'agpwb2xsdGhyZWFkchULEg9wb2xsY29yZV9hbnN3ZXIYAww'\nentity = db.get(key_str)\n\n",
"If you design your models so they all use a common superclass it should be possible to retrieve your objects by using something like:\nentity = CommonSuperclass.get('agpwb2xsdGhyZWFkchULEg9wb2xsY29yZV9hbnN3ZXIYAww')\n\n"
] |
[
11,
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000776324_google_app_engine_python.txt
|
Q:
How do I make a command line text editor?
I have gotten to know my way around a few programming languages, and I'd like to try my hand at making a command-line text editor -- something that runs in the terminal, like vim/emacs/nano, but is pure text (no guis, please). Preferably, I'd like to do this in python. Where do I start? Are there any (python) libraries to do command-line applications?
A:
Try the Python curses module; it is a library for drawing text-based interfaces on the terminal.
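A minimal hello-world sketch to get started; curses.wrapper handles terminal setup and teardown:
import curses

def main(stdscr):
    stdscr.clear()
    stdscr.addstr(0, 0, "Hello from curses -- press any key to quit")
    stdscr.refresh()
    stdscr.getch()

curses.wrapper(main)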
A:
Take a look at Curses Programming in Python and this as well.
A:
Another option if you want to write a TUI (Text User Interface) without having to descend to curses is Snack, which comes with Newt.
A:
Kids today! Sheesh! When I was starting out, curses was not in widespread use!
My first text editors worked on actual mechanical Teletype devices with actual paper (not a philosophical "TTY" device with a scrolling screen!)
This still works nicely as a way to edit.
Use the cmd module to implement a bunch of commands. Use the 'ex' man page for hints as to what you need. Do not read about the vi commands; avoid reading about vim.
Look at older man pages for just the "EX COMMANDS" section. For example, here: http://www.manpagez.com/man/1/ex/.
Implement the append, add, change, delete, global, insert, join, list, move, print, quit, substitute and write commands and you'll be happy.
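Here is a toy sketch of that cmd-module approach with a few of those commands (the command names are illustrative, not real ex syntax):
import cmd

class Ed(cmd.Cmd):
    """A toy line editor: append, print, delete, quit."""
    prompt = ': '
    def __init__(self):
        cmd.Cmd.__init__(self)
        self.lines = []
    def do_append(self, arg):
        self.lines.append(arg)
    def do_print(self, arg):
        for number, line in enumerate(self.lines):
            print number + 1, line
    def do_delete(self, arg):
        del self.lines[int(arg) - 1]
    def do_quit(self, arg):
        return True   # a true return value stops cmdloop()

if __name__ == '__main__':
    Ed().cmdloop()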
A:
Curses type libraries and resources will get you into the textual user interfaces, and provide very nice, relatively easy to use windows, menus, editors, etc.
Then you'll want to look into code highlighting modules for python.
It's a fun process dealing with the limitations of textual interfaces, and you can learn a lot by going down this road. Good luck!
-Adam
A:
I would recommend the excellent urwid toolkit (http://excess.org/article/2009/03/urwid-0984-released) - it's much easier to use than straight curses.
A:
Well, what do you mean by a GUI? If you just want to create something that can be used on a console, look into the curses module in the Python standard library, which allows you to simulate a primitive GUI of sorts on a console.
A:
A not very serious suggestion: a line editor can be implemented without curses.
These things are pretty primitive, of course, and not a lot of fun to work in. But they can be implemented with very little code, and would give you a chance to fool around with various schemes for maintaining the file state in memory pretty quickly.
And they would put you in touch with the programmers of the early seventies (when they had teletypes and the first glass teletypes, but after punched cards were a bit passe...).
A:
Not quite a reference to a Python library, but The Craft of Text Editing by Craig A. Finseth might be of interest to you.
A:
Another option without curses is Python Slang
Newt is a library written on top of Slang.
|
How do I make a command line text editor?
|
I have gotten to know my way around a few programming languages, and I'd like to try my hand at making a command-line text editor -- something that runs in the terminal, like vim/emacs/nano, but is pure text (no guis, please). Preferably, I'd like to do this in python. Where do I start? Are there any (python) libraries to do command-line applications?
|
[
"try python curses module , it is a command-line graphic operation library.\n",
"Take a look at Curses Programming in Python and this as well. \n",
"Another option if you want to write a TUI (Text User Interface) without having to descend to curses is Snack, which comes with Newt.\n",
"Kids today! Sheesh! When I was starting out, curses was not in widespread use!\nMy first text editors worked on actual mechanical Teletype devices with actual paper (not a philosophical \"TTY\" device with a scrolling screen!)\nThis still works nicely as a way to edit. \nUse the cmd module to implement a bunch of commands. Use the 'ex' man page for hints as to what you need. Do not read about the vi commands; avoid reading about vim. \nLook at older man pages for just the \"EX COMMANDS\" section. For example, here: http://www.manpagez.com/man/1/ex/.\nImplement the append, add, change, delete, global, insert, join, list, move, print, quit, substitute and write commands and you'll be happy.\n",
"Curses type libraries and resources will get you into the textual user interfaces, and provide very nice, relatively easy to use windows, menus, editors, etc.\nThen you'll want to look into code highlighting modules for python.\nIt's a fun process dealing with the limitations of textual interfaces, and you can learn a lot by going down this road. Good luck!\n-Adam\n",
"I would recommend the excellent urwid toolkit (http://excess.org/article/2009/03/urwid-0984-released) - it's much easier to use than straight curses.\n",
"Well, what do you mean by a GUI? If you just want to create something that can be used on a console, look into the curses module in the Python standard library, which allows you to simulate a primitive GUI of sorts on a console.\n",
"A not very serious suggestions: a line editor can be implemented without curses.\nThese things are pretty primitive, of course, and not a lot of fun to work in. But they can be implemented with very little code, and would give you a chance to fool around with various schemes for maintaining the file state in memory pretty quickly.\nAnd they would put you in touch with the programmers of the early seventies (when they had teletypes and the first glass teletypes, but after punched cards were a bit passe...).\n",
"Not quite a reference to a Python library, but The Craft of Text Editing by Craig A. Finseth might be of interest you.\n",
"Another option without curses is Python Slang\nNewt is a library written on top of Slang.\n"
] |
[
25,
10,
8,
7,
5,
3,
2,
2,
2,
0
] |
[] |
[] |
[
"python",
"text_editor",
"tui"
] |
stackoverflow_0000688302_python_text_editor_tui.txt
|
Q:
Is something like ConfigParser appropriate for saving state (key, value) between runs?
I want to save a set of key, value pairs (string, int) between runs of a Python program, reload them on subsequent runs and write the changes to be available on the next run.
I don't think of this data as a configuration file, but it would fit the ConfigParser capabilities quite well. I would only need two [sections]. It's only a few hundred pairs and very simple so I don't think it's necessary to do an actual database.
Is it appropriate to use ConfigParser in this way? I've also considered using Perl and XML::Simple. What about that? Is there a way to do this in bash without Python or Perl?
A:
Well, you have better options. You can for example use pickle or json format.
Pickle serializing module is very easy to use.
import cPickle
cPickle.dump(obj, open('save.p', 'wb'))
obj = cPickle.load(open('save.p', 'rb'))
The format is not human readable and unpickling is not secure against erroneous or maliciously constructed data. You should not unpickle untrusted data.
If you are using python 2.6 there is a builtin module called json. It is as easy as pickle to use:
import json
encoded = json.dumps(obj)
obj = json.loads(encoded)
Json format is human readable and is very similar to the dictionary string representation in python. And doesn't have any security issues like pickle.
If you are using an earlier version of Python, you can use simplejson instead.
A:
For me, PyYAML works well for these kind of things. I used to use pickle or ConfigParser before.
A:
ConfigParser is a fine way of doing it. There are other ways (the json and cPickle modules already mentioned may be useful) that are also good, depending on whether you want to have text files or binary files and if you want code to work simply in older versions of Python or not.
You may want to have a thin abstraction layer on top of your chosen way to make it easier to change your mind.
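For what it's worth, a minimal sketch of the ConfigParser approach, assuming string keys and int values as described in the question:
import ConfigParser   # renamed to configparser in Python 3

def save_state(pairs, path):
    config = ConfigParser.RawConfigParser()
    config.add_section('state')
    for key, value in pairs.items():
        config.set('state', key, str(value))
    f = open(path, 'w')
    config.write(f)
    f.close()

def load_state(path):
    config = ConfigParser.RawConfigParser()
    config.read(path)
    return dict((k, int(v)) for k, v in config.items('state'))

save_state({'runs': 3, 'errors': 0}, 'state.ini')
print load_state('state.ini')   # {'runs': 3, 'errors': 0}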
A:
Sounds like a job for a dbm. Basically it is a hash that lives external to your program. There are many implementations. In Perl it is trivial to tie a dbm to a hash (i.e. make it look like a dbm is really a normal hash variable). I don't know if there is any equivalent in mechanism in Python, but I would be surprised if there weren't.
A:
Re doing it in bash: If your strings are valid identifiers, you could use environment variables and env.
A:
If you can update the state key by key then any of the DBM databases will work. If you need really high performance and compact storage then Tokyo Cabinet - http://tokyocabinet.sourceforge.net/ is the cool toy.
If you want to save and load the whole thing at once (to maybe keep old versions or some such) and don't have too much data then just use JSON. It's much nicer to work with than XML. I don't know how the JSON implementation is in Python, but in Perl the JSON::XS module is insanely fast.
|
Is something like ConfigParser appropriate for saving state (key, value) between runs?
|
I want to save a set of key, value pairs (string, int) between runs of a Python program, reload them on subsequent runs and write the changes to be available on the next run.
I don't think of this data as a configuration file, but it would fit the ConfigParser capabilities quite well. I would only need two [sections]. It's only a few hundred pairs and very simple so I don't think it's necessary to do an actual database.
Is it appropriate to use ConfigParser in this way? I've also considered using Perl and XML::Simple. What about that? Is there a way to do this in bash without Python or Perl?
|
[
"Well, you have better options. You can for example use pickle or json format.\nPickle serializing module is very easy to use.\nimport cPickle\ncPickle.dump(obj, open('save.p', 'wb')) \nobj = cPickle.load(open('save.p', 'rb'))\n\nThe format is not human readable and unpickling is not secure against erroneous or maliciously constructed data. You should not unpickle untrusted data.\nIf you are using python 2.6 there is a builtin module called json. It is as easy as pickle to use:\nimport json\nencoded = json.dumps(obj)\nobj = json.loads(encoded)\n\nJson format is human readable and is very similar to the dictionary string representation in python. And doesn't have any security issues like pickle.\nIf you are using an earlier version of python you can simplejson instead.\n",
"For me, PyYAML works well for these kind of things. I used to use pickle or ConfigParser before.\n",
"ConfigParser is a fine way of doing it. There are other ways (the json and cPickle modules already mentioned may be useful) that are also good, depending on whether you want to have text files or binary files and if you want code to work simply in older versions of Python or not.\nYou may want to have a thin abstraction layer on top of your chosen way to make it easier to change your mind.\n",
"Sounds like a job for a dbm. Basically it is a hash that lives external to your program. There are many implementations. In Perl it is trivial to tie a dbm to a hash (i.e. make it look like a dbm is really a normal hash variable). I don't know if there is any equivalent in mechanism in Python, but I would be surprised if there weren't.\n",
"Re doing it in bash: If your strings are valid identifiers, you could use environment variables and env.\n",
"If you can update the state key by key then any of the DBM databases will work. If you need really high performance and compact storage then Tokyo Cabinet - http://tokyocabinet.sourceforge.net/ is the cool toy.\nIf you want to save and load the whole thing at once (to maybe keep old versions or some such) and don't have too much data then just use JSON. It's much nicer to work with than XML. I don't know how the JSON implementation is in Python, but in Perl the JSON::XS module is insanely fast.\n"
] |
[
16,
8,
2,
2,
0,
0
] |
[] |
[] |
[
"configparser",
"perl",
"python",
"xml"
] |
stackoverflow_0000916779_configparser_perl_python_xml.txt
|
Q:
Resize ctypes array
I'd like to resize a ctypes array. As you can see, ctypes.resize doesn't work like it could. I can write a function to resize an array, but I wanted to know some other solutions to this. Maybe I'm missing some ctypes trick or maybe I simply used resize wrong. The name c_long_Array_0 seems to tell me this may not work with resize.
>>> from ctypes import *
>>> c_int * 0
<class '__main__.c_long_Array_0'>
>>> intType = c_int * 0
>>> foo = intType()
>>> foo
<__main__.c_long_Array_0 object at 0xb7ed9e84>
>>> foo[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: invalid index
>>> resize(foo, sizeof(c_int * 1))
>>> foo[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: invalid index
>>> foo
<__main__.c_long_Array_0 object at 0xb7ed9e84>
>>> sizeof(c_int * 0)
0
>>> sizeof(c_int * 1)
4
Edit: Maybe go with something like:
>>> ctypes_resize = resize
>>> def resize(arr, type):
... tmp = type()
... for i in range(len(arr)):
... tmp[i] = arr[i]
... return tmp
...
...
>>> listType = c_int * 0
>>> list = listType()
>>> list = resize(list, c_int * 1)
>>> list[0]
0
>>>
But that's ugly passing the type instead of the size. It works for its purpose and that's it.
A:
from ctypes import *
list = (c_int*1)()
def customresize(array, new_size):
resize(array, sizeof(array._type_)*new_size)
return (array._type_*new_size).from_address(addressof(array))
list[0] = 123
list = customresize(list, 5)
>>> list[0]
123
>>> list[4]
0
|
Resize ctypes array
|
I'd like to resize a ctypes array. As you can see, ctypes.resize doesn't work like it could. I can write a function to resize an array, but I wanted to know some other solutions to this. Maybe I'm missing some ctypes trick or maybe I simply used resize wrong. The name c_long_Array_0 seems to tell me this may not work with resize.
>>> from ctypes import *
>>> c_int * 0
<class '__main__.c_long_Array_0'>
>>> intType = c_int * 0
>>> foo = intType()
>>> foo
<__main__.c_long_Array_0 object at 0xb7ed9e84>
>>> foo[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: invalid index
>>> resize(foo, sizeof(c_int * 1))
>>> foo[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: invalid index
>>> foo
<__main__.c_long_Array_0 object at 0xb7ed9e84>
>>> sizeof(c_int * 0)
0
>>> sizeof(c_int * 1)
4
Edit: Maybe go with something like:
>>> ctypes_resize = resize
>>> def resize(arr, type):
... tmp = type()
... for i in range(len(arr)):
... tmp[i] = arr[i]
... return tmp
...
...
>>> listType = c_int * 0
>>> list = listType()
>>> list = resize(list, c_int * 1)
>>> list[0]
0
>>>
But that's ugly passing the type instead of the size. It works for its purpose and that's it.
|
[
"from ctypes import *\n\nlist = (c_int*1)()\n\ndef customresize(array, new_size):\n resize(array, sizeof(array._type_)*new_size)\n return (array._type_*new_size).from_address(addressof(array))\n\nlist[0] = 123\nlist = customresize(list, 5)\n\n>>> list[0]\n123\n>>> list[4]\n0\n\n"
] |
[
10
] |
[] |
[] |
[
"ctypes",
"python"
] |
stackoverflow_0000919369_ctypes_python.txt
|
Q:
What is a cyclic data structure good for?
I was just reading through "Learning Python" by Mark Lutz and came across this code sample:
>>> L = ['grail']
>>> L.append(L)
>>> L
['grail', [...]]
It was identified as a cyclic data structure.
So I was wondering, and here is my question:
What is a 'cyclic data structure' used for in real life programming?
There seems to be a little confusion, which i think stems from the very brief code sample... here's a few more lines using the same object L
>>> L[0]
'grail'
>>> L[1][0]
'grail'
>>> L[1][1][0]
'grail'
A:
Lots of things. Circular buffer, for example: you have some collection of data with a front and a back, but an arbitrary number of nodes, and the "next" item from the last should take you back to the first.
Graph structures are often cyclic; acyclicity is a special case. Consider, for example, a graph containing all the cities and roads in a traveling salesman problem.
Okay, here's a particular example for you. I set up a collection of towns here in Colorado:
V=["Boulder", "Denver", "Colorado Springs", "Pueblo", "Limon"]
I then set up pairs of cities where there is a road connecting them.
E=[["Boulder", "Denver"],
["Denver", "Colorado Springs"],
["Colorado Springs", "Pueblo"],
["Denver", "Limon"],
["Colorado Springs", "Limon"]]
This has a bunch of cycles. For example, you can drive from Colorado Springs, to Limon, to Denver, and back to Colorado Springs.
If you create a data structure that contains all the cities in V and all the roads in E, that's a graph data structure. This graph would have cycles.
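A sketch of that graph as a plain adjacency dict, reusing the E list above and walking one of its cycles:
roads = {}
for a, b in E:
    # roads run both ways, so record each edge in both directions
    roads.setdefault(a, []).append(b)
    roads.setdefault(b, []).append(a)

# one cycle: Colorado Springs -> Limon -> Denver -> Colorado Springs
print "Limon" in roads["Colorado Springs"]    # True
print "Denver" in roads["Limon"]              # True
print "Colorado Springs" in roads["Denver"]   # True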
A:
I recently created a cyclic data structure to represent the eight cardinal and ordinal directions. It's useful for each direction to know its neighbors. For instance, Direction.North knows that Direction.NorthEast and Direction.NorthWest are its neighbors.
This is cyclic because each neighbor knows its neighbors until it goes full swing around (the "->" represents clockwise):
North -> NorthEast -> East -> SouthEast -> South -> SouthWest -> West -> NorthWest -> North -> ...
Notice we came back to North.
That allows me to do stuff like this (in C#):
public class Direction
{
...
public IEnumerable<Direction> WithTwoNeighbors
{
get {
yield return this;
yield return this.CounterClockwise;
yield return this.Clockwise;
}
}
}
...
public void TryToMove (Direction dir)
{
        dir = dir.WithTwoNeighbors.Where (d => CanMove (d)).First ();
Move (dir);
}
This turned out to be quite handy and made a lot of things much less complicated.
A:
A nested structure could be used in a test case for a garbage collector.
A:
It is a bit confusing since it is a list that contains itself, but the way I made sense of it is to not think of L as a list, but a node, and instead of things in a list, you think of it as other nodes reachable by this node.
To put a more real world example, think of them as flight paths from a city.
So chicago = [denver, los angeles, new york city, chicago] (realistically you wouldn't list chicago in itself, but for the sake of example you can reach chicago from chicago)
Then you have denver = [phoenix, philadelphia] and so on.
phoenix = [chicago, new york city]
Now you have cyclic data both from
chicago -> chicago
but also
chicago -> denver -> phoenix -> chicago
Now you have:
chicago[0] == denver
chicago[0][0] == phoenix
chicago[0][0][0] == chicago
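Here is that example as runnable Python, trimmed to three cities. Since the lists have to exist before they can refer to each other, the cycle is built by mutation:
chicago, denver, phoenix = [], [], []
chicago += [denver, chicago]   # chicago -> chicago is a direct cycle
denver += [phoenix]
phoenix += [chicago]           # chicago -> denver -> phoenix -> chicago

assert chicago[0] is denver
assert chicago[0][0] is phoenix
assert chicago[0][0][0] is chicago   # back where we started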
A:
L just contains a reference to itself as one of its elements. Nothing really special about this.
There are some obvious uses of cyclical structures where the last element knows about the first element. But this functionality is already covered by regular python lists.
You can get the last element of L by using [-1]. You can use python lists as queues with append() and pop(). You can split python lists. Which are the regular uses of a cyclical data structure.
>>> L = ['foo', 'bar']
>>> L.append(L)
>>> L
['foo', 'bar', [...]]
>>> L[0]
'foo'
>>> L[1]
'bar'
>>> L[2]
['foo', 'bar', [...]]
>>> L[2].append('baz')
>>> L
['foo', 'bar', [...], 'baz']
>>> L[2]
['foo', 'bar', [...], 'baz']
>>> L[2].pop()
'baz'
>>> L
['foo', 'bar', [...]]
>>> L[2]
['foo', 'bar', [...]]
A:
The data structures iterated by deterministic finite automata are often cyclical.
A:
One example would be a linked list where the last item points the first. This would allow you to create a fixed number of items but always be able to get a next item.
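A minimal sketch of such a ring in Python (Node and make_ring are illustrative names):
class Node(object):
    def __init__(self, value):
        self.value = value
        self.next = None

def make_ring(values):
    nodes = [Node(v) for v in values]
    for a, b in zip(nodes, nodes[1:]):
        a.next = b
    nodes[-1].next = nodes[0]   # the last item points back to the first
    return nodes[0]

node = make_ring(['a', 'b', 'c'])
for _ in range(5):
    node = node.next            # iteration never falls off the end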
A:
When doing lattice simulations, cyclic/toroidal boundary conditions are often used. Usually a simple lattice[i%L] would suffice, but I suppose one could create the lattice itself to be cyclic.
A:
Suppose you have limited storage, and data constantly accumulates. In many real life cases, you don't mind getting rid of old data, but you don't want to move data. You can use a cyclic vector; implemented using a vector v of size N with two special indices: begin and end, which initiate on 0.
Insertion of "new" data now goes like this:
v[end] = a;
end = (end+1) % N;
if (begin == end)
begin = (begin+1) % N;
You can insert "old" data and erase "old" or "new" data in a similar way.
Scanning the vector goes like this
for (i=begin; i != end; i = (i+1) % N) {
// do stuff
}
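In Python you rarely have to hand-roll this: collections.deque with a maxlen gives the same "overwrite the oldest" behavior, e.g.:
from collections import deque

buf = deque(maxlen=3)   # N = 3
for a in [1, 2, 3, 4, 5]:
    buf.append(a)       # once full, the oldest entry is silently dropped
print list(buf)         # [3, 4, 5]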
A:
Cyclic data structures are usually used to represent circular relationships. That sounds obvious, but it happens more than you think. I can't think of any times I've used terribly complicated cyclical data structures, but bidirectional relationships are fairly common. For example, suppose I wanted to make an IM client. I could do something like this:
class Client(object):
def set_remote(self, remote_client):
self.remote_client = remote_client
def send(self, msg):
self.remote_client.receive(msg)
def receive(self, msg):
print msg
Jill = Client()
Bob = Client()
Bob.set_remote(Jill)
Jill.set_remote(Bob)
Then if Bob wanted to send a message to Jill, you could just do this:
Bob.send("Hi, Jill!")
Of course, Jill may want to send a message back, so she could do this:
Jill.send("Hi, Bob!")
Admittedly, this is a bit of a contrived example, but it should give you an example of when you may want to use a cyclical data structure.
A:
Any kind of object hierarchy where parents know about their children and children know about their parents. I'm always having to deal with this in ORMs because I want databases to know their tables and tables to know what database they're a part of, and so on.
A:
Let's look at a single practical example.
Let us say we're programming a menu navigation for a game. We want to store for each menu-item
The entry's name
The other menu we'll reach after pressing it.
The action that would be performed when pressing the menu.
When a menu-item is pressed, we'll activate the menu-item action and then move to the next menu. So our menu would be a simple list of dictionaries, like so:
options,start_menu,about = [],[],[]
def do_nothing(): pass
def volumeUp(): pass    # stub so the sample runs
def save(): pass        # stub
def quit_game(): pass   # stub (used by the Exit entry below)
about += [
{'name':"copyright by...",'action':None,'menu':about},
{'name':"back",'action':do_nothing,'menu':start_menu}
]
options += [
{'name':"volume up",'action':volumeUp,'menu':options},
{'name':"save",'action':save,'menu':start_menu},
{'name':"back without save",'action':do_nothing,'menu':start_menu}
]
start_menu += [
{'name':"Exit",'action':quit_game,'menu':None}, # no next menu since we quit
{'name':"Options",'action':do_nothing,'menu':options},
{'name':"About",'action':do_nothing,'menu':about}
]
See how about is cyclic:
>>> print about
[{'action': None, 'menu': [...], 'name': 'copyright by...'},#etc.
# see the ellipsis (...)
When a menu item is pressed we'll issue the following on-click function:
def menu_item_pressed(item):
log("menu item '%s' pressed" % item['name'])
item['action']()
set_next_menu(item['menu'])
Now, if we didn't have cyclic data structures, we couldn't have a menu item that points to itself, and, for instance, after pressing the volume-up function we would have to leave the options menu.
If cyclic data structures weren't possible, we'd have to implement the behavior ourselves; for example, the menu item would be:
class SelfReferenceMarkerClass: pass
#singleton global marker for self reference
SelfReferenceMarker = SelfReferenceMarkerClass()
about += [
{'name':"copyright by...",'action':None,'menu':SelfReferenceMarker},
{'name':"back",'action':do_nothing,'menu':start_menu}
]
the menu_item_pressed function would be:
def menu_item_pressed(item):
item['action']()
if (item['menu'] == SelfReferenceMarker):
set_next_menu(get_previous_menu())
else:
set_next_menu(item['menu'])
The first example is a little bit nicer, but yes, not supporting self references is not such a big deal IMHO, as it's easy to overcome this limitation.
The menus example is like a graph with self references, where we store the graph by lists of vertex pointers (every vertex is a list of pointers to other vertices). In this example we needed self edges (a vertex that points to itself), thus python's support for cyclic data structures is useful.
|
What is a cyclic data structure good for?
|
I was just reading through "Learning Python" by Mark Lutz and came across this code sample:
>>> L = ['grail']
>>> L.append(L)
>>> L
['grail', [...]]
It was identified as a cyclic data structure.
So I was wondering, and here is my question:
What is a 'cyclic data structure' used for in real life programming?
There seems to be a little confusion, which I think stems from the very brief code sample... here are a few more lines using the same object L
>>> L[0]
'grail'
>>> L[1][0]
'grail'
>>> L[1][1][0]
'grail'
|
[
"Lots of things. Circular buffer, for example: you have some collection of data with a front and a back, but an arbitrary number of nodes, and the \"next\" item from the last should take you back to the first.\nGraph structures are often cyclic; acyclicity is a special case. Consider, for example, a graph containing all the cities and roads in a traveling salesman problem.\n\nOkay, here's a particular example for you. I set up a collection of towns here in Colorado:\nV=[\"Boulder\", \"Denver\", \"Colorado Springs\", \"Pueblo\", \"Limon\"]\n\nI then set up pairs of cities where there is a road connecting them.\nE=[[\"Boulder\", \"Denver\"],\n [\"Denver\", \"Colorado Springs\"],\n [\"Colorado Springs\", \"Pueblo\"],\n [\"Denver\", \"Limon\"],\n [\"Colorado Springs\", \"Limon\"]]\n\nThis has a bunch of cycles. For example, you can drive from Colorado Springs, to Limon, to Denver, and back to Colorado Springs.\nIf you create a data structure that contains all the cities in V and all the roads in E, that's a graph data structure. This graph would have cycles.\n",
"I recently created a cyclic data structure to represent the eight cardinal and ordinal directions. Its useful for each direction to know its neighbors. For instance, Direction.North knows that Direction.NorthEast and Direction.NorthWest are its neighbors. \nThis is cyclic because each neighor knows its neighbors until it goes full swing around (the \"->\" represents clockwise):\nNorth -> NorthEast -> East -> SouthEast -> South -> SouthWest -> West -> NorthWest -> North -> ...\nNotice we came back to North.\nThat allows me to do stuff like this (in C#):\npublic class Direction\n{\n ...\n public IEnumerable<Direction> WithTwoNeighbors\n {\n get {\n yield return this;\n yield return this.CounterClockwise;\n yield return this.Clockwise;\n }\n }\n}\n...\npublic void TryToMove (Direction dir)\n{\n dir = dir.WithTwoNeighbors.Where (d => CanMove (d)).First ()\n Move (dir);\n}\n\nThis turned out to be quite handy and made a lot of things much less complicated.\n",
"A nested structure could be used in a test case for a garbage collector.\n",
"It is a bit confusing since it is a list that contains itself, but the way I made sense of it is to not think of L as a list, but a node, and instead of things in a list, you think of it as other nodes reachable by this node.\nTo put a more real world example, think of them as flight paths from a city.\nSo chicago = [denver, los angeles, new york city, chicago] (realistically you wouldn't list chicago in itself, but for the sake of example you can reach chicago from chicago)\nThen you have denver = [phoenix, philedelphia] and so on.\nphoenix = [chicago, new york city]\nNow you have cyclic data both from \n\nchicago -> chicago\n\nbut also \n\nchicago -> denver -> phoenix -> chicago\n\nNow you have:\nchicago[0] == denver\nchicago[0][0] == phoenix\nchicago[0][0][0] == chicago\n\n",
"L just contains a reference to itself as one of it's elements. Nothing really special about this. \nThere are some obvious uses of cyclical structures where the last element knows about the first element. But this functionality is already covered by regular python lists.\nYou can get the last element of L by using [-1]. You can use python lists as queues with append() and pop(). You can split python lists. Which are the regular uses of a cyclical data structure.\n>>> L = ['foo', 'bar']\n>>> L.append(L)\n>>> L\n['foo', 'bar', [...]]\n>>> L[0]\n'foo'\n>>> L[1]\n'bar'\n>>> L[2]\n['foo', 'bar', [...]]\n>>> L[2].append('baz')\n>>> L\n['foo', 'bar', [...], 'baz']\n>>> L[2]\n['foo', 'bar', [...], 'baz']\n>>> L[2].pop()\n'baz'\n>>> L\n['foo', 'bar', [...]]\n>>> L[2]\n['foo', 'bar', [...]]\n\n",
"The data structures iterated by deterministic finite automata are often cyclical.\n",
"One example would be a linked list where the last item points the first. This would allow you to create a fixed number of items but always be able to get a next item.\n",
"when doing lattice simulations cyclic/toroidal boundary conditions are often used. usually a simple lattice[i%L] would suffice, but i suppose one could create the lattice to be cyclic.\n",
"Suppose you have limited storage, and data constantly accumulates. In many real life cases, you don't mind getting rid of old data, but you don't want to move data. You can use a cyclic vector; implemented using a vector v of size N with two special indices: begin and end, which initiate on 0.\nInsertion of \"new\" data now goes like this:\nv[end] = a;\nend = (end+1) % N;\nif (begin == end)\n begin = (begin+1) % N;\n\nYou can insert \"old\" data and erase \"old\" or \"new\" data in a similar way.\nScanning the vector goes like this\nfor (i=begin; i != end; i = (i+1) % N) {\n // do stuff\n}\n\n",
"Cyclic data structures are usually used to represent circular relationships. That sounds obvious, but it happens more than you think. I can't think of any times I've used terribly complicated cyclical data structures, but bidirectional relationships are fairly common. For example, suppose I wanted to make an IM client. I could do something like this:\nclass Client(object):\n def set_remote(self, remote_client):\n self.remote_client = remote_client\n\n def send(self, msg):\n self.remote_client.receive(msg)\n\n def receive(self, msg):\n print msg\n\nJill = Client()\nBob = Client()\nBob.set_remote(Jill) \nJill.set_remote(Bob)\n\nThen if Bob wanted to send a message to Jill, you could just do this:\nBob.send(\"Hi, Jill!\")\n\nOf course, Jill may want to send a message back, so she could do this:\nJill.send(\"Hi, Bob!\")\n\nAdmittedly, this is a bit of a contrived example, but it should give you an example of when you may want to use a cyclical data structure.\n",
"Any kind of object hierarchy where parents know about their children and children know about their parents. I'm always having to deal with this in ORMs because I want databases to know their tables and tables to know what database they're a part of, and so on.\n",
"Let's look at a single practical example.\nLet us say we're programming a menu navigation for a game. We want to store for each menu-item\n\nThe entry's name\nThe other menu we'll reach after pressing it.\nThe action that would be performed when pressing the menu.\n\nWhen a menu-item is pressed, we'll activate the menu-item action and then move to the next menu. So our menu would be a simple list of dictionaries, like so:\noptions,start_menu,about = [],[],[]\n\ndef do_nothing(): pass\n\nabout += [\n {'name':\"copyright by...\",'action':None,'menu':about},\n {'name':\"back\",'action':do_nothing,'menu':start_menu}\n ]\noptions += [\n {'name':\"volume up\",'action':volumeUp,'menu':options},\n {'name':\"save\",'action':save,'menu':start_menu},\n {'name':\"back without save\",'action':do_nothing,'menu':start_menu}\n ]\nstart_menu += [\n {'name':\"Exit\",'action':f,'menu':None}, # no next menu since we quite\n {'name':\"Options\",'action':do_nothing,'menu':options},\n {'name':\"About\",'action':do_nothing,'menu':about}\n ]\n\nSee how about is cyclic:\n>>> print about\n[{'action': None, 'menu': [...], 'name': 'copyright by...'},#etc.\n# see the ellipsis (...)\n\nWhen a menu item is pressed we'll issue the following on-click function:\ndef menu_item_pressed(item):\n log(\"menu item '%s' pressed\" % item['name'])\n item['action']()\n set_next_menu(item['menu'])\n\nNow, if we wouldn't have cyclic data structures, we wouldn't be able to have a menu item that points to itself, and, for instance, after pressing the volume-up function we would have to leave the options menu.\nIf cyclic data structures wouldn't be possible, we'll have to implement it ourselves, for example the menu item would be:\nclass SelfReferenceMarkerClass: pass\n#singleton global marker for self reference\nSelfReferenceMarker = SelfReferenceMarkerClass()\nabout += [\n {'name':\"copyright by...\",'action':None,'menu':srm},\n {'name':\"back\",'action':do_nothing,'menu':start_menu}\n ]\n\nthe menu_item_pressed function would be:\ndef menu_item_pressed(item):\n item['action']()\n if (item['menu'] == SelfReferenceMarker):\n set_next_menu(get_previous_menu())\n else:\n set_next_menu(item['menu'])\n\nThe first example is a little bit nicer, but yes, not supporting self references is not such a big deal IMHO, as it's easy to overcome this limitation.\nThe menus example is like a graph with self references, where we store the graph by lists of vertex pointers (every vertex is a list of pointers to other vertices). In this example we needed self edges (a vertex that points to itself), thus python's support for cyclic data structures is useful.\n"
] |
[
18,
6,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"cyclic_reference",
"data_structures",
"python",
"recursion"
] |
stackoverflow_0000405540_cyclic_reference_data_structures_python_recursion.txt
|
Q:
python monitoring over serial port
Good afternoon,
I would ask some suggestion about the best way to monitor events over the serial port.
I'm using PySerial to write "commands" over the serial port towards some devices and
I would like to receive feedback about the status of this devices.
Which is the best way: 1) fill a pipe and read from it, 2) a new thread delegated to read only, or what?
Can I also ask for a simple code to implement the solution?
A:
For general tips on working with pyserial, look at the search S.Lott suggested in the comment.
Regarding the best strategy to implement your application - it all depends on how your protocols are defined. Do the devices immediately respond to queries? Or do they continually send data that must be monitored? This is important to define, as it certainly affects the way you'll want to handle the communication.
Generally, I've found it simple and stable to have a separate thread reading everything from the serial port and just pumping the data into a Queue. The main application logic then can query this queue whenever it needs to and read the data.
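A minimal sketch of that reader-thread pattern (the port name, baud rate and command are placeholders for illustration):
import threading
import Queue    # 'queue' in Python 3
import serial

def reader(port, q):
    # pump everything the device sends onto the queue
    while True:
        data = port.read(1)   # blocks for at most the port's timeout
        if data:
            q.put(data)

port = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)   # adjust for your device
q = Queue.Queue()
t = threading.Thread(target=reader, args=(port, q))
t.setDaemon(True)   # don't keep the process alive on exit
t.start()

port.write('STATUS\r\n')   # hypothetical device command
# the main logic calls q.get() / q.get_nowait() whenever it wants feedback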
A:
The strategy chosen is to use Python multiprocessing and a queue
see:
http://www.ibm.com/developerworks/aix/library/au-threadingpython/index.html
and
http://www.ibm.com/developerworks/aix/library/au-multiprocessing/index.html?ca=dgr-lnxw9dPython-Multi&S_TACT=105AGX59&S_CMP=grsitelnxw9d
for reference
|
python monitoring over serial port
|
Good afternoon,
I would ask some suggestion about the best way to monitor events over the serial port.
I'm using PySerial to write "commands" over the serial port towards some devices and
I would like to receive feedback about the status of this devices.
Which is the best way: 1) fill a pipe and read from it, 2) a new thread delegated to read only, or what?
Can I also ask for a simple code to implement the solution?
|
[
"For general tips on working with pyserial, look at the search S.Lott suggested in the comment.\nRegarding the best strategy to implement your application - it all depends on how your protocols are defined. Do the devices immediately respond to queries? Or do they continually send data that must be monitored? This is important to define, as it certainly affects the way you'll want to handle the communication.\nGenerally, I've found it simple and stable to have a separate thread reading everything from the serial port and just pumping the data into a Queue. The main application logic then can query this queue whenever it needs to and read the data.\n",
"The strategy choosen is to use python multiprocessing and queue\nsee:\n\nhttp://www.ibm.com/developerworks/aix/library/au-threadingpython/index.html\nand\nhttp://www.ibm.com/developerworks/aix/library/au-multiprocessing/index.html?ca=dgr-lnxw9dPython-Multi&S_TACT=105AGX59&S_CMP=grsitelnxw9d\n\nfor reference \n"
] |
[
3,
1
] |
[] |
[] |
[
"pyserial",
"python"
] |
stackoverflow_0000911089_pyserial_python.txt
|
Q:
Stuck on official Django Tutorial
I have just started out learning Python and also started looking into Django a little bit. So I copied this piece of code from the tutorial:
# Create your models here.
class Poll(models.Model):
question = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
def __unicode__(self):
return self.question
def was_published_today(self):
return self.pub_date.date() == datetime.date.today()
class Choice(models.Model):
poll = models.ForeignKey(Poll)
choice = models.CharField(max_length=200)
votes = models.IntegerField()
def ___unicode__(self):
return self.choice #shouldn't this return the choice
When I play around with it in the shell, I just get the "question" of the Poll object, but for some reason it won't return the "choice" of the Choice objects. I fail to see the difference. My output on the shell looks like this:
>>> Poll.objects.all()
[<Poll: What is up?>]
>>> Choice.objects.all()
[<Choice: Choice object>, <Choice: Choice object>, <Choice: Choice object>]
>>>
I was expecting the Choice objects to return something other than "Choice object". Does anybody have an idea about where I failed and what I should look into?
EDIT: Way to make me feel like an idiot. Yes, the three underscores were the problem. I was looking at that for about an hour now.
A:
You have three underscores before "unicode__" on the Choice class, it should be only two like in your Poll class, like this:
def __unicode__(self):
return u'%s' % self.choice
A:
Your Unicode method has too many underscores. It should read:
def __unicode__(self):
return u'%s' % self.choice
A:
Change:
class Choice(models.Model):
poll = models.ForeignKey(Poll)
choice = models.CharField(max_length=200)
votes = models.IntegerField()
def ___unicode__(self):
return self.choice #shouldn't this return the choice
To:
class Choice(models.Model):
poll = models.ForeignKey(Poll)
choice = models.CharField(max_length=200)
votes = models.IntegerField()
def __unicode__(self):
return self.choice #shouldn't this return the choice
You had too many underscores in the second __unicode__ definition
A:
The official Django book is a bit outdated. But the comments to the paragraphs are really useful. It should be two underscores:
___unicode__(self):
should be __unicode__(self):
|
Stuck on official Django Tutorial
|
I have just started out learning Python and also started looking into Django a little bit. So I copied this piece of code from the tutorial:
# Create your models here.
class Poll(models.Model):
question = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
def __unicode__(self):
return self.question
def was_published_today(self):
return self.pub_date.date() == datetime.date.today()
class Choice(models.Model):
poll = models.ForeignKey(Poll)
choice = models.CharField(max_length=200)
votes = models.IntegerField()
def ___unicode__(self):
return self.choice #shouldn't this return the choice
When I play around with it in the shell, I just get the "question" of the Poll object, but for some reason it won't return the "choice" of the Choice objects. I fail to see the difference. My output on the shell looks like this:
>>> Poll.objects.all()
[<Poll: What is up?>]
>>> Choice.objects.all()
[<Choice: Choice object>, <Choice: Choice object>, <Choice: Choice object>]
>>>
I was expecting the Choice objects to return something other than "Choice object". Does anybody have an idea about where I failed and what I should look into?
EDIT: Way to make me feel like an idiot. Yes, the three underscores were the problem. I was looking at that for about an hour now.
|
[
"You have three underscores before \"unicode__\" on the Choice class, it should be only two like in your Poll class, like this:\ndef __unicode__(self):\n return u'%s' % self.choice\n\n",
"Your Unicode method has too many underscores. It should read:\ndef __unicode__(self):\n return u'%s' % self.choice\n\n",
"Change:\nclass Choice(models.Model):\n poll = models.ForeignKey(Poll)\n choice = models.CharField(max_length=200)\n votes = models.IntegerField()\n def ___unicode__(self):\n return self.choice #shouldn't this return the choice\n\nTo:\nclass Choice(models.Model):\n poll = models.ForeignKey(Poll)\n choice = models.CharField(max_length=200)\n votes = models.IntegerField()\n def __unicode__(self):\n return self.choice #shouldn't this return the choice\n\nYou had too many underscores in the second __unicode__ definition\n",
"The official Django book is a bit outdated. But the comments to the paragraphs are really useful. It should be two underscores:\n___unicode__(self):\n\nshould be __unicode__(self):\n"
] |
[
7,
4,
3,
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000919927_django_python.txt
|
Q:
In python, how can you unload generated classes
I am working on a library that loads files (hdf5 - pytables) into an object structure. The actual classes being used for the structure are loaded as a string from the hdf5 file, and then
loaded in this fashion:
class NamespaceHolder(dict):
# stmt is the source code holding all the class defs
def execute(self, stmt):
exec stmt in self
The problem is that loading multiple classes like this causes objects to appear in the uncollectible part of the garbage collection, namely the actual class definitions. I can also load this into a global dictionary, but the problem of orphaned classes remains. Is there any way to unload the classes?
The main problem is the class's __mro__ attribute, which contains a reference back to the class itself, causing circular references that the garbage collector can't handle.
Here is a small test case to see for yourselves:
import gc
if __name__ == "__main__":
gc.enable()
gc.set_debug(gc.DEBUG_LEAK)
code = """
class DummyA(object):
pass
"""
context = {}
exec code in context
exec code in context
gc.collect()
print len(gc.garbage)
Just a note: I have already argued against parsing text out of a file to create classes earlier, but apparently they are set on using it here and see some benefits I don't, so moving away from this solution isn't feasible now.
A:
I think the GC can cope with circular references; however, what you'll need to do is remove the reference from the globals() dict:
try:
del globals()['DummyA']
except KeyError:
pass
otherwise there will be a non-circular reference to the class object that will stop it being cleaned up.
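Applied to the question's exec-into-a-dict setup, that means dropping the references held by the context dict, e.g. (a sketch):
import gc

code = """
class DummyA(object):
    pass
"""
context = {}
exec code in context
context.clear()   # drop the external references to DummyA (and __builtins__)
gc.collect()      # the class's internal cycles are now collectable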
A:
The gc.set_debug(gc.DEBUG_LEAK) causes the leak. Try this:
import gc
def foo():
code = """
class DummyA(object):
pass
"""
context = {}
exec code in context
exec code in context
gc.collect()
print len(gc.garbage), len(gc.get_objects())
gc.enable()
foo(); foo() # amount of objects doesn't increase
gc.set_debug(gc.DEBUG_LEAK)
foo() # leaks
|
In python, how can you unload generated classes
|
I am working on a library that loads files (hdf5 - pytables) into an object structure. The actual classes being used for the structure are loaded as a string from the hdf5 file, and then
loaded in this fashion:
class NamespaceHolder(dict):
# stmt is the source code holding all the class defs
def execute(self, stmt):
exec stmt in self
The problem is that loading multiple classes like this causes objects to appear in the uncollectible part of the garbage collection, namely the actual class definitions. I can also load this into a global dictionary, but the problem of orphaned classes remains. Is there any way to unload the classes?
The main problem is the class's __mro__ attribute, which contains a reference back to the class itself, causing circular references that the garbage collector can't handle.
Here is a small test case to see for yourselves:
import gc
if __name__ == "__main__":
gc.enable()
gc.set_debug(gc.DEBUG_LEAK)
code = """
class DummyA(object):
pass
"""
context = {}
exec code in context
exec code in context
gc.collect()
print len(gc.garbage)
Just a note: I have already argued against parsing text out of a file to create classes earlier, but apparently they are set on using it here and see some benefits I don't, so moving away from this solution isn't feasible now.
|
[
"I think the GC can cope with circular references, however you'll need to do is remove the reference from the globals() dict:\ntry:\n del globals()['DummyA']\nexcept KeyError:\n pass\n\notherwise there will be a non-circular reference to the class object that will stop it being cleaned up.\n",
"The gc.set_debug(gc.DEBUG_LEAK) causes the leak. Try this:\nimport gc\n\ndef foo(): \n code = \"\"\"\nclass DummyA(object):\n pass \n\"\"\"\n context = {}\n exec code in context\n exec code in context\n\n gc.collect()\n print len(gc.garbage), len(gc.get_objects())\n\ngc.enable()\nfoo(); foo() # amount of objects doesn't increase\ngc.set_debug(gc.DEBUG_LEAK)\nfoo() # leaks\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"classloader",
"dynamic",
"garbage_collection",
"python"
] |
stackoverflow_0000919924_classloader_dynamic_garbage_collection_python.txt
|
Q:
I need a sample of python unit testing sqlalchemy model with nose
Can someone show me how to write unit tests for a sqlalchemy model I created, using nose?
I just need one simple example.
Thanks.
A:
You can simply create an in-memory SQLite database and bind your session to that.
Example:
from db import session # probably a contextbound sessionmaker
from db import model
from sqlalchemy import create_engine
from nose.tools import eq_
def setup():
engine = create_engine('sqlite:///:memory:')
session.configure(bind=engine)
# You probably need to create some tables and
# load some test data, do so here.
# To create tables, you typically do:
model.metadata.create_all(engine)
def teardown():
session.remove()
def test_something():
instances = session.query(model.SomeObj).all()
eq_(0, len(instances))
session.add(model.SomeObj())
session.flush()
# ...
A:
Check out the fixture project. We used nose to test that and it's also a way to declaratively define data to test against, there will be some extensive examples for you to use there!
See also fixture documentation.
|
I need a sample of python unit testing sqlalchemy model with nose
|
Can someone show me how to write unit tests for a sqlalchemy model I created, using nose?
I just need one simple example.
Thanks.
|
[
"You can simply create an in-memory SQLite database and bind your session to that.\nExample:\n\nfrom db import session # probably a contextbound sessionmaker\nfrom db import model\n\nfrom sqlalchemy import create_engine\n\ndef setup():\n engine = create_engine('sqlite:///:memory:')\n session.configure(bind=engine)\n # You probably need to create some tables and \n # load some test data, do so here.\n\n # To create tables, you typically do:\n model.metadata.create_all(engine)\n\ndef teardown():\n session.remove()\n\n\ndef test_something():\n instances = session.query(model.SomeObj).all()\n eq_(0, len(instances))\n session.add(model.SomeObj())\n session.flush()\n # ...\n\n",
"Check out the fixture project. We used nose to test that and it's also a way to declaratively define data to test against, there will be some extensive examples for you to use there!\nSee also fixture documentation.\n"
] |
[
38,
2
] |
[] |
[] |
[
"nose",
"python",
"sqlalchemy",
"testing",
"unit_testing"
] |
stackoverflow_0000833626_nose_python_sqlalchemy_testing_unit_testing.txt
|
Q:
Python Cookies question
import cgitb
import Cookie, urllib2
from cookielib import FileCookieJar
cgitb.enable()
c = Cookie.SmartCookie()
c['ini'] = 1
savedc = FileCookieJar()
savedc.add_cookie_header(c.output())
savedc.save()
Shouldn't this save the cookie?...
I've been reading over the Python documentation like 1 million times, I just don't get it :(
Please help, someone :(
A:
Raf, all I can say is, Egads! The documentation certainly is not clear! I have used Python for years and this simple Stack Overflow question that I thought I'd quickly nab before getting started on real work for the day has taken me more than twenty minutes to answer. :-)
First: it turns out that the "Cookie" library and the "cookielib" library are completely separate and have nothing to do with each other. This is stated in the documentation, but you have to scroll down to the "See Also" section of each documentation page to find this out. It would be helpful if this were at the top of each page instead.
So, when you pass an object from the "Cookie" library into "cookielib", you're confusing the "cookielib" internals because it stores cookies inside of dictionaries and a "Cookie" cookie looks like — guess what! — a dictionary, so "cookielib" confuses it for one of its own internal data structures and saves other cookies inside of it. The error I get as a result is:
<type 'exceptions.AttributeError'>: 'str' object has no attribute 'discard'
args = ("'str' object has no attribute 'discard'",)
message = "'str' object has no attribute 'discard'"
Actually, that's the error I get after sticking a bunch of attributes on the Cookie.Cookie object that don't belong there, but that I added before I realized that I was engaged in the hopeless task of trying to get a Cookie.Cookie to behave like a cookielib.Cookie. :-) The earlier errors were all attribute-missing errors like:
<class 'Cookie.CookieError'>: Invalid Attribute name
args = ('Invalid Attribute name',)
message = 'Invalid Attribute name'
(And I'm putting the errors here in case some poor future soul mixes up the Cookie classes and does the Google searches I just did, none of which turned up any results for the errors I was getting!)
So before we proceed farther, I have to know: are you trying to act like a web server, delivering cookies to clients and trying to get them back intact when the client sends their next request, in which case I should show you how the "Cookie" module works? Or are you writing a web client, for testing or for fun, that messes with the cookies that it sends with a web request to a web site, in which case we should talk about "cookielib"?
A:
Make sure you name the file to store cookies in:
savedc = FileCookieJar('cookies.txt')
add_cookie_header takes a Request object; set_cookie takes a Cookie. As it says in the documentation, FileCookieJar.save "raises NotImplementedError. Subclasses may leave this method unimplemented." Guess you should have tried reading the documentation 1E6+1 times.
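For completeness, here is a sketch of cookielib used the intended way: a concrete jar class (MozillaCookieJar implements save()) fed by an urllib2 opener rather than by hand:
import cookielib
import urllib2

cj = cookielib.MozillaCookieJar('cookies.txt')
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.open('http://example.com/')   # any site that sets cookies via headers
cj.save(ignore_discard=True, ignore_expires=True)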
|
Python Cookies question
|
import cgitb
import Cookie, urllib2
from cookielib import FileCookieJar
cgitb.enable()
c = Cookie.SmartCookie()
c['ini'] = 1
savedc = FileCookieJar()
savedc.add_cookie_header(c.output())
savedc.save()
Shouldn't this save the cookie?...
I've been reading over the Python documentation like 1 million times, I just don't get it :(
Please help, someone :(
|
[
"Raf, all I can say is, Egads! The documentation certainly is not clear! I have used Python for years and this simple Stack Overflow question that I thought I'd quickly nab before getting started on real work for the day has taken me more than twenty minutes to answer. :-)\nFirst: it turns out that the \"Cookie\" library and the \"cookielib\" library are completely separate and have nothing to do with each other. This is stated in the documentation, but you have to scroll down to the \"See Also\" section of each documentation page to find this out. It would be helpful if this were at the top of each page instead.\nSo, when you pass an object from the \"Cookie\" library into \"cookielib\", you're confusing the \"cookielib\" internals because it stores cookies inside of dictionaries and a \"Cookie\" cookie looks like — guess what! — a dictionary, so \"cookielib\" confuses it for one of its own internal data structures and saves other cookies inside of it. The error I get as a result is:\n<type 'exceptions.AttributeError'>: 'str' object has no attribute 'discard'\n args = (\"'str' object has no attribute 'discard'\",)\n message = \"'str' object has no attribute 'discard'\" \n\nActually, that's the error I get after sticking a bunch of attributes on the Cookie.Cookie object that don't belong there, but that I added before I realized that I was engaged in the hopeless task of trying to get a Cookie.Cookie to behave like a cookielib.Cookie. :-) The earlier errors were all attribute-missing errors like:\n<class 'Cookie.CookieError'>: Invalid Attribute name\n args = ('Invalid Attribute name',)\n message = 'Invalid Attribute name' \n\n(And I'm putting the errors here in case some poor future soul mixes up the Cookie classes and does the Google searches I just did, none of which turned up any results for the errors I was getting!)\nSo before we proceed farther, I have to know: are you trying to act like a web server, delivering cookies to clients and trying to get them back intact when the client sends their next request, in which case I should show you how the \"Cookie\" module works? Or are you writing a web client, for testing or for fun, that messes with the cookies that it sends with a web request to a web site, in which case we should talk about \"cookielib\"?\n",
"Make sure you name the file to store cookies in:\nsavedc = FileCookieJar('cookies.txt')\n\nadd_cookie_header takes a Request object; set_cookie takes a Cookie. As it says in the documentation, FileCookieJar.save \"raises NotImplementedError. Subclasses may leave this method unimplemented.\" Guess you should have tried reading the documentation 1E6+1 times.\n"
] |
[
5,
0
] |
[] |
[] |
[
"cgi",
"cookies",
"python"
] |
stackoverflow_0000920472_cgi_cookies_python.txt
|
Q:
Python: how to store a draft email with BCC recipients to Exchange Server via IMAP?
I'm trying to store a draft e-mail via IMAP to a folder running on MS Exchange. Everything is OK, except that Bcc recipients don't get shown in the draft message stored on the server. Bcc recipients also don't receive the email if I send it with MS Outlook. If I read the message back with Python after I have stored it on the server, I can see the Bcc in the draft.
The following Python code reproduces this behavior:
import imaplib
import time
from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText
message = MIMEMultipart()
message['Subject'] = 'Test Draft'
message['From'] = '[email protected]'
message['to'] = '[email protected]'
message['cc'] = '[email protected]'
message['bcc'] = '[email protected]'
message.attach(MIMEText('This is a test.\n'))
server= imaplib.IMAP4('the.ser.ver.ip')
server.login('test', 'test')
server.append("Drafts"
,'\Draft'
,imaplib.Time2Internaldate(time.time())
,str(message))
server.logout()
If I run this code, a draft gets stored into the Draft folder on the Exchange Server. But if I look at the draft with MS Outlook, it does not include the bcc recipient (message['bcc'] = '[email protected]'). Message, to, from, cc ok, no error.
If I download drafts that already include a bcc from an Exchange folder, I can also see the bcc. Only uploading doesn't work for me.
Any help very much appreciated. Thanks. BTW, MAPI is not an option.
Update: X-Receiver didn't work for me. As for playing around with an IMAP-Folder in Outlook, I got an interesting result. If I access the draft via the IMAP-Folder in Outlook, I see the bcc. But if I access it via the MAPI-Folder, I don't see it. Will play a little bit around with that.
Conclusion: Actually, the code works just fine. See below for the answer that I found.
A:
Actually, the code works just fine. It creates the proper mail with all the right headers including bcc.
How does the mail client display bcc?
The mail client (e.g. Python or MS Outlook via IMAP or MAPI in my case) decides whether and how to display bcc-headers. Outlook for example doesn't display bcc headers from an IMAP folder. This is a feature to hide bcc recipients from each other where they have not been stripped away from the mail before (it is not clear from the standard whether one bcc recipient is allowed to see all other bcc recipients or not, see Wikipedia).
Who handles bcc upon sending an email?
Suppose now that we have drafted a message in a mail client and stored it in an IMAP or MAPI folder. The server providing the IMAP / MAPI folders leaves the draft message unchanged. What happens to the bcc-headers upon sending the mail is implementation dependent, and might depend both on the mail client and the mail transfer agent (e.g. MS Exchange Server in my case). In a nutshell, people do not agree whether the mail client or the mail transfer agent is responsible for removing bcc headers. It seems however to be the case that a majority of developers is of the opinion that it is the mail client's business, with the mail transfer agent not touching the mail (e.g. MS Exchange, MS SMTP, Exim, OpenWave). In this case, the mail transfer agent sends the email to the recipient as defined in the RCPT TO: of the SMTP communication, and leaves the email unchanged otherwise. Some other mail transfer agents strip bcc headers from emails however (e.g. sendmail, Lotus Notes). A very thorough discussion can be found on the Exim mailing list starting here.
In the case of MS Outlook and MS Exchange, MS Outlook never sends bcc (but sends individual emails for each bcc recipient) and MS Exchange does not touch the email headers, but sends the full email (possibly including bcc recipients) to the recipients defined in RCPT TO:.
Conclusion
I did not understand that there is no guaranteed behavior for bcc, and that usually the client handles bcc. I will rewrite my Python code to loop over bcc recipients and generate one email for each bcc recipient.
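That rewrite could look roughly like this with smtplib (the host and addresses are placeholders):
import smtplib
from email.MIMEText import MIMEText

def send_with_bcc(sender, to, bcc, subject, body):
    msg = MIMEText(body)
    msg['Subject'] = subject
    msg['From'] = sender
    msg['To'] = ', '.join(to)   # note: no Bcc header at all
    server = smtplib.SMTP('smtp.example.com')
    server.sendmail(sender, to, msg.as_string())   # the "public" copy
    for rcpt in bcc:
        server.sendmail(sender, [rcpt], msg.as_string())   # one copy per bcc recipient
    server.quit()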
A:
It could be that way by design. After all, the whole point of bcc is that the recipients are hidden from each other.
I understand that you are not sending the e-mail, just storing it. But my guess is that Exchange's internal rules kick in when the message is IMAP.appended to the folder, causing the bcc field to be stripped away.
Obviously, when messages are saved to a folder using Outlook the bcc field is not stripped away. But I guess Outlook communicates with Exchange using some internal mechanism (MAPI?).
All the above is just guesswork.
Something fun you could try:
In an empty Outlook/MAPI profile, create a IMAP account. Set it up to store Drafts and Sent Items on the Exchange server.
See if outlook using IMAP can save bcc of Drafts correctly.
I tried the above using the Evolution e-mail client connected to Exchange over IMAP. Using outlook (connected the normal way), I then had a look in Drafts and Sent Items. The bcc field was missing in both places.
I believe this supports my theory.
A:
Try setting this alternate version of the BCC header:
X-Receiver: [email protected]
Exchange in particular will treat this like a BCC when you send it. But I bet it will not strip it when you write via IMAP. You can include more than one BCC recipient by duplicating this line.
This is a complete hack, obviously.
|
Python: how to store a draft email with BCC recipients to Exchange Server via IMAP?
|
I'm trying to store a draft e-mail via IMAP to a folder running on MS Exchange. Everything is OK, except that Bcc recipients don't get shown in the draft message stored on the server. Bcc recipients also don't receive the email if I send it with MS Outlook. If I read the message back with Python after I have stored it on the server, I can see the Bcc in the draft.
The following Python code reproduces this behavior:
import imaplib
import time
from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText
message = MIMEMultipart()
message['Subject'] = 'Test Draft'
message['From'] = '[email protected]'
message['to'] = '[email protected]'
message['cc'] = '[email protected]'
message['bcc'] = '[email protected]'
message.attach(MIMEText('This is a test.\n'))
server= imaplib.IMAP4('the.ser.ver.ip')
server.login('test', 'test')
server.append("Drafts"
,'\Draft'
,imaplib.Time2Internaldate(time.time())
,str(message))
server.logout()
If I run this code, a draft gets stored into the Draft folder on the Exchange Server. But if I look at the draft with MS Outlook, it does not include the bcc recipient (message['bcc'] = '[email protected]'). Message, to, from, cc ok, no error.
If I download drafts that already include a bcc from an Exchange folder, I can also see the bcc. Only uploading doesn't work for me.
Any help very much appreciated. Thanks. BTW, MAPI is not an option.
Update: X-Receiver didn't work for me. As for playing around with an IMAP-Folder in Outlook, I got an interesting result. If I access the draft via the IMAP-Folder in Outlook, I see the bcc. But if I access it via the MAPI-Folder, I don't see it. Will play a little bit around with that.
Conclusion: Actually, the code works just fine. See below for the answer that I found.
|
[
"Actually, the code works just fine. It creates the proper mail with all the right headers including bcc.\nHow does the mail client display bcc?\nThe mail client (e.g. Python or MS Outlook via IMAP or MAPI in my case) decides whether and how to display bcc-headers. Outlook for example doesn't display bcc headers from an IMAP folder. This is a feature to hide bcc recipients from each other where they have not been stripped away from the mail before (it is not clear from the standard whether one bcc recipient is allowed to see all other bcc recipients or not, see Wikipedia).\nWho handles bcc upon sending an email?\nSuppose now that we have drafted a message in a mail client and stored it in an IMAP or MAPI folder. The server providing the IMAP / MAPI folders leaves the draft message unchanged. What happens to the bcc-headers upon sending the mail is implementation dependent, and might depend both on the mail client and the mail transfer agent (e.g. MS Exchange Server in my case). In a nutshell, people do not agree whether the mail client or the mail transfer agent is reponsible for removing bcc headers. It seems however to be the case that a majority of developers is of the opinion that it is the mail client's business with the mail transfer agent not touching the mail (e.g. MS Exchange, MS SMTP, Exim, OpenWave). In this case, the mail transfer agent sends the email to the recipient as defined in the RCPT TO: of the SMTP communication, and leaves the email unchanged otherwise. Some other mail transfer agents strip bcc headers from emails however (e.g. sendmail, Lotus Notes). A very thorough discussion can be found on the Exim mailing list starting here.\nIn the case of MS Outlook and MS Exchange, MS Outlook never sends bcc (but sends individual emails for each bcc recipient) and MS Exchange does not touch the email headers, but sends the full email (possibly including bcc recipients) to the recipients defined in RCPT TO:.\nConclusion\nI did not understand that there is no guaranteed behavior for bcc, and that usually the client handles bcc. I will rewrite my Python code to loop over bcc recipients and generate one email for each bcc recipient.\n",
"It could be that way by design. After all, the whole point of bcc is that the recipients are hidden from each other.\nI understand that you are not sending the e-mail, just storing it. But my guess is that Exchange's internal rules kick in when the message is IMAP.appended to the folder, causing the bcc field to be stripped away. \nObviously, when messages are saved to a folder using Outlook the bcc field is not stripped away. But I guess outlook communicates with Exchange using some internal mechanizm (MAPI?).\nAll the above is just guesswork.\nSomething fun you could try:\n\nIn an empty Outlook/MAPI profile, create a IMAP account. Set it up to store Drafts and Sent Items on the Exchange server.\nSee if outlook using IMAP can save bcc of Drafts correctly.\n\nI tried the above using the Evolution e-mail client connected to Exchange over IMAP. Using outlook (connected the normal way), I then had a look in Drafts and Sent Items. The bcc field was missing in both places.\nI belive this supports my theory.\n",
"Try setting this alternate version of the BCC header:\nX-Receiver: [email protected]\n\nExchange in particular will treat this like a BCC when you send it. But I bet it will not strip it when you write via IMAP. You can include more than one BCC recipient by duplicating this line.\nThis is a complete hack, obviously.\n"
] |
[
6,
1,
1
] |
[] |
[] |
[
"bcc",
"email",
"exchange_server",
"imap",
"python"
] |
stackoverflow_0000771907_bcc_email_exchange_server_imap_python.txt
|
Q:
Swig / Python memory leak detected
I have a very complicated class for which I'm attempting to make Python wrappers in SWIG. When I create an instance of the item in Python, however, I'm unable to initialize certain data members without receiving the message:
>>> myVar = myModule.myDataType()
swig/python detected a memory leak of type 'MyDataType *', no destructor found.
Does anyone know what I need to do to address this? Is there a flag I could be using to generate destructors?
A:
SWIG always generates destructor wrappers (unless %nodefaultdtor directive is used). However, in case where it doesn't know anything about a type, it will generate an opaque pointer wrapper, which will cause leaks (and the above message).
Please check that myDataType is a type that is known by SWIG. Re-run SWIG with debug messages turned on and check for any messages similar to
Nothing is known about Foo base type - Bar. Ignored
Receiving a message as above means that SWIG doesn't know your type hierarchy to the full extent and thus operates on limited information - which could cause it to not generate a dtor.
|
Swig / Python memory leak detected
|
I have a very complicated class for which I'm attempting to make Python wrappers in SWIG. When I create an instance of the item in Python, however, I'm unable to initialize certain data members without receiving the message:
>>> myVar = myModule.myDataType()
swig/python detected a memory leak of type 'MyDataType *', no destructor found.
Does anyone know what I need to do to address this? Is there a flag I could be using to generate destructors?
|
[
"SWIG always generates destructor wrappers (unless %nodefaultdtor directive is used). However, in case where it doesn't know anything about a type, it will generate an opaque pointer wrapper, which will cause leaks (and the above message).\nPlease check that myDataType is a type that is known by SWIG. Re-run SWIG with debug messages turned on and check for any messages similar to\nNothing is known about Foo base type - Bar. Ignored\n\nReceiving a message as above means that SWIG doesn't know your type hierarchy to the full extent and thus operates on limited information - which could cause it to not generate a dtor.\n"
] |
[
12
] |
[
"The error message is pretty clear to me, you need to define a destructor for this type. \n"
] |
[
-13
] |
[
"memory_leaks",
"python",
"swig"
] |
stackoverflow_0000918180_memory_leaks_python_swig.txt
|
Q:
What if I want to store a None value in the memcache?
This is specifically related to the Google App Engine Memcache API, but I'm sure it also applies to other Memcache tools.
The dictionary .get() method allows you to specify a default value, such as dict.get('key', 'defaultval')
This can be useful if it's possible you might want to store None as a value in a dictionary.
However, memcache.get() does not let you do this. I've modified my @memoize decorator so it looks like this:
def memoize(keyformat, time=1000000):
"""Decorator to memoize functions using memcache."""
def decorator(fxn):
def wrapper(*args, **kwargs):
key = keyformat + str(args[1:]) + str(kwargs)
from google.appengine.api import memcache
data = memcache.get(key)
if Debug(): return fxn(*args, **kwargs)
if data:
if data == 'None': data = None
return data
data = fxn(*args, **kwargs)
if data is None: data = 'None'
memcache.set(key, data, time)
return data
return wrapper
return decorator
Now I'm sure there's a good argument that I shouldn't be storing None values in the first place, but let's put that aside for now. Is there a better way I can handle this besides converting None vals to strings and back?
A:
A possible way to do this is to create a new class that represents None for this purpose, and assign instances of it to the cache (unfortunately you cannot extend NoneType). Alternatively, you could use the empty string "", or avoid storing None/null values altogether (absence of the key implies None).
Then check for instances of your 'None' class when you check the result of mc.get(key) (is None, == "", etc)
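A sketch of that sentinel idea (the class and function names are illustrative); since memcache pickles values, the check has to be isinstance rather than an identity test:
from google.appengine.api import memcache

class NoneValue(object):
    # stored in place of None so a cached None can be told from a miss
    pass

def cached(key, fxn, time=0):
    data = memcache.get(key)
    if data is not None:   # a real cache hit
        return None if isinstance(data, NoneValue) else data
    result = fxn()
    memcache.set(key, NoneValue() if result is None else result, time)
    return result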
A:
You could do something like what Haskell and Scala does and store an Option dictionary. The dictionary contains two keys: one key to indicate that it is valid and one key that is used to hold the data. Something like this:
{valid: true, data: whatyouwanttostore}
Then if get return None, you know that the cache was missed; if the result is a dictionary with None as the data, the you know that the data was in the cache but that it was false.
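In Python that wrapper can be very small (a sketch; returning a hit flag alongside the data is my convention):
from google.appengine.api import memcache

def cache_set(key, value, time=0):
    memcache.set(key, {'valid': True, 'data': value}, time)

def cache_get(key):
    wrapper = memcache.get(key)
    if wrapper is None:
        return None, False          # cache miss
    return wrapper['data'], True    # hit; the data itself may be None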
A:
Not really.
You could store a None value as an empty string, but there isn't really a way to store special data in a memcache.
What's the difference between the cache key not existing and the cache value being None? It's probably better to unify these two situations.
|
What if I want to store a None value in the memcache?
|
This is specifically related to the Google App Engine Memcache API, but I'm sure it also applies to other Memcache tools.
The dictionary .get() method allows you to specify a default value, such as dict.get('key', 'defaultval')
This can be useful if it's possible you might want to store None as a value in a dictionary.
However, memcache.get() does not let you do this. I've modified my @memoize decorator so it looks like this:
def memoize(keyformat, time=1000000):
"""Decorator to memoize functions using memcache."""
def decorator(fxn):
def wrapper(*args, **kwargs):
key = keyformat + str(args[1:]) + str(kwargs)
from google.appengine.api import memcache
data = memcache.get(key)
if Debug(): return fxn(*args, **kwargs)
if data:
if data == 'None': data = None
return data
data = fxn(*args, **kwargs)
if data is None: data = 'None'
memcache.set(key, data, time)
return data
return wrapper
return decorator
Now I'm sure there's a good argument that I shouldn't be storing None values in the first place, but let's put that aside for now. Is there a better way I can handle this besides converting None vals to strings and back?
|
[
"A possible way to do this is to create new class that defines None for this purpose, and assign instances of this to the cache (unfortunately you cannot extend None). Alternatively, you could use the empty string \"\", or avoid storing None/null values altogether (absence of the key implies None).\nThen check for instances of your 'None' class when you check the result of mc.get(key) (is None, == \"\", etc)\n",
"You could do something like what Haskell and Scala does and store an Option dictionary. The dictionary contains two keys: one key to indicate that it is valid and one key that is used to hold the data. Something like this:\n{valid: true, data: whatyouwanttostore}\n\nThen if get return None, you know that the cache was missed; if the result is a dictionary with None as the data, the you know that the data was in the cache but that it was false.\n",
"Not really.\nYou could store a None value as an empty string, but there isn't really a way to store special data in a memcache.\nWhat's the difference between the cache key not existing and the cache value being None? It's probably better to unify these two situations.\n"
] |
[
7,
4,
0
] |
[] |
[] |
[
"google_app_engine",
"memcached",
"python"
] |
stackoverflow_0000895386_google_app_engine_memcached_python.txt
|
Q:
Reporting charts and data for MS-Office users
We have lots of data and some charts representing one logical item. Charts and data are stored in various files. As a result, most users can easily access and re-use the information in their applications.
However, this is not exactly a good way of storing data. Amongst other reasons, charts belong to some data, the charts and data have some meta-information that is not reflected in the file system, there are a lot of files, etc.
Ideally, we want
one big "file" that can store all information (text, data and charts)
the "file" is human readable, portable and accessible by non-technical users
allows typical office applications like MS Word or MS Excel to extract text, data and charts easily.
light-weight, easy solution. Quick and dirty is sufficient. Not many users.
I am happy to use some scripting language like Python to generate the "file", third-party tools (ideally free as in beer), and everything that you find on a typical Windows-centric office computer.
Some ideas that we currently ponder:
using VB or pywin32 to script MS Word or Excel
creating html and publish it on a RESTful web server
Could you expand on the ideas above? Do you have any other ideas? What should we consider?
A:
I can only agree with Reef on the general concepts he presented:
You will almost certainly prefer the data in a database than in a single large file
You should not worry that the data is not directly manipulated by users because, as Reef mentioned, it can only go wrong. And you would be surprised at how ugly it can get
Concerning the usage of MS Office integration tools, I disagree with Reef. You can quite easily create an ActiveX Server (in Python if you like) that is accessible from the MS Office suite. As long as you have a solid infrastructure that allows some sort of file share, you could use that shared area to keep your code. I guess the mess Reef was talking about is mostly about keeping users' versions of your extract/import code in sync. If you do not use some sort of shared repository (a simple shared folder), or if your infrastructure fails so often that the shared folder becomes unavailable, you will be in great pain. Another thing that is somewhat painful if you do not have the appropriate tools but deal with many users: the ActiveX Server is best registered on each machine.
So.. I just said MS Office integration is very doable. But whether it is the best thing to do is a different matter. I strongly believe you will serve your users better if you build a web-site that handles their data for them. This sort of tool however almost certainly becomes an "ongoing project". Often, even as an "ongoing project", the time saved by your users could still make it worth it. But sometimes, strategically, you want to give your users a poorer experience to control project costs. In that case the ActiveX Server I mentioned could be what you want.
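For illustration, a minimal pywin32 COM server could look like this (the ProgID, CLSID and method are placeholders; generate a real GUID once with pythoncom.CreateGuid()):
class ReportServer:
    _public_methods_ = ['GetChartData']
    _reg_progid_ = 'MyCompany.ReportServer'   # placeholder ProgID
    _reg_clsid_ = '{00000000-0000-0000-0000-000000000000}'   # paste a real GUID here

    def GetChartData(self, name):
        return 'data for %s' % name   # stub

if __name__ == '__main__':
    import win32com.server.register
    win32com.server.register.UseCommandLine(ReportServer)   # handles register/unregister
Once registered, Excel VBA can reach it with CreateObject("MyCompany.ReportServer").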
A:
Instead of using one big file, You should use a database. Yes, You can store various types of files like gifs in the database if You like to.
The file would not be human readable or accessible by non-technical users, but this is good.
The database would have a website that Your non-technical users would use to insert, update and get data from. They would be able to display it on the page or export it to csv (or even xls - it's not that hard, I've seen some csv->xls converters). You could look into some open standard document formats, I think it should be quite easy to output data in them. Do not try to output in "doc" format (but You could try "docx"). You should be able to easily teach the users how to export their data to a CSV and upload it to the site, or they could use the web interface to insert the data if they like to.
If You allow Your users to mess with the raw data, they will break it (I have tried that, You have no idea how those guys could do that). The only way to prevent it is to make a web form that only allows them to perform certain actions that You know exactly how they are supposed to be performed.
The database + web page solution is the good one. Using VB or pywin32 to script MSOffice will get You in so much trouble I cannot even imagine.
You could use gnuplot or some other graphics library to draw (pretty straightforward to implement, it does all the hard work for You).
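As an illustration of that, a minimal sketch with matplotlib (one such library; the data and output path are made up):
import matplotlib
matplotlib.use('Agg')            # render straight to files, no GUI needed on a server
import matplotlib.pyplot as plt

def render_chart(xs, ys, title, out_path):
    plt.figure()
    plt.plot(xs, ys)
    plt.title(title)
    plt.savefig(out_path)        # e.g. a PNG the web page can serve
    plt.close()

render_chart([1, 2, 3], [2.0, 4.5, 3.1], 'Sample chart', '/tmp/sample.png')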
I am afraid that the "quick" and dirty solution is tempting, but I only can say one thing: it will not be quick. In a few weeks You will find that hacking around with MSOffice scripting is messy, buggy and unreliable and the non-technical guys will hate it and say that in other companies they used to have a simple web panel that did that. Then You will find that You will not be able to ask about the scripting because everyone uses the web interfaces nowadays, as they are quite easy to implement and maintain.
This is not a small project, it's a medium-sized one; You need to remember this while writing it. It will take some time to do it and test it, and You will have to add new features as the non-technical guys start using it. I knew some passionate PHP teenagers who would be able to write this panel in a week, but as I understand You have some better resources, so I hope You will come up with a really reliable, modular, extensible solution with good usability and happy users.
Good luck!
|
Reporting charts and data for MS-Office users
|
We have lots of data and some charts representing one logical item. Charts and data are stored in various files. As a result, most users can easily access and re-use the information in their applications.
However, this is not exactly a good way of storing data. Amongst other reasons, charts belong to some data, the charts and data have some meta-information that is not reflected in the file system, there are a lot of files, etc.
Ideally, we want
one big "file" that can store all information (text, data and charts)
the "file" is human readable, portable and accessible by non-technical users
allows typical office applications like MS Word or MS Excel to extract text, data and charts easily.
light-weight, easy solution. Quick and dirty is sufficient. Not many users.
I am happy to use some scripting language like Python to generate the "file", third-party tools (ideally free as in beer), and everything that you find on a typical Windows-centric office computer.
Some ideas that we currently ponder:
using VB or pywin32 to script MS Word or Excel
creating HTML and publishing it on a RESTful web server
Could you expand on the ideas above? Do you have any other ideas? What should we consider?
|
[
"I can only agree with Reef on the general concepts he presented:\n\nYou will almost certainly prefer the data in a database than in a single large file\nYou should not worry that the data is not directly manipulated by users because as Reef mentioned, it can only go wrong. And you would be suprised at how ugly it can get\n\nConcerning the usage of MS Office integration tools I disagree with Reef. You can quite easily create an ActiveX Server (in Python if you like) that is accessible from the MS Office suite. As long as you have a solid infrastructure that allows some sort of file share, you could use that shared area to keep your code. I guess the mess Reef was talking about mostly is about keeping users' versions of your extract/import code in sync. If you do not use some sort of shared repository (a simple shared folder) or if your infrastructure fails you often so that the shared folder becomes unavailable you will be in great pain. Note what is also somewhat painful if you do not have the appropriate tools but deal with many users: The ActiveX Server is best registered on each machine.\nSo.. I just said MS Office integration is very doable. But whether it is the best thing to do is a different matter. I strongly believe you will serve your users better if you build a web-site that handles their data for them. This sort of tool however almost certainly becomes an \"ongoing project\". Often, even as an \"ongoing project\", the time saved by your users could still make it worth it. But sometimes, strategically, you want to give your users a poorer experience to control project costs. In that case the ActiveX Server I mentioned could be what you want.\n",
"Instead of using one big file, You should use a database. Yes, You can store various types of files like gifs in the database if You like to.\nThe file would not be human readable or accessible by non-technical users, but this is good.\nThe database would have a website that Your non-technical users would use to insert, update and get data from. They would be able to display it on the page or export it to csv (or even xls - it's not that hard, I've seen some csv->xls converters). You could look into some open standard document formats, I think it should be quite easy to output data with in it. Do not try to output in \"doc\" format (but You could try \"docx\"). You should be able to easily teach the users how to export their data to a CSV and upload it to the site, or they could use the web interface to insert the data if they like to.\nIf You will allow Your users to mess with the raw data, they will break it (i have tried that, You have no idea how those guys could do that). The only way to prevent it is to make a web form that only allows them to perform certain actions that You exactly know how that they should suppose to perform.\nThe database + web page solution is the good one. Using VB or pywin32 to script MSOffice will get You in so much trouble I cannot even imagine.\nYou could use gnuplot or some other graphics library to draw (pretty straightforward to implement, it does all the hard work for You).\nI am afraid that the \"quick\" and dirty solution is tempting, but I only can say one thing: it will not be quick. In a few weeks You will find that hacking around with MSOffice scripting is messy, buggy and unreliable and the non-technical guys will hate it and say that in other companies they used to have a simple web panel that did that. Then You will find that You will not be able to ask about the scripting because everyone uses the web interfaces nowadays, as they are quite easy to implement and maintain.\nThis is not a small project, it's a medium sized one, You need to remember this while writing it. It will take some time to do it and test it and You will have to add new features as the non-technical guys will start using it. I knew some passionate php teenagers who would be able to write this panel in a week, but as I understand You have some better resources so I hope You will come with a really reliable, modular, extensible solution with good usability and happy users.\nGood luck!\n"
] |
[
2,
1
] |
[] |
[] |
[
"ms_office",
"python",
"reporting",
"scripting",
"web_services"
] |
stackoverflow_0000915726_ms_office_python_reporting_scripting_web_services.txt
|
Q:
Progress bar with long web requests
In a django application I am working on, I have just added the ability to archive a number of files (starting at 50mb in total) to a zip file. Currently, I am doing it something like this:
get files to zip
zip all files
send HTML response
Obviously, this causes a big wait on line two where the files are being compressed. What can I do to make this process a whole lot better for the user? Although having a progress bar would be the best, even if it just returned a static page saying 'please wait' or whatever.
Any thoughts and ideas would be loved.
A:
You should keep in mind that showing the progress bar may not be a good idea, since you can get timeouts or make your server suffer from lots of simultaneous requests.
Put the zipping task in the queue and have it callback to notify the user somehow - by e-mail for instance - that the process has finished.
Take a look at django-lineup
Your code will look pretty much like:
from lineup import registry
from lineup import _debug
def create_archive(queue_id, queue):
    queue.set_param("zip_link", _create_archive(resource = queue.context_object, user = queue.user))
    return queue

def create_archive_callback(queue_id, queue):
    _send_email_notification(subject = queue.get_param("zip_link"), user = queue.user)
    return queue
registry.register_job('create_archive', create_archive, callback = create_archive_callback)
In your views, create queued tasks by:
from lineup.factory import JobFactory
j = JobFactory()
j.create_job(self, 'create_archive', request.user, your_resource_object_containing_files_to_zip, { 'extra_param': 'value' })
Then run your queue processor (probably inside of a screen session):
./manage.py run_queue
Oh, and on the subject you might also be interested in estimating zip file creation time. I got pretty slick answers there.
A:
Fun fact: You might be able to use a progress bar to trick users into thinking that things are going faster than they really are.
http://www.chrisharrison.net/projects/progressbars/index.html
A:
You could use a 'log-file' to keep track of the zipped files, and of how many files still remain.
The procedural way should be like this:
Count the number of files, write it in a text file, in a format like totalfiles.filesprocessed
Every file you zip, simply update the file
So, if you have to zip 3 files, the log file will grow as:
3.0 -> begin, no file still processed
3.1 -> 1 file on 3 processed, 33% task complete
3.2 -> 2 file on 3 processed, 66% task complete
3.3 -> 3 file on 3 processed, 100% task complete
And then with a simple ajax function (an interval) check the log-file every second.
In Python, opening, reading and writing such a small file should be very quick, but it may cause some request trouble if you have many users doing that at the same time; obviously you'll need to create a log file for each request, maybe with a random name, and delete it after the task is completed.
A problem could be that, to let the ajax read the log-file, you'll need to open and close the file handler in Python every time you update it.
Finally, for a more accurate progress meter, you could even use the file size instead of the number of files as the parameter.
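A minimal sketch of the writer side of this scheme, using the total.processed format described above (the paths and helper name are made up):
import zipfile

def zip_with_progress(paths, archive_path, log_path):
    total = len(paths)
    archive = zipfile.ZipFile(archive_path, 'w')
    for done, path in enumerate(paths):
        archive.write(path)
        # rewrite the log as "total.processed" after each file
        log = open(log_path, 'w')
        log.write('%d.%d' % (total, done + 1))
        log.close()
    archive.close()

The ajax side then just fetches the log file every second and splits on the dot to compute the percentage.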
|
Progress bar with long web requests
|
In a django application I am working on, I have just added the ability to archive a number of files (starting at 50mb in total) to a zip file. Currently, I am doing it something like this:
get files to zip
zip all files
send HTML response
Obviously, this causes a big wait on line two where the files are being compressed. What can I do to make this process a whole lot better for the user? Although having a progress bar would be the best, even if it just returned a static page saying 'please wait' or whatever.
Any thoughts and ideas would be loved.
|
[
"You should keep in mind showing the progress bar may not be a good idea, since you can get timeouts or get your server suffer from submitting lot of simultaneous requests.\nPut the zipping task in the queue and have it callback to notify the user somehow - by e-mail for instance - that the process has finished.\nTake a look at django-lineup\nYour code will look pretty much like:\nfrom lineup import registry\nfrom lineup import _debug\n\ndef create_archive(queue_id, queue):\n queue.set_param(\"zip_link\", _create_archive(resource = queue.context_object, user = queue.user))\n return queue\n\n\ndef create_archive_callback(queue_id, queue):\n _send_email_notification(subject = queue.get_param(\"zip_link\"), user = queue.user)\n return queue\n\nregistry.register_job('create_archive', create_archive, callback = create_archive_callback)\n\nIn your views, create queued tasks by:\n from lineup.factory import JobFactory\n j = JobFactory()\n j.create_job(self, 'create_archive', request.user, your_resource_object_containing_files_to_zip, { 'extra_param': 'value' })\n\nThen run your queue processor (probably inside of a screen session):\n./manage.py run_queue\n\nOh, and on the subject you might be also interested in estimating zip file creation time. I got pretty slick answers there.\n",
"Fun fact: You might be able to use a progress bar to trick users into thinking that things are going faster than they really are.\nhttp://www.chrisharrison.net/projects/progressbars/index.html\n",
"You could use a 'log-file' to keep track of the zipped files, and of how many files still remain.\nThe procedural way should be like this:\n\nCount the numbers of file, write it in a text file, in a format like totalfiles.filespreocessed\nEvery file you zip, simply update the file\n\nSo, if you have to zip 3 files, the log file will grown as:\n3.0 -> begin, no file still processed\n3.1 -> 1 file on 3 processed, 33% task complete\n3.2 -> 2 file on 3 processed, 66% task complete\n3.3 -> 3 file on 3 processed, 100% task complete\n\nAnd then with a simple ajax function (an interval) check the log-file every second.\nIn python, open, read and rite a file such small should be very quick, but maybe can cause some requests trouble if you'll have many users doing that in the same time, but obviously you'll need to create a log file for each request, maybe with rand name, and delete it after the task is completed.\nA problem could be that, for let the ajax read the log-file, you'll need to open and close the file handler in python every time you update it.\nEventually, for a more accurate progress meter, you culd even use the file size instead of the number of file as parameter.\n"
] |
[
4,
1,
0
] |
[
"Better than a static page, show a Javascript dialog (using Shadowbox, JQuery UI or some custom method) with a throbber ( you can get some at hxxp://www.ajaxload.info/ ). You can also show the throbber in your page, without dialogs. Most users only want to know their action is being handled, and can live without reliable progress information (\"Please wait, this could take some time...\")\nJQUery UI also has a progress bar API. You could make periodic AJAX queries to a didcated page on your website to get a progress report and change the progress bar accordingly. Depending on how often the archiving is ran, how many users can trigger it and how you authenticate your users, this could be quite hard.\n"
] |
[
-1
] |
[
"django",
"python"
] |
stackoverflow_0000919816_django_python.txt
|
Q:
How to use classes derived from Python's list class
This is a followup to question 912526 - How do I pass lots of variables to and from a function in Python?.
There are lots of variables that need to get passed around in the program I'm writing, and from my previous question I understand that I should put these variables into classes, and then pass around the classes.
Some of these variables come in repetitive sets - for a thin film calculation I need to track the optical properties (index of refraction, absorption, thickness, etc) for a number of layers.
Is the best way to store variables like this to create a class derived from a Python list to store the set of classes which each hold the variables for a single layer? And then put the functions that deal with the set of layers in the class derived from list, and the functions that deal with a specific layer in that class? Is there a better way to do this with a single class?
Using the two class approach in the following example, I'm able to set things up so that I can access variables using statements like
n1 = layers[5].n
This is the best way to do this, right?
#Test passing values to and from functions
class Layers(list):
    def add(self,n,k,comment):
        self.append( OneLayer(n,k,comment) )

    def input_string(self):
        input_string = []
        for layer in self:
            vars = layer.input_string()
            for var in vars:
                input_string.append( var )
        return input_string

    def set_layers(self,results):
        for layer,i in enumerate(self):
            j = i*layer.num_var
            layer.set_layer( *results[j:j+2] )

class OneLayer(object):
    def __init__(self,n,k,comment):
        self.n = n
        self.k = k
        self.comment = comment

    def input_string(self):
        return [['f','Index of Refraction',self.n], ['f','Absorption',self.k],['s','Comment',self.comment]]

    def set_layer(self,n,k,comment):
        self.n = n; self.k=k; self.comment = comment

    def num_var(self):
        return 3

if __name__ == '__main__':
    layers = Layers()
    layers.add(1.0,0.0,'This vacuum sucks')
    layers.add(1.5,0.0,'BK 7 Glass')
    print layers[0].n
    print layers.input_string()
    layers[1].set_layer(1.77,0.0,'Sapphire')
    print layers.input_string()
I get the following output from this test program:
1.0
[['f', 'Index of Refraction', 1.0], ['f', 'Absorption', 0.0], ['s', 'Comment', 'This vacuum sucks'], ['f', 'Index of Refraction', 1.5], ['f', 'Absorption', 0.0], ['s', 'Comment', 'BK 7 Glass']]
[['f', 'Index of Refraction', 1.0], ['f', 'Absorption', 0.0], ['s', 'Comment', 'This vacuum sucks'], ['f', 'Index of Refraction', 1.77], ['f', 'Absorption', 0.0], ['s', 'Comment', 'Sapphire']]
A:
There are several issues in your code:
1. If you make any list operation the result will be a native list:
layers1 = Layers()
layers2 = Layers()
layers1 + layers2 -> the result will be a native list
2. Why define input_string when you can override __repr__ or __str__?
3. Why do you even have to derive from list in this case? You only need to derive from list if you want your class to behave exactly like a list. But in your case you seem to be looking for a container.
All you need to do to get your class to behave similar to a list is to override some special python methods http://docs.python.org/reference/datamodel.html#emulating-container-types
class Layers(object):
    def __init__(self, container=None):
        if container is None:
            container = []
        self.container = container

    def add(self,n,k,comment):
        self.container.append([n,k,comment])

    def __str__(self):
        return str(self.container)

    def __repr__(self):
        return str(self.container)

    def __getitem__(self, key):
        return Layers(self.container[key])

    def __len__(self):
        return len(self.container)
>>> l = Layers()
>>> l.add(1, 2, 'test')
>>> l.add(1, 2, 'test')
>>> l
[[1, 2, 'test'], [1, 2, 'test']]
>>> l[0]
[1, 2, 'test']
>>> len(l)
2
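As a footnote to point 1: if you do decide to keep subclassing list, you can work around the native-list problem by re-wrapping results yourself - a quick sketch in the question's Python 2 style:
class Layers(list):
    def __add__(self, other):
        # list.__add__ returns a plain list, so wrap it back up
        return Layers(list.__add__(self, other))

    def __getslice__(self, i, j):
        # Python 2 slicing also returns a plain list by default
        return Layers(list.__getslice__(self, i, j))

This gets tedious fast, which is another argument for the container approach shown above.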
|
How to use classes derived from Python's list class
|
This is a followup to question 912526 - How do I pass lots of variables to and from a function in Python?.
There are lots of variables that need to get passed around in the program I'm writing, and from my previous question I understand that I should put these variables into classes, and then pass around the classes.
Some of these variables come in repetitive sets - for a thin film calculation I need to track the optical properties (index of refraction, absorption, thickness, etc) for a number of layers.
Is the best way to store variables like this to create a class derived from a Python list to store the set of classes which each hold the variables for a single layer? And then put the functions that deal with the set of layers in the class derived from list, and the functions that deal with a specific layer in that class? Is there a better way to do this with a single class?
Using the two class approach in the following example, I'm able to set things up so that I can access variables using statements like
n1 = layers[5].n
This is the best way to do this, right?
#Test passing values to and from functions
class Layers(list):
    def add(self,n,k,comment):
        self.append( OneLayer(n,k,comment) )

    def input_string(self):
        input_string = []
        for layer in self:
            vars = layer.input_string()
            for var in vars:
                input_string.append( var )
        return input_string

    def set_layers(self,results):
        for layer,i in enumerate(self):
            j = i*layer.num_var
            layer.set_layer( *results[j:j+2] )

class OneLayer(object):
    def __init__(self,n,k,comment):
        self.n = n
        self.k = k
        self.comment = comment

    def input_string(self):
        return [['f','Index of Refraction',self.n], ['f','Absorption',self.k],['s','Comment',self.comment]]

    def set_layer(self,n,k,comment):
        self.n = n; self.k=k; self.comment = comment

    def num_var(self):
        return 3

if __name__ == '__main__':
    layers = Layers()
    layers.add(1.0,0.0,'This vacuum sucks')
    layers.add(1.5,0.0,'BK 7 Glass')
    print layers[0].n
    print layers.input_string()
    layers[1].set_layer(1.77,0.0,'Sapphire')
    print layers.input_string()
I get the following output from this test program:
1.0
[['f', 'Index of Refraction', 1.0], ['f', 'Absorption', 0.0], ['s', 'Comment', 'This vacuum sucks'], ['f', 'Index of Refraction', 1.5], ['f', 'Absorption', 0.0], ['s', 'Comment', 'BK 7 Glass']]
[['f', 'Index of Refraction', 1.0], ['f', 'Absorption', 0.0], ['s', 'Comment', 'This vacuum sucks'], ['f', 'Index of Refraction', 1.77], ['f', 'Absorption', 0.0], ['s', 'Comment', 'Sapphire']]
|
[
"There are several issues in your code:\n1.If you make any list operation the result will be a native list:\nlayers1 = Layers()\nlayers2 = Layers()\nlayers1 + layers2 -> the result will be a native list\n\n2.Why define input_string when you can override __repr__ or __str__\n3.Why do you even have to derive from list in this case? You only need to derive from list if you want your class to behave exactly like a list. But in your case you seem to be looking for a container.\nAll you need to do to get your class to behave similar to a list is to override some special python methods http://docs.python.org/reference/datamodel.html#emulating-container-types\nclass Layers(object):\n def __init__(self, container=None):\n if container is None:\n container = []\n self.container = container\n\n def add(self,n,k,comment):\n self.container.append([n,k,comment])\n\n def __str__(self):\n return str(self.container)\n\n def __repr__(self):\n return str(self.container)\n\n def __getitem__(self, key):\n return Layers(self.container[key])\n\n def __len__(self):\n return len(self.container)\n\n>>> l = Layers()\n>>> l.add(1, 2, 'test')\n>>> l.add(1, 2, 'test')\n>>> l\n[[1, 2, 'test'], [1, 2, 'test']]\n>>> l[0]\n[1, 2, 'test']\n>>> len(l)\n2\n\n"
] |
[
9
] |
[] |
[] |
[
"class",
"python"
] |
stackoverflow_0000921334_class_python.txt
|
Q:
Using cookies with python to store searches
Hey, I have a webpage for searching a database. I would like to be able to implement cookies using Python to store what a user searches for and provide them with a recently-searched field when they return. Is there a way to implement this using the Python Cookie library?
A:
Usually, we do the following.
Use a framework.
Establish a session. Ideally, ask for a username of some kind. If you don't want to ask for names or anything, you can try to use the browser's IP address as the key for the session (this can turn into a nightmare, but you can try it.)
Using the session identification (username or IP address), save the searches in a database on your server.
When the person logs in again, retrieve their query information from your local database.
Moral of the story. Don't trust the cookie to have anything in it but session identification. And even then, it will get hijacked either on purpose or accidentally.
Intentional hijacking is the way one person poses as another.
Accident hijacking occurs when multiple people share the same IP address (because they share the same computer).
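A minimal sketch of steps 3 and 4 with sqlite3 - the table layout and helper names are made up for the example:
import sqlite3

db = sqlite3.connect('/tmp/searches.db')
db.execute("CREATE TABLE IF NOT EXISTS searches (session_id TEXT, query TEXT)")

def save_search(session_id, query):
    db.execute("INSERT INTO searches (session_id, query) VALUES (?, ?)",
               (session_id, query))
    db.commit()

def recent_searches(session_id, limit=5):
    rows = db.execute("SELECT query FROM searches WHERE session_id = ?"
                      " ORDER BY rowid DESC LIMIT ?", (session_id, limit))
    return [row[0] for row in rows]

The cookie then only has to carry the session identification.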
A:
To use cookies you can use whichever API for cookies your framework is using.
Here's a full working CherryPy example for doing what you want: store searches and provide them later.
import cherrypy
import json
class Root(object):
    def index(self):
        last_search = cherrypy.request.cookie.get('terms', None)
        if last_search:
            last_search = ','.join(json.loads(last_search.value))
        else:
            last_search = 'None'
        return """
<html>
<head>
    <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
    <title>Search</title>
</head>
<body>
    <h1>Search</h1>
    <form action="do_search" method="get">
        <p>Please type your search terms:
        <input type="text" name="query" /></p>
        <p>Hint: Last 5 used terms: %s</p>
        <p><input type="submit" value="Search →" /></p>
    </form>
</body>
""" % (last_search,)
    index.exposed = True

    def do_search(self, query):
        results = ['some', 'results', 'here', 'simulating', 'a', 'search']
        print cherrypy.request.cookie
        last_search = cherrypy.request.cookie.get('terms', None)
        if last_search:
            last_search = json.loads(last_search.value)[-4:] # get last 4
        else:
            last_search = []
        last_search.append(query) # append new term
        cherrypy.response.cookie['terms'] = json.dumps(last_search)
        return """
<html>
<head>
    <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
    <title>Search</title>
</head>
<body>
    <h1>Search Results for %r</h1>
    <p>%s</p>
    <p><a href="%s">click here to search again</a>
    </p>
</body>
""" % (query, ', '.join(results), cherrypy.url('/'))
    do_search.exposed = True

application = cherrypy.tree.mount(Root(), '/')

if __name__ == '__main__':
    cherrypy.quickstart()
NOTES:
It uses json to serialize the list and store it in the cookie as a string. The json module was introduced in Python 2.6, so if you don't have 2.6 you can use simplejson instead to run the example.
Sorry about the HTML in the code. That would ideally be outside the code, in a template file, using a template engine such as jinja2.
|
Using cookies with python to store searches
|
Hey, I have a webpage for searching a database. I would like to be able to implement cookies using Python to store what a user searches for and provide them with a recently-searched field when they return. Is there a way to implement this using the Python Cookie library?
|
[
"Usually, we do the following.\n\nUse a framework.\nEstablish a session. Ideally, ask for a username of some kind. If you don't want to ask for names or anything, you can try to the browser's IP address as the key for the session (this can turn into a nightmare, but you can try it.)\nUsing the session identification (username or IP address), save the searches in a database on your server.\nWhen the person logs in again, retrieve their query information from your local database.\n\nMoral of the story. Don't trust the cookie to have anything it but session identification. And even then, it will get hijacked either on purpose or accidentally.\n\nIntentional hijacking is the way one person poses as another.\nAccident hijacking occurs when multiple people share the same IP address (because they share the same computer).\n\n",
"To use cookies you can use whichever API for cookies your framework is using.\nHere's a CherryPy full working example for doing what you want, store searches and provide them later.\nimport cherrypy\nimport json\n\nclass Root(object):\n def index(self):\n last_search = cherrypy.request.cookie.get('terms', None)\n if last_search: \n last_search = ','.join(json.loads(last_search.value))\n else:\n last_search = 'None'\n return \"\"\"\n<html>\n<head>\n <meta http-equiv=\"Content-type\" content=\"text/html; charset=utf-8\" />\n <title>Search</title>\n</head>\n<body>\n <h1>Search</h1>\n <form action=\"do_search\" method=\"get\">\n <p>Please type your search terms: \n <input type=\"text\" name=\"query\" /></p>\n <p>Hint: Last 5 used terms: %s</p>\n <p><input type=\"submit\" value=\"Search →\" /></p>\n </form>\n</body>\n\"\"\" % (last_search,)\n index.exposed = True\n\n def do_search(self, query):\n results = ['some', 'results', 'here', 'simulating', 'a', 'search']\n print cherrypy.request.cookie\n last_search = cherrypy.request.cookie.get('terms', None)\n if last_search:\n last_search = json.loads(last_search.value)[-4:] # get last 4\n else:\n last_search = []\n last_search.append(query) # append new term\n cherrypy.response.cookie['terms'] = json.dumps(last_search)\n return \"\"\"\n<html>\n<head>\n <meta http-equiv=\"Content-type\" content=\"text/html; charset=utf-8\" />\n <title>Search</title>\n</head>\n<body>\n <h1>Search Results for %r</h1>\n <p>%s</p>\n <p><a href=\"%s\">click here to search again</a>\n </p>\n</body>\n\"\"\" % (query, ', '.join(results), cherrypy.url('/'))\n do_search.exposed = True\n\napplication = cherrypy.tree.mount(Root(), '/')\n\nif __name__ == '__main__':\n cherrypy.quickstart()\n\nNOTES:\nIt uses json to serialize the list and store it in the cookie as a string. Python json was introduced in python 2.6, so if you need it before 2.6, so if you don't have 2.6 you can use simplejson instead to run the example.\nSorry about the HTML in the code. That would ideally be outside the code, in a template file, using a template engine such as jinja2.\n"
] |
[
1,
0
] |
[] |
[] |
[
"cookies",
"python"
] |
stackoverflow_0000920278_cookies_python.txt
|
Q:
Python and Qt - function reloading
I have an application class inherited from QtGui.QDialog.
I have to override the show function but keep its base functionality. I took this idea from C#.
There i could do something like this:
static void show()
{
    // My code...
    base.show();
}
I want to do something like that but with Python and Qt (PyQt). Can I do that?
A:
Check out the super() function, but note some pitfalls.
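A minimal sketch of what that looks like in PyQt (assuming PyQt4; the dialog class is made up):
from PyQt4 import QtGui

class MyDialog(QtGui.QDialog):
    def show(self):
        # my code...
        print 'about to show the dialog'
        # the equivalent of C#'s base.show()
        super(MyDialog, self).show()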
|
Python and Qt - function reloading
|
I have an application class inherited from QtGui.QDialog.
I have to override the show function but keep its base functionality. I took this idea from C#.
There i could do something like this:
static void show()
{
    // My code...
    base.show();
}
I want to do something like that but with Python and Qt (PyQt). Can I do that?
|
[
"Checkout the super() function but note some pitfalls.\n"
] |
[
1
] |
[] |
[] |
[
"function",
"python",
"reloading"
] |
stackoverflow_0000921929_function_python_reloading.txt
|
Q:
How do I deactivate an egg?
I've installed cx_Oracle (repeatedly) and I just can't get it to work on my Intel Mac. How do I deactivate/uninstall it?
A:
You simply delete the .egg file
On OS X they are installed into /Library/Python/2.5/site-packages/ - in that folder you should find a file named cx_Oracle.egg or similar. You can simply delete this file and it will be gone.
One way of finding the file is, if you can import the module, simply displaying the repr() of the module:
>>> import urllib
>>> urllib
<module 'urllib' from '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/urllib.pyc'>
>>> import BeautifulSoup
>>> BeautifulSoup
<module 'BeautifulSoup' from '/Library/Python/2.5/site-packages/BeautifulSoup-3.0.6-py2.5.egg/BeautifulSoup.py'>
If the import fails, the traceback should show the location of the module also.
One thing to note: if the module installed any command-line tools, you'll have to remove these manually as well. On OS X they are installed in /usr/local/bin/ - you can find any tool which uses cx_Oracle using grep:
cd /usr/local/bin/
grep EASY-INSTALL * | grep cx_Oracle
Or simply..
cd /usr/local/bin/
grep cx_Oracle *
|
How do I deactivate an egg?
|
I've installed cx_Oracle (repeatedly) and I just can't get it to work on my Intel Mac. How do I deactivate/uninstall it?
|
[
"You simply delete the .egg file\nOn OS X they are installed into /Library/Python/2.5/site-packages/ - in that folder you should find a file named cx_Oracle.egg or similar. You can simple delete this file and it will be gone.\nOne way of finding the file is, if you can import the module, simply displaying the repr() of the module:\n>>> import urllib\n>>> urllib\n<module 'urllib' from '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/urllib.pyc'>\n>>> import BeautifulSoup\n>>> BeautifulSoup\n<module 'BeautifulSoup' from '/Library/Python/2.5/site-packages/BeautifulSoup-3.0.6-py2.5.egg/BeautifulSoup.py'>\n\nIf the import fails, the traceback should show the location of the module also.\nOne thing to note, if the module installed any command-line tools, you'll have to remove these manually also.. On OS X they are installde in /usr/local/bin/ - you can find any tool which uses cx_Oracle using grep:\ncd /usr/local/bin/\ngrep EASY-INSTALL * | grep cx_Oracle\n\nOr simply..\ncd /usr/local/bin/\ngrep cx_Oracle *\n\n"
] |
[
3
] |
[] |
[] |
[
"egg",
"python",
"uninstallation"
] |
stackoverflow_0000922323_egg_python_uninstallation.txt
|
Q:
Learning parser in python
I recall having read about a parser to which you just feed some sample lines for it to learn how to parse some text.
It just determines the difference between two lines to know what the variable parts are. I thought it was written in Python, but I'm not sure. Does anyone know what library that was?
A:
Probably you mean TemplateMaker, I haven't tried it yet, but it builds on well-researched longest-common-substring algorithms and thus should work reasonably... If you are interested in different (more complex) approaches, you can easily find a lot of material on Google Scholar using the query "wrapper induction" or "template induction".
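If TemplateMaker is the one you remember, its usage looks roughly like this (from memory, so treat the exact API as an assumption):
from templatemaker import Template

t = Template()
t.learn('<b>this and that</b>')
t.learn('<b>alex and sue</b>')
print t.as_text('!')                     # '<b>! and !</b>'
print t.extract('<b>red and blue</b>')   # ('red', 'blue')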
A:
Conceivably you might mean Reverend?
|
Learning parser in python
|
I recall having read about a parser to which you just feed some sample lines for it to learn how to parse some text.
It just determines the difference between two lines to know what the variable parts are. I thought it was written in Python, but I'm not sure. Does anyone know what library that was?
|
[
"Probably you mean TemplateMaker, I haven't tried it yet, but it builds on well-researched longest-common-substring algorithms and thus should work reasonably... If you are interested in different (more complex) approaches, you can easily find a lot of material on Google Scholar using the query \"wrapper induction\" or \"template induction\".\n",
"Conceivably you might mean Reverend?\n"
] |
[
10,
2
] |
[] |
[] |
[
"parsing",
"python"
] |
stackoverflow_0000921792_parsing_python.txt
|
Q:
Is there a better way than int( byte_buffer.encode('hex'), 16 )
In Python, I'm constantly using the following sequence to get an integer value from a byte buffer (in Python this is a str).
I'm getting the buffer from the struct.unpack() routine. When I unpack a 'char' using
byte_buffer, = struct.unpack('c', raw_buffer)
int_value = int( byte_buffer.encode('hex'), 16 )
Is there a better way?
A:
The struct module is good at unpacking binary data.
int_value = struct.unpack('>I', byte_buffer)[0]
A:
Bounded to 1 byte – Noah Campbell 18 mins ago
The best way to do this then is to instantiate a struct unpacker.
from struct import Struct
unpacker = Struct("b")
unpacker.unpack("z")[0]
Note that you can change "b" to "B" if you want an unsigned byte. Also, endian format is not needed.
For anyone else who wants to know a method for unbounded integers, create a question, and tell me in the comments.
A:
If we're talking about getting the integer value of a byte, then you want this:
ord(byte_buffer)
Can't understand why it isn't already suggested.
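For a one-byte buffer all of the approaches in this thread agree - a quick comparison sketch (Python 2, where the buffer is a str):
import struct

raw = 'A'                              # one byte from somewhere
print ord(raw)                         # 65
print struct.unpack('B', raw)[0]       # 65, unsigned byte
print struct.unpack('b', raw)[0]       # 65, signed byte
print int(raw.encode('hex'), 16)       # 65, the original roundabout way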
|
Is there a better way than int( byte_buffer.encode('hex'), 16 )
|
In Python, I'm constantly using the following sequence to get an integer value from a byte buffer (in Python this is a str).
I'm getting the buffer from the struct.unpack() routine. When I unpack a 'char' using
byte_buffer, = struct.unpack('c', raw_buffer)
int_value = int( byte_buffer.encode('hex'), 16 )
Is there a better way?
|
[
"The struct module is good at unpacking binary data.\nint_value = struct.unpack('>I', byte_buffer)[0]\n\n",
"\nBounded to 1 byte – Noah Campbell 18 mins ago\n\nThe best way to do this then is to instantiate a struct unpacker.\nfrom struct import Struct\n\nunpacker = Struct(\"b\")\nunpacker.unpack(\"z\")[0]\n\nNote that you can change \"b\" to \"B\" if you want an unsigned byte. Also, endian format is not needed.\nFor anyone else who wants to know a method for unbounded integers, create a question, and tell me in the comments.\n",
"If we're talking about getting the integer value of a byte, then you want this:\nord(byte_buffer)\n\nCan't understand why it isn't already suggested.\n"
] |
[
6,
2,
1
] |
[] |
[] |
[
"byte",
"integer",
"python",
"types"
] |
stackoverflow_0000918754_byte_integer_python_types.txt
|
Q:
Python, SimPy: How to generate a value from a triangular probability distribution?
I want to run a simulation that uses as a parameter a value generated from a triangular probability distribution with lower limit A, mode B and upper limit C. How can I generate this value in Python? Is there something as simple as expovariate(lambda) (from random) for this distribution or do I have to code this thing?
A:
If you download the NumPy package, it has a function numpy.random.triangular(left, mode, right[, size]) that does exactly what you are looking for.
A:
Since I was checking random's documentation from Python 2.4, I missed this:
random.triangular(low, high, mode)¶
Return a random floating point number N such that low <= N <= high and with the specified mode between those bounds. The low and high bounds default to zero and one. The mode argument defaults to the midpoint between the bounds, giving a symmetric distribution.
New in version 2.6.
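So with 2.6 or later the question reduces to a one-liner; using the A/B/C naming from the question (note that mode is the third argument):
import random

value = random.triangular(1, 6, 5)   # low=A, high=C, mode=B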
A:
Let's say that your distribution wasn't handled by NumPy or the Python Standard Library.
In situations where performance is not very important, rejection sampling is a useful hack for getting draws from a distribution you don't have using one you do have.
For your triangular distribution, you could do something like
from random import random, uniform
def random_triangular(low, high, mode):
    while True:
        proposal = uniform(low, high)
        if proposal < mode:
            acceptance_prob = (proposal - low) / (mode - low)
        else:
            acceptance_prob = (high - proposal) / (high - mode)
        if random() < acceptance_prob: break
    return proposal
You can plot some samples
pylab.hist([random_triangular(1, 6, 5) for t in range(10000)])
to make sure that everything looks okay.
|
Python, SimPy: How to generate a value from a triangular probability distribution?
|
I want to run a simulation that uses as a parameter a value generated from a triangular probability distribution with lower limit A, mode B and upper limit C. How can I generate this value in Python? Is there something as simple as expovariate(lambda) (from random) for this distribution or do I have to code this thing?
|
[
"If you download the NumPy package, it has a function numpy.random.triangular(left, mode, right[, size]) that does exactly what you are looking for.\n",
"Since, I was checking random's documentation from Python 2.4 I missed this:\nrandom.triangular(low, high, mode)¶\n Return a random floating point number N such that low <= N <= high and with the specified mode between those bounds. The low and high bounds default to zero and one. The mode argument defaults to the midpoint between the bounds, giving a symmetric distribution.\n New in version 2.6.\n",
"Let's say that your distribution wasn't handled by NumPy or the Python Standard Library.\nIn situations where performance is not very important, rejection sampling is a useful hack for getting draws from a distribution you don't have using one you do have.\nFor your triangular distribution, you could do something like\nfrom random import random, uniform\n\ndef random_triangular(low, high, mode):\n while True:\n proposal = uniform(low, high)\n if proposal < mode:\n acceptance_prob = (proposal - low) / (mode - low)\n else:\n acceptance_prob = (high - proposal) / (high - mode)\n if random() < acceptance_prob: break\n return proposal\n\nYou can plot some samples\npylab.hist([random_triangular(1, 6, 5) for t in range(10000)])\n\nto make sure that everything looks okay.\n"
] |
[
9,
6,
3
] |
[] |
[] |
[
"distribution",
"probability",
"python",
"simpy"
] |
stackoverflow_0000815969_distribution_probability_python_simpy.txt
|
Q:
How can I capture the stdout output of a child process?
I'm trying to write a program in Python and I'm told to run an .exe file. When this .exe file is run it spits out a lot of data and I need a certain line printed out to the screen. I'm pretty sure I need to use subprocess.Popen or something similar but I'm new to subprocess and have no clue. Anyone have an easy way for me to get this done?
A:
@Paolo's solution is perfect if you are interested in printing output after the process has finished executing. In case you want to poll output while the process is running you have to do it this way:
import subprocess
import sys

process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
while True:
    out = process.stdout.readline()
    if out == '' and process.poll() is not None:
        break
    if out.startswith('myline'):
        sys.stdout.write(out)
        sys.stdout.flush()
A:
Something like this:
import subprocess
process = subprocess.Popen(["yourcommand"], stdout=subprocess.PIPE)
result = process.communicate()[0]
|
How can I capture the stdout output of a child process?
|
I'm trying to write a program in Python and I'm told to run an .exe file. When this .exe file is run it spits out a lot of data and I need a certain line printed out to the screen. I'm pretty sure I need to use subprocess.Popen or something similar but I'm new to subprocess and have no clue. Anyone have an easy way for me to get this done?
|
[
"@Paolo's solution is perfect if you are interested in printing output after the process has finished executing. In case you want to poll output while the process is running you have to do it this way:\nprocess = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n\nwhile True:\n out = process.stdout.readline(1)\n if out == '' and process.poll() != None:\n break\n if out.startswith('myline'):\n sys.stdout.write(out)\n sys.stdout.flush()\n\n",
"Something like this:\nimport subprocess\nprocess = subprocess.Popen([\"yourcommand\"], stdout=subprocess.PIPE)\nresult = process.communicate()[0]\n\n"
] |
[
28,
21
] |
[] |
[] |
[
"python",
"stdout",
"subprocess"
] |
stackoverflow_0000923079_python_stdout_subprocess.txt
|
Q:
Writing a kernel mode profiler for processes in python
I would like to seek some guidance in writing a "process profiler" which runs in kernel mode. The reason I am asking for a kernel mode profiler is that I run loads of applications and I do not want my profiler to be swapped out.
When I say "process profiler" I mean something that would monitor resource usage by the process, including usage of threads and their statistics.
And I wish to write this in Python. Point me to some modules or helpful resources.
Please provide me guidance/suggestion for doing it.
Thanks,
Edit: I would like to add that currently my interest is to write only for Linux; however, after I build it I will have to support Windows.
A:
It's going to be very difficult to do the process monitoring part in Python, since the python interpreter doesn't run in the kernel.
I suspect there are two easy approaches to this:
use the /proc filesystem if you have one (you don't mention your OS)
Use dtrace if you have dtrace (again, without the OS, who knows.)
Okay, following up after the edit.
First, there's no way you're going to be able to write code that runs in the kernel, in python, and is portable between Linux and Windows. Or at least if you were to, it would be a hack that would live in glory forever.
That said, though, if your purpose is to process Python, there are a lot of Python tools available to get information from the Python interpreter at run time.
If instead your desire is to get process information from other processes in general, you're going to need to examine the options available to you in the various OS APIs. Linux has a /proc filesystem; that's a useful start. I suspect Windows has similar APIs, but I don't know them.
If you have to write kernel code, you'll almost certainly need to write it in C or C++.
A:
don't try and get python running in kernel space!
You would be much better using an existing tool and getting it to spit out XML that can be sucked into Python. I wouldn't want to port the Python interpreter to kernel-mode (it sounds grim writing it).
The /proc option does sound good.
Some code that reads proc information to determine memory usage and such should get you going:
http://www.pixelbeat.org/scripts/ps_mem.py reads memory information of processes using Python through /proc/smaps like charlie suggested.
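To give a flavour of the /proc approach, a minimal sketch that reads a process's status fields on Linux (the field names come from the kernel; the helper itself is made up):
import os

def process_status(pid):
    # /proc/<pid>/status holds "Key:<tab>value" lines
    info = {}
    for line in open('/proc/%d/status' % pid):
        key, _, value = line.partition(':')
        info[key] = value.strip()
    return info

print process_status(os.getpid()).get('VmRSS')   # e.g. resident memory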
A:
have you looked at PSI? (http://www.psychofx.com/psi/)
"PSI is a Python module providing direct access to real-time system and process information. PSI is a Python C extension, providing the most efficient access to system information directly from system calls."
it might give you what you are looking for. .... or at least a starting point.
Edit 2014:
I'd recommend checking out psutil instead:
https://pypi.python.org/pypi/psutil
psutil is actively maintained and has some nifty process monitoring features. PSI seems to be somewhat dead (last release 2009).
A:
Some of your comments on other answers suggest that you are a relatively inexperienced programmer. Therefore I would strongly suggest that you stay away from kernel programming, as it is very hard even for experienced programmers.
Why would you want to write something that
is a very complex system (just look at existing profiling infrastructures and how complex they are)
can not be done in python (I don't know any kernel that would allow execution of python in kernel mode)
already exists (oprofile on Linux)
|
Writing a kernel mode profiler for processes in python
|
I would like to seek some guidance in writing a "process profiler" which runs in kernel mode. The reason I am asking for a kernel mode profiler is that I run loads of applications and I do not want my profiler to be swapped out.
When I say "process profiler" I mean something that would monitor resource usage by the process, including usage of threads and their statistics.
And I wish to write this in Python. Point me to some modules or helpful resources.
Please provide me guidance/suggestion for doing it.
Thanks,
Edit: I would like to add that currently my interest is to write only for Linux; however, after I build it I will have to support Windows.
|
[
"It's going to be very difficult to do the process monitoring part in Python, since the python interpreter doesn't run in the kernel.\nI suspect there are two easy approaches to this:\n\nuse the /proc filesystem if you have one (you don't mention your OS)\nUse dtrace if you have dtrace (again, without the OS, who knows.)\n\n\nOkay, following up after the edit.\nFirst, there's no way you're going to be able to write code that runs in the kernel, in python, and is portable between Linux and Windows. Or at least if you were to, it would be a hack that would live in glory forever.\nThat said, though, if your purpose is to process Python, there are a lot of Python tools available to get information from the Python interpreter at run time.\nIf instead your desire is to get process information from other processes in general, you're going to need to examine the options available to you in the various OS APIs. Linux has a /proc filesystem; that's a useful start. I suspect Windows has similar APIs, but I don't know them.\nIf you have to write kernel code, you'll almost certainly need to write it in C or C++.\n",
"don't try and get python running in kernel space!\nYou would be much better using an existing tool and getting it to spit out XML that can be sucked into Python. I wouldn't want to port the Python interpreter to kernel-mode (it sounds grim writing it).\nThe /proc option does sound good.\nsome code code that reads proc information to determine memory usage and such. Should get you going:\nhttp://www.pixelbeat.org/scripts/ps_mem.py reads memory information of processes using Python through /proc/smaps like charlie suggested.\n",
"have you looked at PSI? (http://www.psychofx.com/psi/)\n\"PSI is a Python module providing direct access to real-time system and process information. PSI is a Python C extension, providing the most efficient access to system information directly from system calls.\"\nit might give you what you are looking for. .... or at least a starting point.\n\nEdit 2014:\nI'd recommend checking out psutil instead:\nhttps://pypi.python.org/pypi/psutil\npsutil is actively maintained and has some nifty process monitoring features. PSI seems to be somewhat dead (last release 2009).\n",
"Some of your comments on other answers suggest that you are a relatively inexperienced programmer. Therefore I would strongly suggest that you stay away from kernel programming, as it is very hard even for experienced programmers.\nWhy would you want to write something that\n\nis a very complex system (just look at existing profiling infrastructures and how complex they are)\ncan not be done in python (I don't know any kernel that would allow execution of python in kernel mode)\nalready exists (oprofile on Linux)\n\n"
] |
[
7,
3,
0,
0
] |
[] |
[] |
[
"kernel",
"python"
] |
stackoverflow_0000922788_kernel_python.txt
|
Q:
Scraping Multiple html files to CSV
I am trying to scrape rows off of over 1200 .htm files that are on my hard drive. On my computer they are here 'file:///home/phi/Data/NHL/pl07-08/PL020001.HTM'. These .htm files are sequential from *20001.htm until *21230.htm. My plan is to eventually toss my data in MySQL or SQLite via a spreadsheet app or just straight in if I can get a clean .csv file out of this process.
This is my first attempt at code (Python), scraping, and I just installed Ubuntu 9.04 on my crappy Pentium IV. Needless to say I am a newb and have some roadblocks.
How do I get mechanize to go through all the files in the directory in order? Can mechanize even do this? Can mechanize/Python/BeautifulSoup read a 'file:///' style url or is there another way to point it to /home/phi/Data/NHL/pl07-08/PL020001.HTM? Is it smart to do this in 100 or 250 file increments or just send all 1230?
I just need rows that start with this "<tr class="evenColor">" and end with this "</tr>". Ideally I only want the rows that contain "SHOT"|"MISS"|"GOAL" within them but I want the whole row (every column). Note that "GOAL" is in bold so do I have to specify this? There are 3 tables per htm file.
Also I would like the name of the parent file (pl020001.htm) to be included in the rows I scrape so I can id them in their own column in the final database. I don't even know where to begin for that. This is what I have so far:
#/usr/bin/python
from BeautifulSoup import BeautifulSoup
import re
from mechanize import Browser
mech = Browser()
url = "file:///home/phi/Data/NHL/pl07-08/PL020001.HTM"
##but how do I do multiple urls/files? PL02*.HTM?
page = mech.open(url)
html = page.read()
soup = BeautifulSoup(html)
##this confuses me and seems redundant
pl = open("input_file.html","r")
chances = open("chancesforsql.csv,"w")
table = soup.find("table", border=0)
for row in table.findAll 'tr class="evenColor"'
#should I do this instead of before?
outfile = open("shooting.csv", "w")
##how do I end it?
Should I be using IDLE or something like it? just Terminal in Ubuntu 9.04?
A:
You won't need mechanize. Since I do not exactly know the HTML content, I'd try to see what matches, first. Like this:
import glob
from BeautifulSoup import BeautifulSoup
for filename in glob.glob('/home/phi/Data/*.htm'):
soup = BeautifulSoup(open(filename, "r").read()) # assuming some HTML
for a_tr in soup.findAll("tr", attrs={ "class" : "evenColor" }):
print a_tr
Then pick the stuff you want and write it to stdout with commas (and redirect it > to a file). Or write the csv via python.
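Roughly, writing the csv from that loop could look like this - a sketch only: it assumes the rows use ordinary <td> cells, and the filter terms come from the question:
import csv, glob, os, re
from BeautifulSoup import BeautifulSoup

out = csv.writer(open('/home/phi/Data/NHL/chances.csv', 'wb'))
wanted = re.compile('SHOT|MISS|GOAL')

for filename in glob.glob('/home/phi/Data/NHL/pl07-08/PL02*.HTM'):
    soup = BeautifulSoup(open(filename, 'r').read())
    for a_tr in soup.findAll('tr', attrs={'class': 'evenColor'}):
        cells = [''.join(td.findAll(text=True)) for td in a_tr.findAll('td')]
        if wanted.search(' '.join(cells)):
            # prepend the source file name so each row can be identified later
            out.writerow([os.path.basename(filename)] + cells)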
A:
MYYN's answer looks like a great start to me. One thing I'd point out that I've had luck with is:
import glob

for file_name in glob.glob('/home/phi/Data/*.htm'):
    #read the file and then parse with BeautifulSoup
I've found both the os and glob imports to be really useful for running through files in a directory.
Also, once you're using a for loop in this way, you have the file_name which you can modify for use in the output file, so that the output filenames will match the input filenames.
|
Scraping Multiple html files to CSV
|
I am trying to scrape rows off of over 1200 .htm files that are on my hard drive. On my computer they are here 'file:///home/phi/Data/NHL/pl07-08/PL020001.HTM'. These .htm files are sequential from *20001.htm until *21230.htm. My plan is to eventually toss my data in MySQL or SQLite via a spreadsheet app or just straight in if I can get a clean .csv file out of this process.
This is my first attempt at code (Python), scraping, and I just installed Ubuntu 9.04 on my crappy Pentium IV. Needless to say I am a newb and have some roadblocks.
How do I get mechanize to go through all the files in the directory in order? Can mechanize even do this? Can mechanize/Python/BeautifulSoup read a 'file:///' style url or is there another way to point it to /home/phi/Data/NHL/pl07-08/PL020001.HTM? Is it smart to do this in 100 or 250 file increments or just send all 1230?
I just need rows that start with this "<tr class="evenColor">" and end with this "</tr>". Ideally I only want the rows that contain "SHOT"|"MISS"|"GOAL" within them but I want the whole row (every column). Note that "GOAL" is in bold so do I have to specify this? There are 3 tables per htm file.
Also I would like the name of the parent file (pl020001.htm) to be included in the rows I scrape so I can id them in their own column in the final database. I don't even know where to begin for that. This is what I have so far:
#/usr/bin/python
from BeautifulSoup import BeautifulSoup
import re
from mechanize import Browser
mech = Browser()
url = "file:///home/phi/Data/NHL/pl07-08/PL020001.HTM"
##but how do I do multiple urls/files? PL02*.HTM?
page = mech.open(url)
html = page.read()
soup = BeautifulSoup(html)
##this confuses me and seems redundant
pl = open("input_file.html","r")
chances = open("chancesforsql.csv,"w")
table = soup.find("table", border=0)
for row in table.findAll 'tr class="evenColor"'
#should I do this instead of before?
outfile = open("shooting.csv", "w")
##how do I end it?
Should I be using IDLE or something like it? just Terminal in Ubuntu 9.04?
|
[
"You won't need mechanize. Since I do not exactly know the HTML content, I'd try to see what matches, first. Like this: \nimport glob\nfrom BeautifulSoup import BeautifulSoup\n\nfor filename in glob.glob('/home/phi/Data/*.htm'):\n soup = BeautifulSoup(open(filename, \"r\").read()) # assuming some HTML\n for a_tr in soup.findAll(\"tr\", attrs={ \"class\" : \"evenColor\" }):\n print a_tr\n\nThen pick the stuff you want and write it to stdout with commas (and redirect it > to a file). Or write the csv via python.\n",
"MYYN's answer looks like a great start to me. One thing I'd point out that I've had luck with is:\nimport glob\n for file_name in glob.glob('/home/phi/Data/*.htm'):\n #read the file and then parse with BeautifulSoup\n\nI've found both the os and glob imports to be really useful for running through files in a directory.\nAlso, once you're using a for loop in this way, you have the file_name which you can modify for use in the output file, so that the output filenames will match the input filenames.\n"
] |
[
1,
0
] |
[] |
[] |
[
"beautifulsoup",
"mechanize",
"python",
"screen_scraping",
"sqlite"
] |
stackoverflow_0000923318_beautifulsoup_mechanize_python_screen_scraping_sqlite.txt
|
Q:
Matching text within P tags in HTML
I'd like to match the contents within each paragraph in HTML using a Python regular expression. These paragraphs always have BR tags inside them like so:
<p class="thisClass">this is nice <br /><br /> isn't it?</p>
I'm currently using this pattern:
pattern = re.compile('<p class=\"thisClass\">(.*?)<\/p>')
Then I'm using:
pattern.findall(html)
to find all the matches. However, it only matches two of 28 paragraphs I have, and it looks like that's because those two don't have BR tags inside of them and the rest do. What am I doing wrong? What can I do to fix it? Thanks!
A:
I don't think it is failing because of the <br/> but rather because the paragraph is spread across multiple lines. Use the DOTALL mode to fix this:
pattern = re.compile('<p class=\"thisClass\">(.*?)<\/p>', re.DOTALL)
A:
It turns out the answer was to include re.S as a flag which allows the "." character to match newlines as well.
pattern = re.compile('<p class=\"thisClass\">(.*?)<\/p>', re.S)
This works perfectly.
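For what it's worth, both answers set the very same flag - re.S is just the short alias for re.DOTALL:
import re
assert re.S == re.DOTALL   # same flag, two names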
|
Matching text within P tags in HTML
|
I'd like to match the contents within each paragraph in HTML using a Python regular expression. These paragraphs always have BR tags inside them like so:
<p class="thisClass">this is nice <br /><br /> isn't it?</p>
I'm currently using this pattern:
pattern = re.compile('<p class=\"thisClass\">(.*?)<\/p>')
Then I'm using:
pattern.findall(html)
to find all the matches. However, it only matches two of the 28 paragraphs I have, and it looks like that's because those two don't have BR tags inside of them and the rest do. What am I doing wrong? What can I do to fix it? Thanks!
|
[
"I don't think it is failing because of the <br/> but rather because the paragraph is spread across multiple lines. Use the DOTALL mode to fix this:\npattern = re.compile('<p class=\\\"thisClass\\\">(.*?)<\\/p>', re.DOTALL)\n\n",
"It turns out the answer was to include re.S as a flag which allows the \".\" character to match newlines as well.\npattern = re.compile('<p class=\\\"thisClass\\\">(.*?)<\\/p>', re.S)\n\nThis works perfectly.\n"
] |
[
5,
3
] |
[] |
[] |
[
"html",
"python",
"regex"
] |
stackoverflow_0000923472_html_python_regex.txt
|
Q:
Python RegEx - Getting multiple pieces of information out of a string
I'm trying to use Python to parse a log file and match 4 pieces of information in one regex (epoch time, SERVICE NOTIFICATION, hostname, and CRITICAL). I can't seem to get this to work. So far I've been able to match only two of the four. Is it possible to do this? Below is an example of a string from the log file and the code I've gotten to work thus far. Any help would make me a happy noob.
[1242248375] SERVICE ALERT: myhostname.com;DNS: Recursive;CRITICAL;SOFT;1;CRITICAL - Plugin timed out while executing system call
hostname = options.hostname
n = open('/var/tmp/nagios.log', 'r')
n.readline()
l = [str(x) for x in n]
for line in l:
match = re.match (r'^\[(\d+)\] SERVICE NOTIFICATION: ', line)
if match:
timestamp = int(match.groups()[0])
print timestamp
A:
You can use | to match any one of various possible things, and re.findall to get all non-overlapping matches to some RE.
A:
The question is a bit confusing. But you don't need to do everything with regular expressions, there are some good plain old string functions you might want to try, like 'split'.
This version will also refrain from loading the entire file in memory at once, and it will close the file even when an exception is thrown.
regexp = re.compile(r'\[(\d+)\] SERVICE NOTIFICATION: (.+)')
with open('var/tmp/nagios.log', 'r') as file:
for line in file:
fields = line.split(';')
match = regexp.match(fields[0])
if match:
timestamp = int(match.group(1))
hostname = match.group(2)
A:
You can use more than one group at a time, e.g.:
import re
logstring = '[1242248375] SERVICE ALERT: myhostname.com;DNS: Recursive;CRITICAL;SOFT;1;CRITICAL - Plugin timed out while executing system call'
exp = re.compile('^\[(\d+)\] ([A-Z ]+): ([A-Za-z0-9.\-]+);[^;]+;([A-Z]+);')
m = exp.search(logstring)
for s in m.groups():
print s
A:
If you are looking to split out those particular parts of the line then.
Something along the lines of:
match = re.match(r'^\[(\d+)\] (.*?): (.*?);.*?;(.*?);',line)
Should give each of those parts in their respective index in groups.
A:
Could it be as simple as "SERVICE NOTIFICATION" in your pattern doesn't match "SERVICE ALERT" in your example?
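Putting the answers together (multiple groups, plus matching ALERT rather than NOTIFICATION), here is a sketch against the question's sample line; the character classes are assumptions about the log format:
import re

line = ('[1242248375] SERVICE ALERT: myhostname.com;DNS: Recursive;'
        'CRITICAL;SOFT;1;CRITICAL - Plugin timed out while executing system call')
match = re.match(r'^\[(\d+)\] (SERVICE \w+): ([^;]+);[^;]+;([A-Z]+);', line)
if match:
    timestamp, event, hostname, state = match.groups()
    print timestamp, event, hostname, state
# prints: 1242248375 SERVICE ALERT myhostname.com CRITICAL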
|
Python RegEx - Getting multiple pieces of information out of a string
|
I'm trying to use Python to parse a log file and match 4 pieces of information in one regex (epoch time, SERVICE NOTIFICATION, hostname, and CRITICAL). I can't seem to get this to work. So far I've been able to match only two of the four. Is it possible to do this? Below is an example of a string from the log file and the code I've gotten to work thus far. Any help would make me a happy noob.
[1242248375] SERVICE ALERT: myhostname.com;DNS: Recursive;CRITICAL;SOFT;1;CRITICAL - Plugin timed out while executing system call
hostname = options.hostname
n = open('/var/tmp/nagios.log', 'r')
n.readline()
l = [str(x) for x in n]
for line in l:
match = re.match (r'^\[(\d+)\] SERVICE NOTIFICATION: ', line)
if match:
timestamp = int(match.groups()[0])
print timestamp
|
[
"You can use | to match any one of various possible things, and re.findall to get all non-overlapping matches to some RE.\n",
"The question is a bit confusing. But you don't need to do everything with regular expressions, there are some good plain old string functions you might want to try, like 'split'.\nThis version will also refrain from loading the entire file in memory at once, and it will close the file even when an exception is thrown. \nregexp = re.compile(r'\\[(\\d+)\\] SERVICE NOTIFICATION: (.+)')\nwith open('var/tmp/nagios.log', 'r') as file:\n for line in file:\n fields = line.split(';')\n match = regexp.match(fields[0])\n if match:\n timestamp = int(match.group(1))\n hostname = match.group(2)\n\n",
"You can use more than one group at a time, e.g.:\nimport re\n\nlogstring = '[1242248375] SERVICE ALERT: myhostname.com;DNS: Recursive;CRITICAL;SOFT;1;CRITICAL - Plugin timed out while executing system call'\nexp = re.compile('^\\[(\\d+)\\] ([A-Z ]+): ([A-Za-z0-9.\\-]+);[^;]+;([A-Z]+);')\nm = exp.search(logstring)\n\nfor s in m.groups():\n print s\n\n",
"If you are looking to split out those particular parts of the line then.\nSomething along the lines of:\nmatch = re.match(r'^\\[(\\d+)\\] (.*?): (.*?);.*?;(.*?);',line)\n\nShould give each of those parts in their respective index in groups.\n",
"Could it be as simple as \"SERVICE NOTIFICATION\" in your pattern doesn't match \"SERVICE ALERT\" in your example?\n"
] |
[
6,
2,
2,
1,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000924127_python_regex.txt
|
Q:
Unicode problems in PyObjC
I am trying to figure out PyObjC on Mac OS X, and I have written a simple program to print out the names in my Address Book. However, I am having some trouble with the encoding of the output.
#! /usr/bin/env python
# -*- coding: UTF-8 -*-
from AddressBook import *
ab = ABAddressBook.sharedAddressBook()
people = ab.people()
for person in people:
name = person.valueForProperty_("First") + ' ' + person.valueForProperty_("Last")
name
when I run this program, the output looks something like this:
...snip...
u'Jacob \xc5berg'
u'Fernando Gonzales'
...snip...
Could someone please explain why the strings are in unicode, but the content looks like that?
I have also noticed that when I try to print the name I get the error
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc5' in position 6: ordinal not in range(128)
A:
# -*- coding: UTF-8 -*-
only affects the way Python decodes comments and string literals in your source, not the way standard output is configured, etc, etc. If you set your Mac's Terminal to UTF-8 (Terminal, Preferences, Settings, Advanced, International dropdown) and emit Unicode text to it after encoding it in UTF-8 (print name.encode("utf-8")), you should be fine.
A:
If you run the code in your question in the interactive console the interpreter will print the repr of "name" because of the last statement of the loop.
If you change the last line of the loop from just "name" to "print name" the output should be fine. I've tested this with Terminal.app on a 10.5.7 system.
A:
Just writing the variable name sends repr(name) to the standard output and repr() encodes all unicode values.
print tries to convert u'Jacob \xc5berg' to ASCII, which doesn't work. Try writing it to a file.
See Print Fails on the python wiki.
That means you're using legacy,
limited or misconfigured console. If
you're just trying to play with
unicode at interactive prompt move to
a modern unicode-aware console. Most
modern Python distributions come with
IDLE where you'll be able to print all
unicode characters.
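A minimal sketch of the write-it-to-a-file suggestion using codecs, which encodes transparently (the output filename is invented; the AddressBook calls are taken from the question):
import codecs
from AddressBook import *

ab = ABAddressBook.sharedAddressBook()
out = codecs.open("names.txt", "w", encoding="utf-8")
for person in ab.people():
    # assumes both properties are set; real entries can return None
    name = person.valueForProperty_("First") + u' ' + person.valueForProperty_("Last")
    out.write(name + u'\n')
out.close()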
A:
Convert it to a unicode string through:
print unicode(name)
|
Unicode problems in PyObjC
|
I am trying to figure out PyObjC on Mac OS X, and I have written a simple program to print out the names in my Address Book. However, I am having some trouble with the encoding of the output.
#! /usr/bin/env python
# -*- coding: UTF-8 -*-
from AddressBook import *
ab = ABAddressBook.sharedAddressBook()
people = ab.people()
for person in people:
name = person.valueForProperty_("First") + ' ' + person.valueForProperty_("Last")
name
when I run this program, the output looks something like this:
...snip...
u'Jacob \xc5berg'
u'Fernando Gonzales'
...snip...
Could someone please explain why the strings are in unicode, but the content looks like that?
I have also noticed that when I try to print the name I get the error
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc5' in position 6: ordinal not in range(128)
|
[
"# -*- coding: UTF-8 -*-\n\nonly affects the way Python decodes comments and string literals in your source, not the way standard output is configured, etc, etc. If you set your Mac's Terminal to UTF-8 (Terminal, Preferences, Settings, Advanced, International dropdown) and emit Unicode text to it after encoding it in UTF-8 (print name.encode(\"utf-8\")), you should be fine.\n",
"If you run the code in your question in the interactive console the interpreter will print the repr of \"name\" because of the last statement of the loop.\nIf you change the last line of the loop from just \"name\" to \"print name\" the output should be fine. I've tested this with Terminal.app on a 10.5.7 system. \n",
"Just writing the variable name sends repr(name) to the standard output and repr() encodes all unicode values.\nprint tries to convert u'Jacob \\xc5berg' to ASCII, which doesn't work. Try writing it to a file.\nSee Print Fails on the python wiki.\n\nThat means you're using legacy,\n limited or misconfigured console. If\n you're just trying to play with\n unicode at interactive prompt move to\n a modern unicode-aware console. Most\n modern Python distributions come with\n IDLE where you'll be able to print all\n unicode characters.\n\n",
"Convert it to a unicode string through:\nprint unicode(name)\n\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"macos",
"pyobjc",
"python",
"unicode"
] |
stackoverflow_0000922562_macos_pyobjc_python_unicode.txt
|
Q:
Python taskbar applet
I want to code up a panel that will be used both in Linux and Windows. Ideally it will be written in Python using PyQt.
What I've found so far is the QSystemTrayIcon widget, and while that is quite useful, that's not quite what I'm looking for. That widget lets you attach a menu to the left and right clicks of an icon on the system tray and then you can have a dialog open in certain situations.
I'm looking for something that will let me write up something like the tools that Gnome lets you add to the taskbar (they call them panels). Such as a weather feed, or processor usage, right on the taskbar. And also not in the system tray area.
I'm writing more of a tool than something reflects a status.
I know that I could write this natively in both OSes using GTK and its ilk, but is there any way to write it in PyQt or wxWidgets so I don't have to deal with dependency issues?
A:
Widgets inside the GNOME panel are called applets, and to my knowledge it's not possible to write them with anything but Gtk, since you have to use the respective GNOME library libpanel-applet (in either C, C++ or Python).
System tray icons are different, because they only allow icons to be displayed inside the notification area, since Windows only supports icons there.
The panel mechanism on Windows (Vista; XP only has the notification area) is quite different, I would assume. Unless somebody has already written a library that abstracts the differences between the GNOME panel and the Vista sidebar, you would have to do that yourself.
|
Python taskbar applet
|
I want to code up a panel that will be used both in Linux and Windows. Ideally it will be written in Python using PyQt.
What I've found so far is the QSystemTrayIcon widget, and while that is quite useful, that's not quite what I'm looking for. That widget lets you attach a menu to the left and right clicks of an icon on the system tray and then you can have a dialog open in certain situations.
I'm looking for something that will let me write up something like the tools that Gnome lets you add to the taskbar (they call them panels). Such as a weather feed, or processor usage, right on the taskbar. And also not in the system tray area.
I'm writing more of a tool than something reflects a status.
I know that I could write this natively in both OSes using GTK and its ilk, but is there any way to write it in PyQt or wxWidgets so I don't have to deal with dependency issues?
|
[
"Widgets inside the GNOME panel are called applets, and to my knowledge it's not possible to write them with anything but Gtk, since you have to use the respective GNOME library libpanel-applet (in either C, C++ or Python). \nSystem tray icons are different, because they only allow icons to be displayed inside the notification area, since Windows only supports icons there. \nThe panel mechanism on Windows (Vista, XP does only have the notification area) is quite different, I would assume. Unless somebody already wrote a library that abstracts the differences of the GNOME panel and the Vista side bar, you would have to do that yourself. \n"
] |
[
5
] |
[
"Sounds like you are looking for Plasmoids, which can be integrated into the task bar. There are a Plasmoid tutorials in C++ and Python.\nI can't say, however, whether it will work with KDE on Windows.\n"
] |
[
-1
] |
[
"pyqt",
"python",
"qt4"
] |
stackoverflow_0000923701_pyqt_python_qt4.txt
|
Q:
Capturing Implicit Signals of Interest in Django
To set the background: I'm interested in:
Capturing implicit signals of interest in books as users browse around a site. The site is written in Django (Python) using MySQL, memcached, nginx, and Apache
Let's say, for instance, my site sells books. As a user browses around my site I'd like to keep track of which books they've viewed, and how many times they've viewed them.
Not that I'd store the data this way, but ideally I could have on-the-fly access to a structure like:
{user_id : {book_id: number_of_views, book_id_2: number_of_views}}
I realize there are a few approaches here:
Some flat-file log
Writing an object to a database every time
Writing to an object in memcached
I don't really know the performance implications, but I'd rather not write to a database on every single page view. The lag of writing to a log and computing the structure later seems too slow to give good on-the-fly recommendations as you use the site. The memcached approach seems fine, but there's a cost to keeping this object in memory: you might lose it, and it never gets written anywhere permanent.
What approach would you suggest? (doesn't have to be one of the above) Thanks!
A:
If this data is not an unimportant statistic that might or might not be available, I'd suggest taking the simple approach and using a model. It will surely hit the database every time.
Unless you are absolutely positively sure these queries are actually degrading overall experience there is no need to worry about it. Even if you optimize this one, there's a good chance other unexpected queries are wasting more CPU time. I assume you wouldn't be asking this question if you were testing all other queries. So why risk premature optimization on this one?
An advantage of the model approach would be having an API in place. When you have tested and decided to optimize you can keep this API and change the underlying model with something else (which will most probably be more complex than a model).
I'd definitely go with a model first and see how it performs. (and also how other parts of the project perform)
A:
What approach would you suggest? (doesn't have to be one of the above) Thanks!
Hmmm... this is like being in a four-walled room with only one door and saying you want to get out of the room, but not through the only door...
There was an article I was reading some time back (can't find the link now) that says memcached can handle huge sets of data in memory (Facebook uses it) with very little degradation in performance. My advice is to explore memcached further; I think it will do the trick.
A:
Either a document datastore (mongo/couchdb), or a persistent key value store (tokyodb, memcachedb etc) may be explored.
No definite recommendations from me as the final solution depends on multiple factors - load, your willingness to learn/deploy a new technology, size of the data...
A:
Seems to me that one approach could be to use memcached to keep the counter, but have a cron running regularly to store the value from memcached to the db or disk. That way you'd get all the performance of memcached, but in the case of a crash you wouldn't lose more than a couple of minutes' data.
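A rough sketch of that counter-plus-cron idea using Django's cache API (incr is available in newer Django versions; the key scheme and function name are invented):
from django.core.cache import cache

def record_view(user_id, book_id):
    # bump the counter in memcached; a cron job later copies it to the db
    key = 'views:%s:%s' % (user_id, book_id)
    try:
        cache.incr(key)
    except ValueError:  # incr raises ValueError if the key does not exist yet
        cache.add(key, 1)
One caveat: memcached can't enumerate its keys, so the cron job needs its own list of (user, book) pairs to look up, for example from a lightweight log table.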
|
Capturing Implicit Signals of Interest in Django
|
To set the background: I'm interested in:
Capturing implicit signals of interest in books as users browse around a site. The site is written in Django (Python) using MySQL, memcached, nginx, and Apache
Let's say, for instance, my site sells books. As a user browses around my site I'd like to keep track of which books they've viewed, and how many times they've viewed them.
Not that I'd store the data this way, but ideally I could have on-the-fly access to a structure like:
{user_id : {book_id: number_of_views, book_id_2: number_of_views}}
I realize there are a few approaches here:
Some flat-file log
Writing an object to a database every time
Writing to an object in memcached
I don't really know the performance implications, but I'd rather not write to a database on every single page view. The lag of writing to a log and computing the structure later seems too slow to give good on-the-fly recommendations as you use the site. The memcached approach seems fine, but there's a cost to keeping this object in memory: you might lose it, and it never gets written anywhere permanent.
What approach would you suggest? (doesn't have to be one of the above) Thanks!
|
[
"If this data is not an unimportant statistic that might or might not be available I'd suggest taking the simple approach and using a model. It will surely hit the database everytime. \nUnless you are absolutely positively sure these queries are actually degrading overall experience there is no need to worry about it. Even if you optimize this one, there's a good chance other unexpected queries are wasting more CPU time. I assume you wouldn't be asking this question if you were testing all other queries. So why risk premature optimization on this one?\nAn advantage of the model approach would be having an API in place. When you have tested and decided to optimize you can keep this API and change the underlying model with something else (which will most probably be more complex than a model).\nI'd definitely go with a model first and see how it performs. (and also how other parts of the project perform)\n",
"What approach would you suggest? (doesn't have to be one of the above) Thanks!\nhmmmm ...this like been in a four walled room with only one door and saying i want to get out of room but not through the only door...\nThere was an article i was reading sometime back (can't get the link now) that says memcache can handle huge (facebook uses it) sets of data in memory with very little degradation in performance...my advice is you will need to explore more on memcache, i think it will do the trick.\n",
"Either a document datastore (mongo/couchdb), or a persistent key value store (tokyodb, memcachedb etc) may be explored. \nNo definite recommendations from me as the final solution depends on multiple factors - load, your willingness to learn/deploy a new technology, size of the data...\n",
"Seems to me that one approach could be to use memcached to keep the counter, but have a cron running regularly to store the value from memcached to the db or disk. That way you'd get all the performance of memcached, but in the case of a crash you wouldn't lose more than a couple of minutes' data.\n"
] |
[
3,
1,
1,
0
] |
[] |
[] |
[
"collaborative_filtering",
"django",
"mysql",
"python"
] |
stackoverflow_0000924530_collaborative_filtering_django_mysql_python.txt
|
Q:
The right way to auto filter SQLAlchemy queries?
I've just introspected a pretty nasty schema from a CRM app with sqlalchemy. All of the tables have a deleted column on them and I wanted to auto filter all those entities and relations flagged as deleted. Here's what I came up with:
class CustomizableQuery(Query):
"""An overridden sqlalchemy.orm.query.Query to filter entities
Filters itself by BinaryExpressions
found in :attr:`CONDITIONS`
"""
CONDITIONS = []
def __init__(self, mapper, session=None):
super(CustomizableQuery, self).__init__(mapper, session)
for cond in self.CONDITIONS:
self._add_criterion(cond)
def _add_criterion(self, criterion):
criterion = self._adapt_clause(criterion, False, True)
if self._criterion is not None:
self._criterion = self._criterion & criterion
else:
self._criterion = criterion
And it's used like this:
class UndeletedContactQuery(CustomizableQuery):
CONDITIONS = [contacts.c.deleted != True]
def by_email(self, email_address):
return EmailInfo.query.by_module_and_address('Contacts', email_address).contact
def by_username(self, uname):
return self.filter_by(twod_username_c=uname).one()
class Contact(object):
query = session.query_property(UndeletedContactQuery)
Contact.query.by_email('[email protected]')
EmailInfo is the class that's mapped to the join table between emails and the other Modules that they're related to.
Here's an example of a mapper:
contacts_map = mapper(Contact, join(contacts, contacts_cstm), {
'_emails': dynamic_loader(EmailInfo,
foreign_keys=[email_join.c.bean_id],
primaryjoin=contacts.c.id==email_join.c.bean_id,
query_class=EmailInfoQuery),
})
class EmailInfoQuery(CustomizableQuery):
CONDITIONS = [email_join.c.deleted != True]
# More methods here
This gives me what I want in that I've filtered out all deleted Contacts. I can also use this as the query_class argument to dynamic_loader in my mappers - However...
Is there a better way to do this? I'm not really happy poking around in the internals of a complicated class like Query.
Has anyone solved this in a different way that they can share?
A:
You can map to a select. Like this:
mapper(EmailInfo, select([email_join], email_join.c.deleted == False))
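Spelled out with the question's tables (join(contacts, contacts_cstm) is the selectable from the question's mapper; the alias name is invented), a sketch might look like:
from sqlalchemy import join
from sqlalchemy.orm import mapper

# rows flagged as deleted never reach the ORM at all
undeleted = join(contacts, contacts_cstm).select(
    contacts.c.deleted == False).alias('undeleted_contacts')
contacts_map = mapper(Contact, undeleted)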
A:
I'd consider seeing if it was possible to create views for these tables that filter out the deleted elements, and then you might be able to map directly to that view instead of the underlying table, at least for querying operations. However I've never tried this myself!
|
The right way to auto filter SQLAlchemy queries?
|
I've just introspected a pretty nasty schema from a CRM app with sqlalchemy. All of the tables have a deleted column on them and I wanted to auto filter all those entities and relations flagged as deleted. Here's what I came up with:
class CustomizableQuery(Query):
"""An overridden sqlalchemy.orm.query.Query to filter entities
Filters itself by BinaryExpressions
found in :attr:`CONDITIONS`
"""
CONDITIONS = []
def __init__(self, mapper, session=None):
super(CustomizableQuery, self).__init__(mapper, session)
for cond in self.CONDITIONS:
self._add_criterion(cond)
def _add_criterion(self, criterion):
criterion = self._adapt_clause(criterion, False, True)
if self._criterion is not None:
self._criterion = self._criterion & criterion
else:
self._criterion = criterion
And it's used like this:
class UndeletedContactQuery(CustomizableQuery):
CONDITIONS = [contacts.c.deleted != True]
def by_email(self, email_address):
return EmailInfo.query.by_module_and_address('Contacts', email_address).contact
def by_username(self, uname):
return self.filter_by(twod_username_c=uname).one()
class Contact(object):
query = session.query_property(UndeletedContactQuery)
Contact.query.by_email('[email protected]')
EmailInfo is the class that's mapped to the join table between emails and the other Modules that they're related to.
Here's an example of a mapper:
contacts_map = mapper(Contact, join(contacts, contacts_cstm), {
'_emails': dynamic_loader(EmailInfo,
foreign_keys=[email_join.c.bean_id],
primaryjoin=contacts.c.id==email_join.c.bean_id,
query_class=EmailInfoQuery),
})
class EmailInfoQuery(CustomizableQuery):
CONDITIONS = [email_join.c.deleted != True]
# More methods here
This gives me what I want in that I've filtered out all deleted Contacts. I can also use this as the query_class argument to dynamic_loader in my mappers - However...
Is there a better way to do this? I'm not really happy poking around in the internals of a complicated class like Query.
Has anyone solved this in a different way that they can share?
|
[
"You can map to a select. Like this:\nmapper(EmailInfo, select([email_join], email_join.c.deleted == False))\n\n",
"I'd consider seeing if it was possible to create views for these tables that filter out the deleted elements, and then you might be able to map directly to that view instead of the underlying table, at least for querying operations. However I've never tried this myself!\n"
] |
[
7,
0
] |
[] |
[] |
[
"python",
"sqlalchemy",
"sugarcrm"
] |
stackoverflow_0000920724_python_sqlalchemy_sugarcrm.txt
|
Q:
Uploading multiple images in Django admin
I'm currently building a portfolio site for a client, and I'm having trouble with one small area. I want to be able to upload multiple images (varying number) inline for each portfolio item, and I can't see an obvious way to do it.
The most user-friendly way I can see would be a file upload form with a JavaScript control that allows the user to add more fields as required. Has anybody had any experience with an issue like this? Indeed, are there any custom libraries out there that would solve my problem?
I've had little call for modifying the admin tool before now, so I don't really know where to start.
Thank you to anybody who can shed some light.
A:
Photologue is a feature-rich photo app for Django. It lets you, for example, upload galleries as zip files (which in a sense means uploading multiple files at once), automatically creates thumbnails in different custom sizes, and can apply effects to images. I used it once on one project and the integration wasn't too hard.
A:
You can extend the Admin interface pretty easily using Javascript. There's a good article on doing exactly what you want with a bit of jQuery magic.
You would just have to throw all of his code into one Javascript file and then include the following in your admin.py:
class Photo(admin.ModelAdmin):
class Media:
js = ('jquery.js', 'inlines.js',)
Looking at his source, you would also have to dynamically add the link to add more inlines using Javascript, but that's pretty easy to do:
$(document).ready(function(){
// Note the name passed in is the model's name, all lower case
$('div.last-related').after('<div><a class="add" href="#" onclick="return add_inline_form(\'photos\')">');
});
You probably need to do some styling to make it all look right, but that should get you started in the right direction.
Also, since you're in inline land, check out the inline sort snippet.
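If JavaScript isn't a hard requirement, a plainer alternative is to model each image as its own row and use the admin's stock inlines with a fixed number of extra upload slots. Everything below (model and field names included) is an invented sketch, not code from the article:
from django.contrib import admin
from django.db import models

class PortfolioItem(models.Model):
    title = models.CharField(max_length=100)

class PortfolioImage(models.Model):
    item = models.ForeignKey(PortfolioItem)
    image = models.ImageField(upload_to='portfolio')

class PortfolioImageInline(admin.TabularInline):
    model = PortfolioImage
    extra = 3  # three empty upload rows; the jQuery trick above can add more

class PortfolioItemAdmin(admin.ModelAdmin):
    inlines = [PortfolioImageInline]

admin.site.register(PortfolioItem, PortfolioItemAdmin)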
|
Uploading multiple images in Django admin
|
I'm currently building a portfolio site for a client, and I'm having trouble with one small area. I want to be able to upload multiple images (varying number) inline for each portfolio item, and I can't see an obvious way to do it.
The most user-friendly way I can see would be a file upload form with a JavaScript control that allows the user to add more fields as required. Has anybody had any experience with an issue like this? Indeed, are there any custom libraries out there that would solve my problem?
I've had little call for modifying the admin tool before now, so I don't really know where to start.
Thank you to anybody who can shed some light.
|
[
"photologue is a feature-rich photo app for django. it e.g. lets you upload galleries as zip files (which in a sense means uploading multiple files at once), automatically creates thumbnails of different custom sizes and can apply effects to images. I used it once on one project and the integration wasn't too hard. \n",
"You can extend the Admin interface pretty easily using Javascript. There's a good article on doing exactly what you want with a bit of jQuery magic.\nYou would just have to throw all of his code into one Javascript file and then include the following in your admin.py:\nclass Photo(admin.ModelAdmin):\n class Media:\n js = ('jquery.js', 'inlines.js',)\n\nLooking at his source, you would also have to dynamically add the link to add more inlines using Javascript, but that's pretty easy to do:\n$(document).ready(function(){\n // Note the name passed in is the model's name, all lower case\n $('div.last-related').after('<div><a class=\"add\" href=\"#\" onclick=\"return add_inline_form(\\'photos\\')\">');\n});\n\nYou probably need to do some styling to make it all look right, but that should get you started in the right direction.\nAlso, since you're in inline land, check out the inline sort snippet.\n"
] |
[
9,
9
] |
[] |
[] |
[
"django",
"image_uploading",
"python"
] |
stackoverflow_0000925305_django_image_uploading_python.txt
|
Q:
parsing in python
I have the following string:
adId:4028cb901dd9720a011e1160afbc01a3;siteId:8a8ee4f720e6beb70120e6d8e08b0002;userId:5082a05c-015e-4266-9874-5dc6262da3e0
I need only the values of adId, siteId and userId, that is:
4028cb901dd9720a011e1160afbc01a3
8a8ee4f720e6beb70120e6d8e08b0002
5082a05c-015e-4266-9874-5dc6262da3e0
all three in different variables, or in an array, so that I can use all of them.
A:
You can split them to a dictionary if you don't need any fancy parsing:
In [2]: dict(kvpair.split(':') for kvpair in s.split(';'))
Out[2]:
{'adId': '4028cb901dd9720a011e1160afbc01a3',
'siteId': '8a8ee4f720e6beb70120e6d8e08b0002',
'userId': '5082a05c-015e-4266-9874-5dc6262da3e0'}
A:
matches = re.findall("([a-z0-9A-Z_]+):([a-zA-Z0-9\-]+);?", buf)
for m in matches:
    #m[0] is adId and the like
    #m[1] is the long value string
(The ;? makes the trailing semicolon optional, since the last key/value pair has none.)
You can also pin the length using {32}, like
[a-zA-Z0-9]{32};
Regular expressions allow you to validate the string and split it into component parts.
A:
You could do something like this:
input='adId:4028cb901dd9720a011e1160afbc01a3;siteId:8a8ee4f720e6beb70120e6d8e08b0002;userId:5082a05c-015e-4266-9874-5dc6262da3e0'
result={}
for pair in input.split(';'):
(key,value) = pair.split(':')
result[key] = value
print result['adId']
print result['siteId']
print result['userId']
A:
There is an awesome method called split() in Python that will work nicely for you. I would suggest using it twice: once with ';', then again on each piece with ':'.
|
parsing in python
|
I have the following string:
adId:4028cb901dd9720a011e1160afbc01a3;siteId:8a8ee4f720e6beb70120e6d8e08b0002;userId:5082a05c-015e-4266-9874-5dc6262da3e0
I need only the values of adId, siteId and userId, that is:
4028cb901dd9720a011e1160afbc01a3
8a8ee4f720e6beb70120e6d8e08b0002
5082a05c-015e-4266-9874-5dc6262da3e0
all three in different variables, or in an array, so that I can use all of them.
|
[
"You can split them to a dictionary if you don't need any fancy parsing:\nIn [2]: dict(kvpair.split(':') for kvpair in s.split(';'))\nOut[2]:\n{'adId': '4028cb901dd9720a011e1160afbc01a3',\n 'siteId': '8a8ee4f720e6beb70120e6d8e08b0002',\n 'userId': '5082a05c-015e-4266-9874-5dc6262da3e0'}\n\n",
"matches = re.findall(\"([a-z0-9A-Z_]+):([a-zA-Z0-9\\-]+);\", buf)\n\nfor m in matches:\n #m[1] is adid and things\n #m[2] is the long string.\n\nYou can also limit the lengths using {32} like\n([a-zA-Z0-9]+){32};\n\nRegular expressions allow you to validate the string and split it into component parts.\n",
"You could do something like this:\ninput='adId:4028cb901dd9720a011e1160afbc01a3;siteId:8a8ee4f720e6beb70120e6d8e08b0002;userId:5082a05c-015e-4266-9874-5dc6262da3e0'\n\nresult={}\nfor pair in input.split(';'):\n (key,value) = pair.split(':')\n result[key] = value\n\nprint result['adId']\nprint result['siteId']\nprint result['userId']\n\n",
"There is an awesome method called split() for python that will work nicely for you. I would suggest using it twice, once for ';' then again for each one of those using ':'.\n"
] |
[
18,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000925839_python.txt
|
Q:
When calling a Python script from a PHP script, a temporary file that is created on a console run is not created via the PHP invocation
Scenario:
I have a PHP page in which I call a Python script.
The Python script, when run on the command line (Linux), shows output on the command line as well as writing the output to a file.
The Python script, when run through PHP, does neither.
Elaboration:
I use a simple system command in PHP to run the python script as:
/var/www/html/1.php:
system('/usr/python/bin/python3 ../cgi-bin/tabular.py 1');
/var/www/cgi-bin/tabular.py
This Python file basically parses a data file, uses Python's regular expressions to search for specific headings, and outputs the headings to stdout as well as writing them to a file.
This Python script has a few routines in it which get executed, so I put in print statements to debug. I noticed only a few initial print statements' output in the PHP page; all the ones from the function that actually does something are not seen.
Also, as part of my test, I thought maybe the problem was that the py script is in a different folder, so I moved it to the /var/www/html folder; no go.
I hope I captured the problem statement with sufficient detail and someone is able to reproduce this issue at their end. If I make any progress on this one myself, I'll annotate this question. Thanks everyone.
Gaurav
A:
I bet your py script has some bug which causes it to break when called from inside PHP.
Try
passthru('/usr/python/bin/python3 ../cgi-bin/tabular.py 1 2>&1');
to investigate (notice 2>&1, which causes stderr to be written to stdout).
A:
A permission problem is most likely the case.
If apache is running as apache, then it will not have access to write to a file unless
The file is owned by apache
The file is in the group apache and group writable
The file is world writable
This is a "sticky" problem on a multi-user machine, as different people have access to Apache.
Try chmod 666 output.txt on the file and then re-run your test.
Considerations:
Have the python script write the output to a database
Use PHP's popen functionality to open the process and communicate over pipes
Re-write using PHP's regular expressions
Write the output file to /tmp and then read the results using PHP as soon as the python script is done.
etc...
A:
Check that the user the Python script is running as has write permissions in the CWD. Also, try shell_exec() or passthru() to call the script, rather than system().
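A quick diagnostic you could drop near the top of tabular.py to check both points when it runs under Apache (the printed labels are illustrative only):
import os

print "uid:", os.getuid()
print "cwd:", os.getcwd()
print "cwd writable:", os.access(os.getcwd(), os.W_OK)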
|
When calling a Python script from a PHP script, a temporary file that is created on a console run is not created via the PHP invocation
|
Scenario:
I have a PHP page in which I call a Python script.
The Python script, when run on the command line (Linux), shows output on the command line as well as writing the output to a file.
The Python script, when run through PHP, does neither.
Elaboration:
I use a simple system command in PHP to run the python script as:
/var/www/html/1.php:
system('/usr/python/bin/python3 ../cgi-bin/tabular.py 1');
/var/www/cgi-bin/tabular.py
This Python file basically parses a data file, uses Python's regular expressions to search for specific headings, and outputs the headings to stdout as well as writing them to a file.
This Python script has a few routines in it which get executed, so I put in print statements to debug. I noticed only a few initial print statements' output in the PHP page; all the ones from the function that actually does something are not seen.
Also, as part of my test, I thought maybe the problem was that the py script is in a different folder, so I moved it to the /var/www/html folder; no go.
I hope I captured the problem statement with sufficient detail and someone is able to reproduce this issue at their end. If I make any progress on this one myself, I'll annotate this question. Thanks everyone.
Gaurav
|
[
"I bet your py script has some bug which couses it to break when called from inside PHP.\nTry\npassthru('/usr/python/bin/python3 ../cgi-bin/tabular.py 1 2>&1');\n\nto investigate (notice 2>&1 which causess stderr to be written to stdout).\n",
"A permission problem is most likely the case.\nIf apache is running as apache, then it will not have access to write to a file unless\n\nThe file is owned by apache\nThe file is in the group apache and group writable\nThe file is world writable\n\nThis is a \"sticky\" problem on a multi-user machine, as different people have access to Apache. \nTry chmod 666 output.txt on the file and then re-run your test.\n\nConsiderations:\n\nHave the python script write the output to a database\nUse PHP's popen functionality to open the process and communicate over pipes\nRe-write using PHP's regular expressions\nWrite the output file to /tmp and then read the results using PHP as soon as the python script is done.\netc...\n\n",
"Check that the user the python script is running is has write permissions in CWD. Also, try shell_exec() or passthru() to call the script, rather than system().\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"linux",
"php",
"python"
] |
stackoverflow_0000923680_linux_php_python.txt
|
Q:
Are inner-classes unpythonic?
My colleague just pointed out that my use of inner-classes seemed to be "unpythonic". I guess it violates the "flat is better than nested" heuristic.
What do people here think? Are inner-classes something which are more appropriate to Java etc than Python?
NB : I don't think this is a "subjective" question. Surely style and aesthetics are objective within a programming community.
Related Question: Is there a benefit to defining a class inside another class in Python?
A:
This may not deserve a [subjective] tag on StackOverflow, but it's subjective on the larger stage: some language communities encourage nesting and others discourage it. So why would the Python community discourage nesting? Because Tim Peters put it in The Zen of Python? Does it apply to every scenario, always, without exception? Rules should be taken as guidelines, meaning you should not switch off your brain when applying them. You must understand what the rule means and why it's important enough that someone bothered to make a rule.
The biggest reason I know to keep things flat is because of another philosophy: do one thing and do it well. Lots of little special purpose classes living inside other classes is a sign that you're not abstracting enough. I.e., you should be removing the need and desire to have inner classes, not just moving them outside for the sake of following rules.
But sometimes you really do have some behavior that should be abstracted into a class, and it's a special case that only obtains within another single class. In that case you should use an inner class because it makes sense, and it tells anyone else reading the code that there's something special going on there.
Don't slavishly follow rules.
Do understand the reason for a rule and respect that.
A:
"Flat is better than nested" is focused on avoiding excessive nesting -- i.e., seriously deep hierarchies. One level of nesting, per se, is no big deal: as long as your nesting still respects (a weakish form of) the Law of Demeter, as typically confirmed by the fact that you don't find yourself writing stuff like onething.another.andyet.anotherone (too many dots in an expression are a "code smell" suggesting you've gone too deep in nesting and need to refactor to flatten things out), I wouldn't worry too much.
A:
Actually, I'm not sure I agree with the whole premise that "flat is better than nested". Sometimes, quite often actually, the best way to represent something is hierarchical... Even nature itself often uses hierarchy.
|
Are inner-classes unpythonic?
|
My colleague just pointed out that my use of inner-classes seemed to be "unpythonic". I guess it violates the "flat is better than nested" heuristic.
What do people here think? Are inner-classes something which are more appropriate to Java etc than Python?
NB : I don't think this is a "subjective" question. Surely style and aesthetics are objective within a programming community.
Related Question: Is there a benefit to defining a class inside another class in Python?
|
[
"This may not deserve a [subjective] tag on StackOverflow, but it's subjective on the larger stage: some language communities encourage nesting and others discourage it. So why would the Python community discourage nesting? Because Tim Peters put it in The Zen of Python? Does it apply to every scenario, always, without exception? Rules should be taken as guidelines, meaning you should not switch off your brain when applying them. You must understand what the rule means and why it's important enough that someone bothered to make a rule.\nThe biggest reason I know to keep things flat is because of another philosophy: do one thing and do it well. Lots of little special purpose classes living inside other classes is a sign that you're not abstracting enough. I.e., you should be removing the need and desire to have inner classes, not just moving them outside for the sake of following rules.\nBut sometimes you really do have some behavior that should be abstracted into a class, and it's a special case that only obtains within another single class. In that case you should use an inner class because it makes sense, and it tells anyone else reading the code that there's something special going on there.\n\nDon't slavishly follow rules.\nDo understand the reason for a rule and respect that.\n\n",
"\"Flat is better than nested\" is focused on avoiding excessive nesting -- i.e., seriously deep hierarchies. One level of nesting, per se, is no big deal: as long as your nesting still respects (a weakish form of) the Law of Demeter, as typically confirmed by the fact that you don't find yourself writing stuff like onething.another.andyet.anotherone (too many dots in an expression are a \"code smell\" suggesting you've gone too deep in nesting and need to refactor to flatten things out), I wouldn't worry too much.\n",
"Actually, I'm not sure if I agree with the whole premise that \"Flat is better than nested\". Sometimes, quite often actually, the best way to represent something is hierarchically... Even nature itself, often uses hierarchy.\n"
] |
[
10,
9,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000926327_python.txt
|
Q:
Why does defining __getitem__ on a class make it iterable in python?
Why does defining __getitem__ on a class make it iterable?
For instance if I write:
class b:
def __getitem__(self, k):
return k
cb = b()
for k in cb:
print k
I get the output:
0
1
2
3
4
5
6
7
8
...
I would really expect to see an error returned from "for k in cb:"
A:
Iteration's support for __getitem__ can be seen as a "legacy feature" which allowed smoother transition when PEP234 introduced iterability as a primary concept. It only applies to classes without __iter__ whose __getitem__ accepts integers 0, 1, &c, and raises IndexError once the index gets too high (if ever), typically "sequence" classes coded before __iter__ appeared (though nothing stops you from coding new classes this way too).
Personally, I would rather not rely on this in new code, though it's not deprecated nor is it going away (it works fine in Python 3 too), so this is just a matter of style and taste ("explicit is better than implicit", so I'd rather explicitly support iterability than rely on __getitem__ supporting it implicitly for me -- but it's not a biggie).
A:
If you take a look at PEP234 defining iterators, it says:
1. An object can be iterated over with "for" if it implements
__iter__() or __getitem__().
2. An object can function as an iterator if it implements next().
A:
__getitem__ predates the iterator protocol, and was in the past the only way to make things iterable. As such, it's still supported as a method of iterating. Essentially, the protocol for iteration is:
Check for an __iter__ method. If it exists, use the new iteration protocol.
Otherwise, try calling __getitem__ with successively larger integer values until it raises IndexError.
(2) used to be the only way of doing this, but had the disadvantage that it assumed more than was needed to support just iteration. To support iteration, you had to support random access, which was much more expensive for things like files or network streams where going forwards was easy, but going backwards would require storing everything. __iter__ allowed iteration without random access, but since random access usually allows iteration anyway, and because breaking backward compatibility would be bad, __getitem__ is still supported.
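As a concrete illustration of protocol (2), a class with no __iter__ at all is still iterable (the class is invented for the demonstration):
class Squares:
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        # the for loop calls this with 0, 1, 2, ... until IndexError
        if i >= self.n:
            raise IndexError
        return i * i

for x in Squares(4):
    print x  # prints 0, 1, 4, 9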
A:
Special methods such as __getitem__ add special behaviors to objects, including iteration.
http://docs.python.org/reference/datamodel.html#object.getitem
"for loops expect that an IndexError will be raised for illegal indexes to allow proper detection of the end of the sequence."
Raise IndexError to signal the end of the sequence.
Your code is basically equivalent to:
i = 0
while True:
try:
yield object[i]
i += 1
except IndexError:
break
Where object is what you're iterating over in the for loop.
A:
This is so for historical reasons. Prior to Python 2.2 __getitem__ was the only way to create a class that could be iterated over with the for loop. In 2.2 the __iter__ protocol was added but to retain backwards compatibility __getitem__ still works in for loops.
A:
Because cb[0] is the same as cb.__getitem__(0). See the Python documentation on this.
|
Why does defining __getitem__ on a class make it iterable in python?
|
Why does defining __getitem__ on a class make it iterable?
For instance if I write:
class b:
def __getitem__(self, k):
return k
cb = b()
for k in cb:
print k
I get the output:
0
1
2
3
4
5
6
7
8
...
I would really expect to see an error returned from "for k in cb:"
|
[
"Iteration's support for __getitem__ can be seen as a \"legacy feature\" which allowed smoother transition when PEP234 introduced iterability as a primary concept. It only applies to classes without __iter__ whose __getitem__ accepts integers 0, 1, &c, and raises IndexError once the index gets too high (if ever), typically \"sequence\" classes coded before __iter__ appeared (though nothing stops you from coding new classes this way too).\nPersonally, I would rather not rely on this in new code, though it's not deprecated nor is it going away (works fine in Python 3 too), so this is just a matter of style and taste (\"explicit is better than implicit\" so I'd rather explicitly support iterability rather than rely on __getitem__ supporting it implicitly for me -- but, not a bigge).\n",
"If you take a look at PEP234 defining iterators, it says:\n1. An object can be iterated over with \"for\" if it implements\n __iter__() or __getitem__().\n\n2. An object can function as an iterator if it implements next().\n\n",
"__getitem__ predates the iterator protocol, and was in the past the only way to make things iterable. As such, it's still supported as a method of iterating. Essentially, the protocol for iteration is:\n\nCheck for an __iter__ method. If it exists, use the new iteration protocol.\nOtherwise, try calling __getitem__ with successively larger integer values until it raises IndexError.\n\n(2) used to be the only way of doing this, but had the disadvantage that it assumed more than was needed to support just iteration. To support iteration, you had to support random access, which was much more expensive for things like files or network streams where going forwards was easy, but going backwards would require storing everything. __iter__ allowed iteration without random access, but since random access usually allows iteration anyway, and because breaking backward compatability would be bad, __getitem__ is still supported.\n",
"Special methods such as __getitem__ add special behaviors to objects, including iteration. \nhttp://docs.python.org/reference/datamodel.html#object.getitem\n\"for loops expect that an IndexError will be raised for illegal indexes to allow proper detection of the end of the sequence.\"\nRaise IndexError to signal the end of the sequence.\nYour code is basically equivalent to:\ni = 0\nwhile True:\n try:\n yield object[i]\n i += 1\n except IndexError:\n break\n\nWhere object is what you're iterating over in the for loop.\n",
"This is so for historical reasons. Prior to Python 2.2 __getitem__ was the only way to create a class that could be iterated over with the for loop. In 2.2 the __iter__ protocol was added but to retain backwards compatibility __getitem__ still works in for loops.\n",
"Because cb[0] is the same as cb.__getitem__(0). See the python documentation on this.\n"
] |
[
77,
55,
40,
8,
5,
2
] |
[] |
[] |
[
"iterator",
"overloading",
"python"
] |
stackoverflow_0000926574_iterator_overloading_python.txt
|
Q:
How can I make a class in python support __getitem__, but not allow iteration?
I want to define a class that supports __getitem__, but does not allow iteration.
For example:
class B:
def __getitem__(self, k):
return k
cb = B()
for x in cb:
print x
What could I add to the class B to force the for x in cb: to fail?
A:
I think a slightly better solution would be to raise a TypeError rather than a plain Exception (this is what normally happens with a non-iterable class):
class A(object):
# show what happens with a non-iterable class with no __getitem__
pass
class B(object):
def __getitem__(self, k):
return k
def __iter__(self):
raise TypeError('%r object is not iterable'
% self.__class__.__name__)
Testing:
>>> iter(A())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'A' object is not iterable
>>> iter(B())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "iter.py", line 9, in __iter__
% self.__class__.__name__)
TypeError: 'B' object is not iterable
A:
From the answers to this question, we can see that __iter__ will be called before __getitem__ if it exists, so simply define B as:
class B:
def __getitem__(self, k):
return k
def __iter__(self):
raise Exception("This class is not iterable")
Then:
cb = B()
for x in cb: # this will throw an exception when __iter__ is called.
print x
|
How can I make a class in python support __getitem__, but not allow iteration?
|
I want to define a class that supports __getitem__, but does not allow iteration.
for example:
class B:
def __getitem__(self, k):
return k
cb = B()
for x in cb:
print x
What could I add to the class B to force the for x in cb: to fail?
|
[
"I think a slightly better solution would be to raise a TypeError rather than a plain exception (this is what normally happens with a non-iterable class:\nclass A(object):\n # show what happens with a non-iterable class with no __getitem__\n pass\n\nclass B(object):\n def __getitem__(self, k):\n return k\n def __iter__(self):\n raise TypeError('%r object is not iterable'\n % self.__class__.__name__)\n\nTesting:\n>>> iter(A())\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: 'A' object is not iterable\n>>> iter(B())\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"iter.py\", line 9, in __iter__\n % self.__class__.__name__)\nTypeError: 'B' object is not iterable\n\n",
"From the answers to this question, we can see that __iter__ will be called before __getitem__ if it exists, so simply define B as:\nclass B:\n def __getitem__(self, k):\n return k\n\n def __iter__(self):\n raise Exception(\"This class is not iterable\")\n\nThen:\ncb = B()\nfor x in cb: # this will throw an exception when __iter__ is called.\n print x\n\n"
] |
[
14,
2
] |
[] |
[] |
[
"iteration",
"operator_overloading",
"python"
] |
stackoverflow_0000926688_iteration_operator_overloading_python.txt
|
Q:
Checking to See if a List Exists Within Another List?
Okay I'm trying to go for a more pythonic method of doing things.
How can I do the following:
required_values = ['A','B','C']
some_map = {'A' : 1, 'B' : 2, 'C' : 3, 'D' : 4}
for required_value in required_values:
if not required_value in some_map:
print 'It Doesnt Exists'
return False
return True
I looked at the builtin function all, but I can't really see how to apply that to the above scenario.
Any suggestions for making this more pythonic?
A:
all(value in some_map for value in required_values)
A:
return set(required_values).issubset(set(some_map.keys()))
A:
Try a list comprehension:
return not bool([x for x in required_values if x not in some_map.keys()]) (the bool conversion is just for clarity)
or return not [x for x in required_values if x not in some_map.keys()] (I think the more Pythonic way).
The expression inside [] builds a list of all required values that are not among your map's keys.
If that list is empty it evaluates to False, otherwise to True.
So if the map is missing any required value, at least one element will end up in the list built by the comprehension.
That evaluates to True, so we negate the result to fulfill your code's requirement (that all required values be present in the map).
|
Checking to See if a List Exists Within Another List?
|
Okay I'm trying to go for a more pythonic method of doing things.
How can I do the following:
required_values = ['A','B','C']
some_map = {'A' : 1, 'B' : 2, 'C' : 3, 'D' : 4}
for required_value in required_values:
if not required_value in some_map:
print 'It Doesnt Exists'
return False
return True
I looked at the builtin function all, but I can't really see how to apply that to the above scenario.
Any suggestions for making this more pythonic?
|
[
"all(value in some_map for value in required_values)\n\n",
"return set(required_values).issubset(set(some_map.keys()))\n\n",
"try a list comprehension:\nreturn not bool([x for x in required_values if x not in some_map.keys()]) (bool conversion for clarity)\nor return not [x for x in required_values if x not in some_map.keys()] (i think the more pythonic way)\nThe inside [] statement builds a list of all required values not in your map keys\nif the list is empty it evaluates to False, otherwise to True.\nso if the map has not all required values, at least one element will be in the list built by the list comprehension expression. \nThis will evaluate to True, so we negate the result to fulfill your code requirements (which are all required values should be present in the map)\n"
] |
[
11,
3,
0
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0000926946_list_python.txt
|
Q:
Do I need PyISAPIe to run Django on IIS6?
It seems that all roads lead to having to use PyISAPIe to get Django running on IIS6. This becomes a problem for us because it appears you need separate application pools per PyISAPIe/Django instance which is something we'd prefer not to do.
Does anyone have any advice/guidance, or can share their experiences (particularly in a shared Windows hosting environment)?
A:
You need separate application pools no matter what extension you use. This is because application pools split the handler DLLs into different w3wp.exe process instances. You might wonder why this is necessary:
Look at Django's module setting: os.environ["DJANGO_SETTINGS_MODULE"]. That's the environment of the process, so if you change it for one ISAPI handler and then later another within the same application pool, they both point to the new DJANGO_SETTINGS_MODULE.
There isn't any meaningful reason for this, so feel free to convince the Django developers they don't need to do it :)
There are a few ways to hack around it but nothing works as cleanly as separate app pools.
Unfortunately, isapi-wsgi won't fix the Django problem, and I'd recommend that you keep using PyISAPIe (disclaimer: I'm the developer! ;)
A:
Django runs well on any WSGI infrastructure (much like any other modern Python web app framework) and there are several ways to run WSGI on IIS, e.g. see http://code.google.com/p/isapi-wsgi/ .
|
Do I need PyISAPIe to run Django on IIS6?
|
It seems that all roads lead to having to use PyISAPIe to get Django running on IIS6. This becomes a problem for us because it appears you need separate application pools per PyISAPIe/Django instance which is something we'd prefer not to do.
Does anyone have any advice/guidance, or can share their experiences (particularly in a shared Windows hosting environment)?
|
[
"You need separate application pools no matter what extension you use. This is because application pools split the handler DLLs into different w3wp.exe process instances. You might wonder why this is necessary:\nLook at Django's module setting: os.environ[\"DJANGO_SETTINGS_MODULE\"]. That's the environment of the process, so if you change it for one ISAPI handler and then later another within the same application pool, they both point to the new DJANGO_SETTINGS_MODULE.\nThere isn't any meaningful reason for this, so feel free to convince the Django developers they don't need to do it :)\nThere are a few ways to hack around it but nothing works as cleanly as separate app pools.\nUnfortunately, isapi-wsgi won't fix the Django problem, and I'd recommend that you keep using PyISAPIe (disclaimer: I'm the developer! ;)\n",
"Django runs well on any WSGI infrastructure (much like any other modern Python web app framework) and there are several ways to run WSGI on IIS, e.g. see http://code.google.com/p/isapi-wsgi/ .\n"
] |
[
3,
1
] |
[] |
[] |
[
"django",
"iis_6",
"pyisapie",
"python"
] |
stackoverflow_0000853755_django_iis_6_pyisapie_python.txt
|
Q:
Giving anonymous users the same functionality as registered ones
I'm working on an online store in Django (just a basic shopping cart right now), and I'm planning to add functionality for users to mark items as favorite (just like in stackoverflow). Models for the cart look something like this:
class Cart(models.Model):
user = models.OneToOneField(User)
class CartItem(models.Model):
cart = models.ForeignKey(Cart)
product = models.ForeignKey(Product, verbose_name="produs")
The favorites model would be just a table with two columns: user and product.
The problem is that this would only work for registered users, as I need a user object. How can I also let unregistered users use these features, saving the data in cookies/sessions, and when and if they decide to register, moving the data over to their user?
I guess one option would be some kind of generic relations, but I think that's a little too complicated. Maybe having an extra field alongside user that's a session object (I haven't really used sessions in Django until now), and if the User is set to None, use that?
So basically, what I want to ask is: if you've had this problem before, how did you solve it, and what would be the best approach?
A:
I haven't done this before but from reading your description I would simply create a user object when someone needs to do something that requires it. You then send the user a cookie which links to this user object, so if someone comes back (without clearing their cookies) they get the same skeleton user object.
This means that you can use your current code with minimal changes and when they want to migrate to a full registered user you can just populate the skeleton user object with their details.
If you wanted to keep your DB tidy-ish you could add a task that deletes all skeleton Users that haven't been used in say the last 30 days.
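A minimal sketch of that idea in Django; the cookie name, the anon- username scheme, and this helper are all hypothetical, not part of the answer:
import uuid
from django.contrib.auth.models import User

def get_skeleton_user(request, response):
    # Reuse the placeholder account this browser already knows about,
    # or mint one and hand back a cookie pointing at it.
    username = request.COOKIES.get('skeleton_user')
    if not username:
        username = 'anon-%s' % uuid.uuid4().hex[:24]
        response.set_cookie('skeleton_user', username, max_age=30 * 86400)
    user, _ = User.objects.get_or_create(username=username)
    return user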
A:
Seems to me that the easiest way to do this would be to store either the user id or the session id:
class Cart(models.Model):
user = models.ForeignKey(User, null=True)
session = models.CharField(max_length=32, null=True)
Then, when a user registers, you can take their request.session.session_key and update all rows with their new user id.
Better yet, you could define a "UserProxy" model:
class Cart(models.Model):
user = models.ForeignKey(UserProxy)
class UserProxy(models.Model):
user = models.ForeignKey(User, unique=True, null=True)
session = models.CharField(max_length=32, null=True)
So then you just have to update the UserProxy table when they register, and nothing about the cart has to change.
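At registration time you would then claim the anonymous rows; a sketch against the first model above (the view wiring around it is assumed):
def claim_anonymous_carts(request, new_user):
    # Any cart created under this browser session now belongs to the
    # freshly registered user; the cart rows themselves don't change.
    Cart.objects.filter(
        session=request.session.session_key, user__isnull=True
    ).update(user=new_user)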
A:
Just save the user data in the user table and don't populate the userid/password fields.
If a user registers, then you just have to populate those fields.
You will have to have some "cleanup" script run periodically to clear out any users who haven't visited in some arbitrary period. I'd make this cleanup optional, and have a script that can be run server-side (or via a web admin interface) to clear out in case your client wants to do it manually.
Remember to delete all related entries as well as the user entry.
A:
I think you were on the right track thinking about using sessions. I would store a list of Product id's in the users session and then when the user registers, create a cart as you have defined and then add the items. Check out the session docs.
You could allow people that are either not logged in or don't have an account to add items to a 'temp' cart. When the person logs in to either account or creates a new account, add those items to their 'real' cart. Then by just adding a few lines to your 'add item to cart' and login functions, you can use your existing models.
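A hedged sketch of that temp cart with Django sessions; the 'cart_items' key and the merge hook are illustrative names, not from the answer:
def add_to_temp_cart(request, product_id):
    items = request.session.get('cart_items', [])
    items.append(product_id)
    request.session['cart_items'] = items  # reassign so the session saves

def merge_temp_cart(request, user):
    # Called from the login/registration view: move the session items
    # into the user's real cart, then drop the temporary list.
    cart, _ = Cart.objects.get_or_create(user=user)
    for product_id in request.session.pop('cart_items', []):
        CartItem.objects.create(cart=cart, product_id=product_id)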
|
Giving anonymous users the same functionality as registered ones
|
I'm working on an online store in Django (just a basic shopping cart right now), and I'm planning to add functionality for users to mark items as favorite (just like in stackoverflow). Models for the cart look something like this:
class Cart(models.Model):
user = models.OneToOneField(User)
class CartItem(models.Model):
cart = models.ForeignKey(Cart)
product = models.ForeignKey(Product, verbose_name="produs")
The favorites model would be just a table with two columns: user and product.
The problem is that this would only work for registered users, as I need a user object. How can I also let unregistered users use these features, saving the data in cookies/sessions, and when and if they decide to register, moving the data over to their user?
I guess one option would be some kind of generic relations, but I think that's a little too complicated. Maybe having an extra field alongside user that's a session object (I haven't really used sessions in Django until now), and if the User is set to None, use that?
So basically, what I want to ask is: if you've had this problem before, how did you solve it, and what would be the best approach?
|
[
"I haven't done this before but from reading your description I would simply create a user object when someone needs to do something that requires it. You then send the user a cookie which links to this user object, so if someone comes back (without clearing their cookies) they get the same skeleton user object.\nThis means that you can use your current code with minimal changes and when they want to migrate to a full registered user you can just populate the skeleton user object with their details.\nIf you wanted to keep your DB tidy-ish you could add a task that deletes all skeleton Users that haven't been used in say the last 30 days.\n",
"Seems to me that the easiest way to do this would be to store both the user id or the session id:\nclass Cart(models.Model):\n user = models.ForeignKey(User, null=True)\n session = models.CharField(max_length=32, null=True)\n\nThen, when a user registers, you can take their request.session.session_key and update all rows with their new user id.\nBetter yet, you could define a \"UserProxy\" model:\nclass Cart(models.Model):\n user = models.ForeignKey(UserProxy)\n\nclass UserProxy(models.Model):\n user = models.ForeignKey(User, unique=True, null=True)\n session = models.CharField(max_length=32, null=True)\n\nSo then you just have to update the UserProxy table when they register, and nothing about the cart has to change.\n",
"Just save the user data the user table and don't populate then userid/password tables.\nif a user registers then you just have to populate those fields.\nYou will have to have some \"cleanup\" script run periodically to clear out any users who haven't visited in some arbitrary period. I'd make this cleanup optional. and have a script that can be run serverside (or via a web admin interface) to clear out in case your client wants to do it manually.\nremember to deleted all related entries as well as the user entry.\n",
"I think you were on the right track thinking about using sessions. I would store a list of Product id's in the users session and then when the user registers, create a cart as you have defined and then add the items. Check out the session docs. \nYou could allow people that are either not logged in or don't have an account to add items to a 'temp' cart. When the person logs in to either account or creates a new account, add those items to their 'real' cart. Then by just adding a few lines to your 'add item to cart' and login functions, you can use your existing models.\n"
] |
[
12,
5,
2,
0
] |
[] |
[] |
[
"django",
"python",
"session"
] |
stackoverflow_0000925456_django_python_session.txt
|
Q:
FIFO (named pipe) messaging obstacles
I plan to use Unix named pipes (mkfifo) for simple multi-process messaging.
A message would be just a single line of text.
Would you discourage me from that? What obstacles should I expect?
I have noticed these limitations:
A sender cannot continue until the message is received.
A receiver is blocked until there are some data. Nonblocking IO would be needed
when we need to stop the reading. For example, another thread could ask for that.
The receiver could obtain many messages in a single read. These have to be processed
before quitting.
The max length of an atomic message is limited to 4096 bytes. That is the PIPE_BUF limit on Linux (see man 7 pipe).
I will implement the messaging in Python. But the obstacles hold in general.
A:
Lack of portability - they are mainly a Unix thing. Sockets are more portable.
Harder to scale out to multiple systems (another sockets+)
On the other hand, I believe pipes are faster than sockets for processes on the same machine (less communication overhead).
As to your limitations,
You can "select" on pipes, to do a non-blocking read.
I normally (in perl) print out my messages on pipes separated by "\n", and read a line from them to get one message at a time.
Do be careful with the atomic length.
I find perlipc to be a good discussion of the various options, though it has Perl-specific code.
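A minimal sketch of that non-blocking read in Python, using select on a FIFO (the path and timeout are illustrative):
import os
import select

fd = os.open('/tmp/messages.fifo', os.O_RDONLY | os.O_NONBLOCK)
# Wait up to one second for data instead of blocking forever.
readable, _, _ = select.select([fd], [], [], 1.0)
if readable:
    chunk = os.read(fd, 4096)  # stays within Linux's PIPE_BUF
    for message in chunk.splitlines():
        print(message)  # one newline-separated message at a time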
A:
The blocking, both on the sender side and the receiver side, can be worked around via non-blocking I/O.
Further limitations of FIFOs:
Only one client at a time.
After the client closes the FIFO, the server needs to re-open its endpoint.
Unidirectional.
I would use UNIX domain sockets instead, which have none of the above limitations.
As an added benefit, if you want to scale it to communicate between multiple machines, it's barely any change at all. For example, just take the Python documentation page on socket and replace socket.AF_INET with socket.AF_UNIX, (HOST, PORT) with filename, and it just works.
SOCK_STREAM will give you stream-like behavior; that is, two sends may be merged into one receive or vice versa. AF_UNIX also supports SOCK_DGRAM: datagrams are guaranteed to be sent and read all as one unit or not at all. (Analogously, AF_INET+SOCK_STREAM=TCP, AF_INET+SOCK_DGRAM=UDP.)
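To make the substitution concrete, here is a minimal UNIX domain socket round trip in one process (the socket path is arbitrary, error handling omitted):
import os
import socket

path = '/tmp/demo.sock'
if os.path.exists(path):
    os.unlink(path)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)  # queued by listen(), so this works single-threaded
client.send('one line of text\n')

conn, _ = server.accept()
print(conn.recv(1024))  # 'one line of text\n'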
|
FIFO (named pipe) messaging obstacles
|
I plan to use Unix named pipes (mkfifo) for simple multi-process messaging.
A message would be just a single line of text.
Would you discourage me from that? What obstacles should I expect?
I have noticed these limitations:
A sender cannot continue until the message is received.
A receiver is blocked until there are some data. Nonblocking IO would be needed
when we need to stop the reading. For example, another thread could ask for that.
The receiver could obtain many messages in a single read. These have to be processed
before quitting.
The max length of an atomic message is limited to 4096 bytes. That is the PIPE_BUF limit on Linux (see man 7 pipe).
I will implement the messaging in Python. But the obstacles hold in general.
|
[
"\nLack of portability - they are mainly a Unix thing. Sockets are more portable.\nHarder to scale out to multiple systems (another sockets+)\nOn the other hand, I believe pipes are faster than sockets for processes on the same machine (less communication overhead).\n\nAs to your limitations,\n\nYou can \"select\" on pipes, to do a non-blocking read.\nI normally (in perl) print out my messages on pipes seperated by \"\\n\", and read a line from them to get one message at a time.\nDo be careful with the atomic length.\n\nI find perlipc to be a good discussion between the various options, though it has perl specific code.\n",
"The blocking, both on the sender side and the receiver side, can be worked around via non-blocking I/O.\nFurther limitations of FIFOs:\n\nOnly one client at a time.\nAfter the client closes the FIFO, the server need to re-open its endpoint.\nUnidirectional.\n\nI would use UNIX domain sockets instead, which have none of the above limitations.\nAs an added benefit, if you want to scale it to communicate between multiple machines, it's barely any change at all. For example, just take the Python documentation page on socket and replace socket.AF_INET with socket.AF_UNIX, (HOST, PORT) with filename, and it just works.\nSOCK_STREAM will give you stream-like behavior; that is, two sends may be merged into one receive or vice versa. AF_UNIX also supports SOCK_DGRAM: datagrams are guaranteed to be sent and read all as one unit or not at all. (Analogously, AF_INET+SOCK_STREAM=TCP, AF_INET+SOCK_DGRAM=UDP.)\n"
] |
[
5,
3
] |
[] |
[] |
[
"linux",
"named_pipes",
"pipe",
"python",
"unix"
] |
stackoverflow_0000927233_linux_named_pipes_pipe_python_unix.txt
|
Q:
Render PyCairo onto PyOpenGL surface?
I've recently started playing with pycairo - is it easy enough to render this to a pyopengl surface (e.g. on the side of a cube?)... my opengl is really non-existent so I'm not sure the best way to go about this.
A:
This procedure might work:
Do your drawing in pycairo like normal.
Export the image to a file (or get a handle to it in memory).
Load the image into opengl texture memory.
Draw your cube in opengl using the texture.
Steps 1&2 are in cairo, which I'm not familiar with. Steps 3&4 would be done in opengl. There's a tutorial on drawing textured surfaces at NeHe with a link to a python version at the bottom.
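A hedged sketch of steps 1-3, assuming PyOpenGL, a GL context that is already current, and cairo's ARGB32 lining up with GL_BGRA on a little-endian machine:
import cairo
from OpenGL.GL import *

size = 256
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, size, size)
ctx = cairo.Context(surface)
ctx.set_source_rgb(1, 0, 0)
ctx.paint()                      # step 1: draw with pycairo (solid red here)

pixels = surface.get_data()      # step 2: raw pixels in memory, no temp file

texture = glGenTextures(1)       # step 3: upload into texture memory
glBindTexture(GL_TEXTURE_2D, texture)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size, size, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, str(pixels))
# Step 4: bind `texture` while drawing the cube faces as in the NeHe tutorial.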
|
Render PyCairo onto PyOpenGL surface?
|
I've recently started playing with pycairo - is it easy enough to render this to a pyopengl surface (e.g. on the side of a cube?)... my opengl is really non-existent so I'm not sure the best way to go about this.
|
[
"This procedure might work:\n\nDo your drawing in pycairo like normal.\nExport the image to a file (or get a handle to it in memory).\nLoad the image into opengl texture memory.\nDraw your cube in opengl using the texture.\n\nSteps 1&2 are in cairo, which I'm not familiar with. Steps 3&4 would be done in opengl. There's a tutorial on drawing textured surfaces at NeHe with a link to a python version at the bottom.\n"
] |
[
0
] |
[] |
[] |
[
"cairo",
"opengl",
"python"
] |
stackoverflow_0000820221_cairo_opengl_python.txt
|
Q:
Does python-memcache use consistent hashing?
I'm using the python-memcache library, and I'm wondering if anyone knows if consistent hashing is used by that client as of 1.44.
A:
If you need something like that you might be interested in hash_ring
A:
From a quick view into the source code: No it does not. It uses server = hash_key % len(servers) and round-robin if offline/full servers are encountered.
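If you go the hash_ring route, usage looks roughly like this (get_node is the API as that project documents it; treat the details as an assumption):
from hash_ring import HashRing

servers = ['10.0.0.1:11211', '10.0.0.2:11211', '10.0.0.3:11211']
ring = HashRing(servers)

# Same key -> same server, and only ~1/N of the keys move when a
# server is added or removed, unlike the modulo scheme quoted above.
server = ring.get_node('user:1234:session')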
|
Does python-memcache use consistent hashing?
|
I'm using the python-memcache library, and I'm wondering if anyone knows if consistent hashing is used by that client as of 1.44.
|
[
"If you need something like that you might be interested in hash_ring\n",
"From a quick view into the source code: No it does not. It uses server = hash_key % len(servers) and round-robin if offline/full servers are encountered.\n"
] |
[
3,
2
] |
[] |
[] |
[
"memcached",
"python"
] |
stackoverflow_0000926814_memcached_python.txt
|
Q:
Is there a uniform python library to transfer files using different protocols
I know there is ftplib for ftp, shutil for local files, what about NFS? I know urllib2 can get files via HTTP/HTTPS/FTP/FTPS, but it can't put files.
If there is a uniform library that automatically detects the protocol (FTP/NFS/LOCAL) from the URI and deals with file transfer (get/put) transparently, that's even better. Does it exist?
A:
You want to look up and use pycurl/libcurl. Libcurl: http://curl.haxx.se/ PyCurl: http://pycurl.sourceforge.net/ - curl supports the http://, file://, and ftp:// uris. I have used it with much success.
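A minimal pycurl fetch to sketch the idea; the same Curl object accepts http://, ftp:// and file:// URLs, and uploads work similarly via pycurl.UPLOAD plus READFUNCTION (the URL here is made up):
import pycurl
import cStringIO  # Python 2; use io.BytesIO on Python 3

buf = cStringIO.StringIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'ftp://example.com/pub/file.txt')
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.perform()
c.close()
print(buf.getvalue()[:80])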
A:
Have a look at KDE IOSlaves. They can manage all the protocols you describe, plus a few others (samba, ssh, ...).
You can instantiate IOSlaves through PyKDE or, if that dependency is too big, you can probably manage the ioslave from Python with the subprocess module.
|
Is there a uniform python library to transfer files using different protocols
|
I know there is ftplib for ftp, shutil for local files, what about NFS? I know urllib2 can get files via HTTP/HTTPS/FTP/FTPS, but it can't put files.
If there is a uniform library that automatically detects the protocol (FTP/NFS/LOCAL) from the URI and deals with file transfer (get/put) transparently, that's even better. Does it exist?
|
[
"You want to look up and use pycurl/libcurl. Libcurl: http://curl.haxx.se/ PyCurl: http://pycurl.sourceforge.net/ - curl supports the http://, file://, and ftp:// uris. I have used it with much success.\n",
"Have a look at KDE IOSlaves. They can manage all the protocol you describe, plus a few others (samba, ssh, ...).\nYou can instantiates IOSlaves through PyKDE or if that dependency is too big, you can probably manage the ioslave from python with the subprocess module.\n"
] |
[
2,
1
] |
[] |
[] |
[
"file",
"ftp",
"networking",
"nfs",
"python"
] |
stackoverflow_0000925716_file_ftp_networking_nfs_python.txt
|
Q:
Really odd (mod)_python problem
this one is hard to explain!
I am writing a python application to be run through mod_python. At each request, the returned output differs, even though the logic is 'fixed'.
I have two classes, classA and classB. Such that:
class ClassA:
def page(self, req):
req.write("In classA page")
objB = ClassB()
objB.methodB(req)
req.write("End of page")
class ClassB:
def methodB(self, req):
req.write("In methodB")
return None
Which is a heavily snipped version of what I have. But the stuff I have snipped doesn't change the control flow. There is only one place where MethodB() is called. That is from __init__() in classA.
You would expect the following output:
In classA __init__
In methodB
End of __init__
However, seemingly randomly either get the above correct output or:
In classA __init__
In methodB
End of __init__
In methodB
The stacktrace shows that methodB is being called the second time from __init__. methodB should only be called once. If it is called a second time, you would expect that the other logic in __init__ be done twice too. But nothing before or after methodB executes and there is no recursion.
I wouldn't usually resort to using SO for my debugging, but I have been scratching my head for a while on this.
Version: 2.5.2 r252:60911
thanks in advance
Edit
Some clues that the problem might be elsewhere .... The above changes to the snippet result in the weird output 1 in every 250 or so hits. Which is odd.
The more output prior to printing "In methodB", the more it is printed subsequently incorrectly ... on average, not in direct ratio. It even does it in Lynx.
I'm going back to the drawing board.
:(
In response to answer
It seems mod_python and Apache are having marital problems. A restart and things are fine for a few requests. Then it all goes increasingly pear-shaped. When issuing
/etc/rc.d/init.d/httpd stop
It takes a weirdly long amount of time. Also RAM is getting eaten up with requests. I am not that familiar with Apache's internals but it feels like (thanks to Nadia) that threads are staying alive and randomly butting in on requests. Which is plain bonkers.
Moving to mod_wsgi as S.Lott and Nadia suggested
thanks again!!
A:
I've seen similar behaviour with mod_python before. Usually it is because apache is running multiple threads and one of them is running an older version of the code. When you refresh the page chances are the thread with the older code is serving the page. I usually fix this by stopping apache and then restarting it again
sudo /etc/init.d/apache stop
sudo /etc/init.d/apache restart
Restart on its own doesn't always work. Sometimes even that doesn't work! That might sound strange but my last resort in those rare cases where nothing is working is to add a raise Exception() statement on the first line in the handler, refresh the page, restart apache and then refresh the page again. That works every time. There must be a better solution. But that's what worked for me. mod_python can drive one crazy for sure!
I hope this might help.
A:
I don't really know, but constructors aren't supposed to return anything, so remove the return None. Even if they could return stuff, None is automatically returned if a function doesn't return anything by itself.
And I think you need a self argument in MethodB.
EDIT: Could you show more code? This is working fine.
|
Really odd (mod)_python problem
|
this one is hard to explain!
I am writing a python application to be run through mod_python. At each request, the returned output differs, even though the logic is 'fixed'.
I have two classes, classA and classB. Such that:
class ClassA:
def page(self, req):
req.write("In classA page")
objB = ClassB()
objB.methodB(req)
req.write("End of page")
class ClassB:
def methodB(self, req):
req.write("In methodB")
return None
Which is a heavily snipped version of what I have. But the stuff I have snipped doesn't change the control flow. There is only one place where MethodB() is called. That is from __init__() in classA.
You would expect the following output:
In classA __init__
In methodB
End of __init__
However, seemingly randomly either get the above correct output or:
In classA __init__
In methodB
End of __init__
In methodB
The stacktrace shows that methodB is being called the second time from __init__. methodB should only be called once. If it is called a second time, you would expect that the other logic in __init__ be done twice too. But nothing before or after methodB executes and there is no recursion.
I wouldn't usually resort to using SO for my debugging, but I have been scratching my head for a while on this.
Version: 2.5.2 r252:60911
thanks in advance
Edit
Some clues that the problem might be elsewhere .... The above changes to the snippet result in the weird output 1 in every 250 or so hits. Which is odd.
The more output prior to printing "In methodB", the more it is printed subsequently incorrectly ... on average, not in direct ratio. It even does it in Lynx.
I'm going back to the drawing board.
:(
In response to answer
It seems mod_python and Apache are having marital problems. A restart and things are fine for a few requests. Then it all goes increasingly pear-shaped. When issuing
/etc/rc.d/init.d/httpd stop
It takes a weirdly long amount of time. Also RAM is getting eaten up with requests. I am not that familiar with Apache's internals but it feels like (thanks to Nadia) that threads are staying alive and randomly butting in on requests. Which is plain bonkers.
Moving to mod_wsgi as S.Lott and Nadia suggested
thanks again!!
|
[
"I've seen similar behaviour with mod_python before. Usually it is because apache is running multiple threads and one of them is running an older version of the code. When you refresh the page chances are the thread with the older code is serving the page. I usually fix this by stoping apache and then restarting it again\nsudo /etc/init.d/apache stop\nsudo /etc/init.d/apache restart\n\nRestart on its own doesn't always work. Sometimes even that doesn't work! That might sound strange but my last resort in those rare cases where nothing is working is to add a raise Exception() statement on the first line in the handler, refresh the page, restart apache and then refresh the page again. That works every time. There must be a better solution. But that what worked for me. mod_python can drive one crazy for sure!\nI hope this might help.\n",
"I don't really know, but constructors aren't supposed to return anything, so remove the return None. Even if they could return stuff, None is automatically returned if a function doesn't return anything by itself.\nAnd I think you need a self argument in MethodB.\nEDIT: Could you show more code? This is working fine.\n"
] |
[
4,
1
] |
[] |
[] |
[
"debugging",
"mod_python",
"python"
] |
stackoverflow_0000927759_debugging_mod_python_python.txt
|
Q:
str.startswith() not working as I intended
I'm trying to test for a \t or a space character and I can't understand why this bit of code won't work. What I am doing is reading in a file, counting the loc for the file, and then recording the names of each function present within the file along with their individual lines of code. The bit of code below is where I attempt to count the loc for the functions.
import re
...
else:
loc += 1
for line in infile:
line_t = line.lstrip()
if len(line_t) > 0 \
and not line_t.startswith('#') \
and not line_t.startswith('"""'):
if not line.startswith('\s'):
print ('line = ' + repr(line))
loc += 1
return (loc, name)
else:
loc += 1
elif line_t.startswith('"""'):
while True:
if line_t.rstrip().endswith('"""'):
break
line_t = infile.readline().rstrip()
return(loc,name)
Output:
Enter the file name: test.txt
line = '\tloc = 0\n'
There were 19 lines of code in "test.txt"
Function names:
count_loc -- 2 lines of code
As you can see, my test print for the line shows a \t, but the if statement explicitly says (or so I thought) that it should only execute with no whitespace characters present.
Here is my full test file I have been using:
def count_loc(infile):
""" Receives a file and then returns the amount
of actual lines of code by not counting commented
or blank lines """
loc = 0
for line in infile:
line = line.strip()
if len(line) > 0 \
and not line.startswith('//') \
and not line.startswith('/*'):
loc += 1
func_loc, func_name = checkForFunction(line);
elif line.startswith('/*'):
while True:
if line.endswith('*/'):
break
line = infile.readline().rstrip()
return loc
if __name__ == "__main__":
print ("Hi")
Function LOC = 15
File LOC = 19
A:
\s is only whitespace to the re package when doing pattern matching.
For startswith, an ordinary method of ordinary strings, \s is nothing special. Not a pattern, just characters.
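Two plain-string ways to test for leading whitespace without a regex (the tuple form of startswith needs Python 2.5 or later):
line = '\tloc = 0\n'
print(line.startswith(('\t', ' ')))  # True -- literal prefixes, not a pattern
print(line[:1].isspace())            # True -- catches any leading whitespace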
A:
Your question has already been answered and this is slightly off-topic, but...
If you want to parse code, it is often easier and less error-prone to use a parser. If your code is Python code, Python comes with a couple of parsers (tokenize, ast, parser). For other languages, you can find a lot of parsers on the internet. ANTRL is a well-known one with Python bindings.
As an example, the following couple of lines of code print all lines of a Python module that are not comments and not doc-strings:
import tokenize
ignored_tokens = [tokenize.NEWLINE,tokenize.COMMENT,tokenize.N_TOKENS
,tokenize.STRING,tokenize.ENDMARKER,tokenize.INDENT
,tokenize.DEDENT,tokenize.NL]
with open('test.py', 'r') as f:
g = tokenize.generate_tokens(f.readline)
line_num = 0
for a_token in g:
if a_token[2][0] != line_num and a_token[0] not in ignored_tokens:
line_num = a_token[2][0]
print(a_token)
As a_token above is already parsed, you can easily check for function definition, too. You can also keep track where the function ends by looking at the current column start a_token[2][1]. If you want to do more complex things, you should use ast.
A:
Your string literals aren't what you think they are.
You can specify a space or TAB like so:
space = ' '
tab = '\t'
|
str.startswith() not working as I intended
|
I'm trying to test for a \t or a space character and I can't understand why this bit of code won't work. What I am doing is reading in a file, counting the loc for the file, and then recording the names of each function present within the file along with their individual lines of code. The bit of code below is where I attempt to count the loc for the functions.
import re
...
else:
loc += 1
for line in infile:
line_t = line.lstrip()
if len(line_t) > 0 \
and not line_t.startswith('#') \
and not line_t.startswith('"""'):
if not line.startswith('\s'):
print ('line = ' + repr(line))
loc += 1
return (loc, name)
else:
loc += 1
elif line_t.startswith('"""'):
while True:
if line_t.rstrip().endswith('"""'):
break
line_t = infile.readline().rstrip()
return(loc,name)
Output:
Enter the file name: test.txt
line = '\tloc = 0\n'
There were 19 lines of code in "test.txt"
Function names:
count_loc -- 2 lines of code
As you can see, my test print for the line shows a \t, but the if statement explicitly says (or so I thought) that it should only execute with no whitespace characters present.
Here is my full test file I have been using:
def count_loc(infile):
""" Receives a file and then returns the amount
of actual lines of code by not counting commented
or blank lines """
loc = 0
for line in infile:
line = line.strip()
if len(line) > 0 \
and not line.startswith('//') \
and not line.startswith('/*'):
loc += 1
func_loc, func_name = checkForFunction(line);
elif line.startswith('/*'):
while True:
if line.endswith('*/'):
break
line = infile.readline().rstrip()
return loc
if __name__ == "__main__":
print ("Hi")
Function LOC = 15
File LOC = 19
|
[
"\\s is only whitespace to the re package when doing pattern matching.\nFor startswith, an ordinary method of ordinary strings, \\s is nothing special. Not a pattern, just characters.\n",
"Your question has already been answered and this is slightly off-topic, but...\nIf you want to parse code, it is often easier and less error-prone to use a parser. If your code is Python code, Python comes with a couple of parsers (tokenize, ast, parser). For other languages, you can find a lot of parsers on the internet. ANTRL is a well-known one with Python bindings.\nAs an example, the following couple of lines of code print all lines of a Python module that are not comments and not doc-strings:\nimport tokenize\n\nignored_tokens = [tokenize.NEWLINE,tokenize.COMMENT,tokenize.N_TOKENS\n ,tokenize.STRING,tokenize.ENDMARKER,tokenize.INDENT\n ,tokenize.DEDENT,tokenize.NL]\nwith open('test.py', 'r') as f:\n g = tokenize.generate_tokens(f.readline)\n line_num = 0\n for a_token in g:\n if a_token[2][0] != line_num and a_token[0] not in ignored_tokens:\n line_num = a_token[2][0]\n print(a_token)\n\nAs a_token above is already parsed, you can easily check for function definition, too. You can also keep track where the function ends by looking at the current column start a_token[2][1]. If you want to do more complex things, you should use ast.\n",
"You string literals aren't what you think they are.\nYou can specify a space or TAB like so:\nspace = ' '\ntab = '\\t'\n\n"
] |
[
8,
3,
2
] |
[] |
[] |
[
"python",
"python_3.x",
"string"
] |
stackoverflow_0000927584_python_python_3.x_string.txt
|
Q:
Formatting the output of a key from a dictionary
I have a dictionary which stores a string as the key, and an integer as the value. In my output I would like to have the key displayed as a string without parentheses or commas. How would I do this?
for f_name,f_loc in dict_func.items():
print ('Function names:\n\n\t{0} -- {1} lines of code\n'.format(f_name, f_loc))
output:
Enter the file name: test.txt
line = 'def count_loc(infile):'
There were 19 lines of code in "test.txt"
Function names:
('count_loc(infile)',) -- 15 lines of code
Just in case it wasn't clear, I would like the last line of the output to be displayed as:
count_loc(infile) -- 15 lines of code
EDIT
name = re.search(func_pattern, line).groups()
name = str(name)
Using type() before my output, I verified it remains a string, but the output is as it was when name was a tuple
A:
I don't have Python 3 so I can't test this, but the output of f_name makes it look like it is a tuple with one element in it. So you would change .format(f_name, f_loc) to .format(f_name[0], f_loc)
EDIT:
In response to your edit, try using .group() instead of .groups()
A:
To elaborate on Peter's answer, it looks to me like you're assigning a one-item tuple as the key of your dictionary. If you're evaluating an expression in parentheses somewhere and using that as the key, be sure you don't have a stray comma in there.
Looking at your further edited answer, it's indeed because you're using the groups() method of your regex match. That returns a tuple of the captured groups only (your pattern evidently captures one), not a plain string. group() with no parameters will give you the entire matched text as a string.
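The difference in one example (the pattern here is illustrative):
import re

m = re.search(r'(\w+)\(', 'def count_loc(infile):')
print(m.groups())  # ('count_loc',) -- a tuple of the captured groups only
print(m.group(1))  # 'count_loc'    -- one group, as a plain string
print(m.group())   # 'count_loc('   -- the entire matched section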
A:
I expect you have a problem with your parsing code. The lines as written should work as expected.
A:
Since the key is some type of tuple, you may want to join the different elements before printing. We can't really tell what the significance of the key is from the snippet shown.
So you could do something like such:
.format(", ".join(f_name), f_loc)
|
Formatting the output of a key from a dictionary
|
I have a dictionary which stores a string as the key, and an integer as the value. In my output I would like to have the key displayed as a string without parentheses or commas. How would I do this?
for f_name,f_loc in dict_func.items():
print ('Function names:\n\n\t{0} -- {1} lines of code\n'.format(f_name, f_loc))
output:
Enter the file name: test.txt
line = 'def count_loc(infile):'
There were 19 lines of code in "test.txt"
Function names:
('count_loc(infile)',) -- 15 lines of code
Just in case it wasn't clear, I would like the last line of the output to be displayed as:
count_loc(infile) -- 15 lines of code
EDIT
name = re.search(func_pattern, line).groups()
name = str(name)
Using type() before my output, I verified it remains a string, but the output is as it was when name was a tuple
|
[
"I don't have Python 3 so I can't test this, but the output of f_name makes it look like it is a tuple with one element in it. So you would change .format(f_name, f_loc) to .format(f_name[0], f_loc)\nEDIT:\nIn response to your edit, try using .group() instead of .groups()\n",
"To elaborate on Peter's answer, It looks to me like you're assigning a one-item tuple as the key of your dictionary. If you're evaluating an expression in parentheses somewhere and using that as the key, be sure you don't have a stray comma in there.\nLooking at your further edited answer, it's indeed because you're using the groups() method of your regex match. That returns a tuple of (the entire matched section + all the matched groups), and since you have no groups, you want the entire thing. group() with no parameters will give you that.\n",
"I expect you have a problem with your parsing code. The lines as written should work as expected.\n",
"Since the key is some type of tuple, you may want to join the different elements before printing. We can't really tell what the significance of the key is from the snippet shown.\nSo you could do something like such:\n.format(\", \".join(f_name), f_loc)\n\n"
] |
[
4,
3,
2,
1
] |
[] |
[] |
[
"dictionary",
"python",
"python_3.x",
"string"
] |
stackoverflow_0000928330_dictionary_python_python_3.x_string.txt
|
Q:
Fedora Python Upgrade broke easy_install
Fedora Core 9 includes Python 2.5.1. I can use YUM to get latest and greatest releases.
To get ready for 2.6 official testing, I wanted to start with 2.5.4. It appears that there's no Fedora 9 YUM package, because 2.5.4 isn't an official part of FC9.
I downloaded 2.5.4, did ./configure; make; make install and wound up with two Pythons. The official 2.5.1 (in /usr/bin) and the new 2.5.4 (in /usr/local/bin).
None of my technology stack is installed in /usr/local/lib/python2.5.
It appears that I have several choices for going forward. Anyone have any preferences?
Copy /usr/lib/python2.5/* to /usr/local/lib/python2.5 to replicate my environment. This should work, unless some part of the Python libraries has /usr/bin/python wired in during installation. This is certainly simple, but is there a downside?
Reinstall everything by running easy_install. Except, easy_install is (currently) hard-wired to /usr/bin/python. So, I'd have to fix easy_install first, then reinstall everything.
This takes some time, but it gives me a clean, new latest-and-greatest environment. But is there a down-side? [And why does easy_install hard-wire itself?]
Relink /usr/bin/python to be /usr/local/bin/python. I'd still have to copy or reinstall the library, so I don't think this does me any good. [It would make easy_install work; but so would editing /usr/bin/easy_install.]
Has anyone copied their library? Is it that simple?
Or should I fix easy_install and simply step through the installation guide and build a new, clean, latest-and-greatest?
Edit
Or, should I
Skip trying to resolve the 2.5.1 and 2.5.4 issues and just jump straight to 2.6?
A:
Normally, you would only have one version of a python release installed. Since 2.5.1 and 2.5.4 are from the same release, copying your libraries should work fine. What you would need to watch out for, is that you now have /usr/bin/python, and /usr/local/bin/python in your path, and some utilities may get confused.
If you need to have both micro-releases installed at once, I would keep 2.5.4 out of your path altogether, or allow it to completely clobber the other (do so at your own risk though ;)
If you go with the former, you can also point 2.5.4 to your site-packages by using the PYTHONPATH environment variable.
Ubuntu takes a different route, and this is how you can handle different major releases. The python binary is given with the version appended:
/usr/bin/python -> python2.6
/usr/bin/python2.5
/usr/bin/python2.6
Each has their own /usr/lib/python2.X directory with versions of all the modules.
And lastly, you can further customize your setup by modifying your site.py
A:
I suggest you create a virtualenv (or several) for installing packages into.
A:
I've had similar experiences and issues when installing Python 2.5 on an older release of ubuntu that supplied 2.4 out of the box.
I first tried to patch easy_install, but this led to problems with anything that wanted to use the os-supplied version of python. I was often fiddling with the tool chain to fix different errors that might crop up with every install. Installing any python software via apt, or installing any software from apt that had a python easy_install script as part of the install, was often amusing. I'm sure I could probably have been more vigilant in patching easy_install, but I gave up.
Instead, I copied the library, and everything worked. As you say, there may be issues depending on what you have installed, but I didn't run into issues. Double-checking Python's site.py module, I did see that it operates entirely on relative paths, building absolute paths dynamically; this gave me some confidence to try the "copy everything" approach. I double-checked any .pth files, then went for it.
|
Fedora Python Upgrade broke easy_install
|
Fedora Core 9 includes Python 2.5.1. I can use YUM to get latest and greatest releases.
To get ready for 2.6 official testing, I wanted to start with 2.5.4. It appears that there's no Fedora 9 YUM package, because 2.5.4 isn't an official part of FC9.
I downloaded 2.5.4, did ./configure; make; make install and wound up with two Pythons. The official 2.5.1 (in /usr/bin) and the new 2.5.4 (in /usr/local/bin).
None of my technology stack is installed in /usr/local/lib/python2.5.
It appears that I have several choices for going forward. Anyone have any preferences?
Copy /usr/lib/python2.5/* to /usr/local/lib/python2.5 to replicate my environment. This should work, unless some part of the Python libraries has /usr/bin/python wired in during installation. This is certainly simple, but is there a downside?
Reinstall everything by running easy_install. Except, easy_install is (currently) hard-wired to /usr/bin/python. So, I'd have to fix easy_install first, then reinstall everything.
This takes some time, but it gives me a clean, new latest-and-greatest environment. But is there a down-side? [And why does easy_install hard-wire itself?]
Relink /usr/bin/python to be /usr/local/bin/python. I'd still have to copy or reinstall the library, so I don't think this does me any good. [It would make easy_install work; but so would editing /usr/bin/easy_install.]
Has anyone copied their library? Is it that simple?
Or should I fix easy_install and simply step through the installation guide and build a new, clean, latest-and-greatest?
Edit
Or, should I
Skip trying to resolve the 2.5.1 and 2.5.4 issues and just jump straight to 2.6?
|
[
"Normally, you would only have one version of a python release installed. Since 2.5.1 and 2.5.4 are from the same release, copying your libraries should work fine. What you would need to watch out for, is that you now have /usr/bin/python, and /usr/local/bin/python in your path, and some utilities may get confused. \nIf you need to have both micro-releases installed at once, I would keep 2.5.4 out of your path altogether, or allow it to completely clobber the other (do so at your own risk though ;)\nIf you go with the former, you can also point 2.5.4 to your site-packages by using the PYTHONPATH environment variable.\nUbuntu takes a different route, and this is how you can handle different major releases. The python binary is given with the version appended:\n/usr/bin/python -> python2.6\n/usr/bin/python2.5\n/usr/bin/python2.6\n\nEach has their own /usr/lib/python2.X directory with versions of all the modules.\nAnd lastly, you can further customize your setup by modifying your site.py\n",
"I suggest you create a virtualenv (or several) for installing packages into.\n",
"I've had similar experiences and issues when installing Python 2.5 on an older release of ubuntu that supplied 2.4 out of the box.\nI first tried to patch easy_install, but this led to problems with anything that wanted to use the os-supplied version of python. I was often fiddling with the tool chain to fix different errors that might crop up with every install. Installing any python software via apt, or installing any software from apt that had a python easy_install script as part of the install, was often amusing. I'm sure I could probably have been more vigilant in patching easy_install, but I gave up.\nInstead, I copied the library, and everything worked. As you say, there may be issues depending on what you have installed, but I didn't run into issues. Double-checking Python's site.py module, I did see that it operates entirely on relative paths, building absolute paths dynamically; this gave me some confidence to try the \"copy everything\" approach. I double-checked any .pth files, then went for it.\n"
] |
[
4,
2,
2
] |
[] |
[] |
[
"easy_install",
"fedora",
"python"
] |
stackoverflow_0000925965_easy_install_fedora_python.txt
|
Q:
Configure Apache to recover from mod_python errors
I am hosting a Django app on Apache using mod_python. Occasionally, I get some cryptic mod_python errors, usually of the ImportError variety, although not usually referring to the same module. The thing is, these seem to come up for a single forked subprocess, while the others operate fine, even when I force behavior that requires using the module that the problem process has errored on. Once the process encounters the error, it will always just serve the same traceback every time Apache chooses it to handle a request. (This is also a hassle, since my users don't necessarily report the error on the first occurrence, and once the process encounters the error, it stays broken until restarted.)
I know more about configuring Django than configuring Apache, but that won't get me anywhere since the request never reaches Django for processing. Ideally, I should solve the root problem, and that might involve my code, project, or machine configuration, but until then, I need help diagnosing and mitigating the problem.
Is there any way to configure the Apache logs to include a subprocess id?
Is there any way to force a subprocess to respawn if it has hit an error?
Are there any known issues relating to this that I should know about?
A:
As a workaround, and assuming you are free to install new Apache modules on the server, you might try one of
mod_scgi
mod_fastcgi
mod_wsgi
instead. I use SCGI to connect an nginx frontend webserver to my Django apps, which highlights a major benefit (decoupling from the webserver). All of these packages are available in Debian, probably on RHELx as well.
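For a quick smoke test of whichever adapter you pick, the entire WSGI contract fits in a few lines (a minimal sketch, independent of Django):
def application(environ, start_response):
    # The callable that mod_wsgi/flup-style bridges invoke per request;
    # Django exposes the equivalent through its own WSGI handler.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['It works!\n']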
|
Configure Apache to recover from mod_python errors
|
I am hosting a Django app on Apache using mod_python. Occasionally, I get some cryptic mod_python errors, usually of the ImportError variety, although not usually referring to the same module. The thing is, these seem to come up for a single forked subprocess, while the others operate fine, even when I force behavior that requires using the module that the problem process has errored on. Once the process encounters the error, it will always just serve the same traceback every time Apache chooses it to handle a request. (This is also a hassle, since my users don't necessarily report the error on the first occurrence, and once the process encounters the error, it stays broken until restarted.)
I know more about configuring Django than configuring Apache, but that won't get me anywhere since the request never reaches Django for processing. Ideally, I should solve the root problem, and that might involve my code, project, or machine configuration, but until then, I need help diagnosing and mitigating the problem.
Is there any way to configure the Apache logs to include a subprocess id?
Is there any way to force a subprocess to respawn if it has hit an error?
Are there any known issues relating to this that I should know about?
|
[
"As a workaround, and assuming you are free to install new Apache modules on the server, you might try one of\n\nmod_scgi\nmod_fastcgi\nmod_wsgi \n\ninstead. I use SCGI to connect an nginx frontend webserver to my Django apps, which highlights a major benefit (decoupling from the webserver). All of these packages are available in Debian, probably on RHELx as well.\n"
] |
[
1
] |
[] |
[] |
[
"apache",
"django",
"mod_python",
"python"
] |
stackoverflow_0000926579_apache_django_mod_python_python.txt
|
Q:
How do I access the name of the class of an Object in Python?
The title says it mostly.
If I have an object in Python and want to access the name of the class it is instantiated from is there a standard way to do this?
A:
obj.__class__.__name__
|
How do I access the name of the class of an Object in Python?
|
The title says it mostly.
If I have an object in Python and want to access the name of the class it is instantiated from is there a standard way to do this?
|
[
"obj.__class__.__name__\n\n"
] |
[
19
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000928806_python.txt
|
Q:
Python File Read + Write
I am working on porting over a database from a custom MSSQL CMS to MySQL - Wordpress. I am using Python to read a txt file with tab-delimited columns and one row per line.
I am trying to write a Python script that will read this file (fread) and [eventually] create a MySQL-ready .sql file with insert statements.
A line in the file I'm reading looks something like:
1 John Smith Developer http://twiiter.com/johns Chicago, IL
My Python script so far:
import sys
fwrite = open('d:/icm_db/wp_sql/wp.users.sql','w')
fread = open('d:/icm_db/users.txt','r')
for line in fread:
print line;
fread.close()
fwrite.close()
How can I "implode" each line so I can access each column and do business on it?
I need to generate multiple MYSQL insert statements per line I read. So... for each line read, I'd generate something like:
INSERT INTO `wp_users` (`ID`, `user_login`, `user_name`)
VALUES (line[0], 'line[2]', 'line[3]');
A:
Although this is easily doable, it does become easier with the csv module.
>>> import csv
>>> reader = csv.reader(open('C:/www/stackoverflow.txt'), delimiter='\t')
>>> for row in reader:
... print row
...
['1', 'John Smith', 'Developer', 'http://twiiter.com/johns', 'Chicago, IL']
['2', 'John Doe', 'Developer', 'http://whatever.com', 'Tallahassee, FL']
Also, as pointed out, semicolons are not needed in Python. Try to kick that habit :)
A:
Knowing the exact number of columns helps self-document your code:
fwrite = open("d:/icm_db/wp_sql/wp.users.sql","w")
for line in open("d:/icm_db/users.txt"):
    # Five tab-separated fields per line in the sample data
    user_id, name, title, login, location = line.strip().split("\t")
    # Double up on those single quotes to avoid nasty SQL!
    safe_name = name.replace("'","''")
    safe_login = login.replace("'","''")
    # ID field is primary key and will auto-increment
    fwrite.write( "INSERT INTO `wp_users` (`user_login`, `user_name`) " )
    fwrite.write( "VALUES ('%s','%s');\n" % (safe_name,safe_login) )
A:
What you probably want is something like this:
data = line.split("\t")
It'll give you a nice sequence object to work with.
(By the way, no need for semicolons in Python. There's one here: print line;) As Dave pointed out, this might leave a newline in there. Call strip() on line before splitting, like so: line.strip().split("\t")
A:
The Python Standard Library has a module for CSV (comma separated value) file reading and writing that can be made to work on tab separated files like your one. It's probably overkill for this task.
A:
fwrite = open('/home/lyrae/Desktop/E/wp.users.sql','a')
fread = open('/home/lyrae/Desktop/E/users.txt','r')
for line in fread:
line = line.split("\t")
fwrite.write("insert into wp_users ( ID, user_login, user_name ) values (%s, '%s', '%s')\n" % (line[0], line[1], line[2]))
fread.close()
fwrite.close()
Assuming users.txt is:
1 John Smith Developer http://twiiter.com/johns Chicago, IL
2 Billy bob Developer http://twiiter.com/johns Chicago, IL
3 John Smith Developer http://twiiter.com/johns Chicago, IL
wp.users.sql will look like:
insert into wp_users ( ID, user_login, user_name ) values (1, 'John Smith', 'Developer')
insert into wp_users ( ID, user_login, user_name ) values (2, 'Billy bob', 'Developer')
insert into wp_users ( ID, user_login, user_name ) values (3, 'John Smith', 'Developer')
Assuming only 1 tab separates the id, name, position
|
Python File Read + Write
|
I am working on porting over a database from a custom MSSQL CMS to MySQL - Wordpress. I am using Python to read a txt file with tab-delimited columns and one row per line.
I am trying to write a Python script that will read this file (fread) and [eventually] create a MySQL-ready .sql file with insert statements.
A line in the file I'm reading looks something like:
1 John Smith Developer http://twiiter.com/johns Chicago, IL
My Python script so far:
import sys
fwrite = open('d:/icm_db/wp_sql/wp.users.sql','w')
fread = open('d:/icm_db/users.txt','r')
for line in fread:
print line;
fread.close()
fwrite.close()
How can I "implode" each line so I can access each column and do business on it?
I need to generate multiple MYSQL insert statements per line I read. So... for each line read, I'd generate something like:
INSERT INTO `wp_users` (`ID`, `user_login`, `user_name`)
VALUES (line[0], 'line[2]', 'line[3]');
|
[
"Although this is easily doable, it does become easier with the csv module.\n>>> import csv\n>>> reader = csv.reader(open('C:/www/stackoverflow.txt'), delimiter='\\t')\n>>> for row in reader:\n... print row\n...\n['1', 'John Smith', 'Developer', 'http://twiiter.com/johns', 'Chicago, IL']\n['2', 'John Doe', 'Developer', 'http://whatever.com', 'Tallahassee, FL']\n\nAlso, as pointed out, semicolons are not needed in Python. Try to kick that habit :)\n",
"Knowing the exact number of columns helps self document your code:\nfwrite = open(\"d:/icm_db/wp_sql/wp.users.sql\",\"w\")\n\nfor line in open(\"d:/icm_db/users.txt\"):\n name, title, login, location = line.strip().split(\"\\t\")\n\n # Double up on those single quotes to avoid nasty SQL!\n safe_name = name.replace(\"'\",\"''\")\n safe_login = name.replace(\"'\",\"''\")\n\n # ID field is primary key and will auto-increment\n fwrite.write( \"INSERT INTO `wp_users` (`user_login`, `user_name`) \" )\n fwrite.write( \"VALUES ('%s','%s');\\n\" % (safe_name,safe_login) )\n\n",
"What you probably want is something like this:\ndata=line.split(\"\\t\")It'll give you a nice sequence object to work with.\n(By the way, no need for semicolons in Python. There's one here: print line;)As Dave pointed out, this might leave a newline in there. Call strip() on line before splitting, like so: line.strip().split(\"\\t\")\n",
"The Python Standard Library has a module for CSV (comma separated value) file reading and writing that can be made to work on tab separated files like your one. It's probably overkill for this task.\n",
"fwrite = open('/home/lyrae/Desktop/E/wp.users.sql','a')\nfread = open('/home/lyrae/Desktop/E/users.txt','r')\n\nfor line in fread:\n line = line.split(\"\\t\")\n fwrite.write(\"insert into wp_users ( ID, user_login, user_name ) values (%s, '%s', '%s')\\n\" % (line[0], line[1], line[2]))\n\nfread.close()\nfwrite.close()\n\nAssuming users.txt is:\n1 John Smith Developer http://twiiter.com/johns Chicago, IL\n2 Billy bob Developer http://twiiter.com/johns Chicago, IL\n3 John Smith Developer http://twiiter.com/johns Chicago, IL\n\nwp.users.sql will look like:\ninsert into wp_users ( ID, user_login, user_name ) values (1, 'John Smith', 'Developer')\ninsert into wp_users ( ID, user_login, user_name ) values (2, 'Billy bob', 'Developer')\ninsert into wp_users ( ID, user_login, user_name ) values (3, 'John Smith', 'Developer')\n\nAssuming only 1 tab separates the id, name, position\n"
] |
[
10,
1,
0,
0,
0
] |
[] |
[] |
[
"file",
"python"
] |
stackoverflow_0000928918_file_python.txt
|
Q:
Tkinter: Changing a variable within a function
I know this kind of question gets asked all the time but either I've been unable to come across the answer I need, or I've been unable to understand it when I did.
I want to be able to do something like:
spam = StringVar()
spam.set(aValue)
class MyScale(Scale):
def __init__(self,var,*args,**kwargs):
Scale.__init__(self,*args,**kwargs)
self.bind("<ButtonRelease-1>",self.getValue)
self.set(var.get())
def getValue(self,event):
## spam gets changed to the new value set
## by the user manipulating the scale
var.set(self.get)
eggs = MyScale(spam,*args,**kwargs)
eggs.pack()
Of course, i get back "NameError: global name 'var' is not defined."
How do I get around the inability to pass arguments to getValue? I've been warned against using global variables but is that my only option? Is it setting up a separate scale class for each variable I want to change? I get the feeling I'm missing something that's right under my nose...
edit:
is this what you mean?
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python26\lib\lib-tk\Tkinter.py", line 1410, in __call__
return self.func(*args)
File "C:\...\interface.py", line 70, in getValue
var.set(self.get)
NameError: global name 'var' is not defined
Sorry, I've only been programming a month and some of the jargon still escapes me.
A:
Please give this a shot.
Lots of example code out there generously uses globals, like your "var" variable.
I have used your var argument to act as a pointer back to the original spam object; assigned to self.var_pointer within the MyScale class.
The code below will change the value of 'spam' (and 'eggs') on the scale's ButtonRelease.
You can check out the value by typing eggs.get() or spam.get() to see the changed value.
from Tkinter import *
root = Tk()
aValue = "5"
spam = StringVar()
spam.set(aValue)
class MyScale(Scale):
def __init__(self,var,*args,**kwargs):
self.var_pointer = var
Scale.__init__(self,*args,**kwargs)
self.bind("<ButtonRelease-1>",self.getValue)
self.set(var.get())
def getValue(self,event):
## spam gets changed to the new value set
## by the user manipulating the scale
self.var_pointer.set(self.get())
eggs = MyScale(spam)
eggs.pack(anchor=CENTER)
A:
Let's look at this method function
def getValue(self,event):
## spam gets changed to the new value set
## by the user manipulating the scale
var.set(self.get)
The var.set(self.get) line has exactly two local variables available:
self
event
The variable var is not local to this method function. Perhaps it was used elsewhere in the class or script, but it's not local here.
It may, possibly, be global, but that's a bad practice.
I'm not sure why you'd think the variable var would be known in this context.
|
Tkinter: Changing a variable within a function
|
I know this kind of question gets asked all the time but either I've been unable to come across the answer I need, or I've been unable to understand it when I did.
I want to be able to do something like:
spam = StringVar()
spam.set(aValue)
class MyScale(Scale):
def __init__(self,var,*args,**kwargs):
Scale.__init__(self,*args,**kwargs)
self.bind("<ButtonRelease-1>",self.getValue)
self.set(var.get())
def getValue(self,event):
## spam gets changed to the new value set
## by the user manipulating the scale
var.set(self.get)
eggs = MyScale(spam,*args,**kwargs)
eggs.pack()
Of course, i get back "NameError: global name 'var' is not defined."
How do I get around the inability to pass arguments to getValue? I've been warned against using global variables but is that my only option? Is it setting up a separate scale class for each variable I want to change? I get the feeling I'm missing something that's right under my nose...
edit:
is this what you mean?
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python26\lib\lib-tk\Tkinter.py", line 1410, in __call__
return self.func(*args)
File "C:\...\interface.py", line 70, in getValue
var.set(self.get)
NameError: global name 'var' is not defined
Sorry, I've only been programming a month and some of the jargon still escapes me.
|
[
"Please give this a shot.\nLots of example code out there generously uses globals, like your \"var\" variable.\nI have used your var argument to act as a pointer back to the original spam object; assigned to self.var_pointer within the MyScale class.\nThe code below will change the value of 'spam' (and 'eggs') on the scale's ButtonRelease.\nYou can check out the value by typing eggs.get() or spam.get() to see the changed value.\nfrom Tkinter import *\nroot = Tk()\n\naValue = \"5\"\nspam = StringVar()\nspam.set(aValue)\n\nclass MyScale(Scale):\n def __init__(self,var,*args,**kwargs):\n self.var_pointer = var\n Scale.__init__(self,*args,**kwargs)\n self.bind(\"<ButtonRelease-1>\",self.getValue)\n self.set(var.get())\n def getValue(self,event):\n ## spam gets changed to the new value set \n ## by the user manipulating the scale\n self.var_pointer.set(self.get())\n\neggs = MyScale(spam)\neggs.pack(anchor=CENTER)\n\n",
"Let's look at this method function\n def getValue(self,event):\n ## spam gets changed to the new value set \n ## by the user manipulating the scale\n var.set(self.get)\n\nThe var.set(self.get) line has exactly two local variables available:\n\nself\nevent\n\nThe variable var is not local to this method function. Perhaps it was used elsewhere in the class or script, but it's not local here.\nIt may, possibly, be global, but that's a bad practice.\nI'm not sure why you'd think the variable var would be known in this context.\n"
] |
[
2,
1
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0000928520_python_tkinter.txt
|
Q:
str.startswith() not working as I intended
I can't see why this won't work. I am performing lstrip() on the string being passed to the function, and trying to see if it starts with """. For some reason, it gets caught in an infinite loop
def find_comment(infile, line):
line_t = line.lstrip()
if not line_t.startswith('"""') and not line_t.startswith('#'):
print (line, end = '')
return line
elif line.lstrip().startswith('"""'):
while True:
if line.rstrip().endswith('"""'):
line = infile.readline()
find_comment(infile, line)
else:
line = infile.readline()
else:
line = infile.readline()
find_comment(infile, line)
And my output:
Enter the file name: test.txt
import re
def count_loc(infile):
Here is the top of the file I am reading in for reference:
import re
def count_loc(infile):
""" Receives a file and then returns the amount
of actual lines of code by not counting commented
or blank lines """
loc = 0
func_records = {}
for line in infile:
(...)
A:
You haven't provided an exit path from the recursive loop. A return statement should do the trick.
(...)
while True:
if line.rstrip().endswith('"""'):
line = infile.readline()
return find_comment(infile, line)
else:
line = infile.readline()
A:
while True is an infinite loop. You need to break once you're done.
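For illustration, a minimal sketch of that shape (names follow the question; the surrounding logic is omitted):
while True:
    if line.rstrip().endswith('"""'):
        break  # closing quotes found; leave the loop
    line = infile.readline()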
A:
not line_t.startswith('"""') or not line_t.startswith('#')
This expression evaluates to True no matter what string line_t denotes. Do you want 'and' instead of 'or'? Your question isn't clear to me.
A:
if not line_t.startswith('"""') or not line_t.startswith('#'):
This if will always be satisfied -- either the line doesn't start with """, or it doesn't start with # (or both). You probably meant to use and where you used or.
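A quick interpreter check makes this concrete; by De Morgan's law, not a or not b is the same as not (a and b), which is only false when both tests pass:
>>> line_t = '"""a docstring'
>>> not line_t.startswith('"""') or not line_t.startswith('#')
True
>>> not line_t.startswith('"""') and not line_t.startswith('#')
False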
A:
As long as lines start or end with a comment, the code below should work.
However, keep in mind that the docstrings can start or end in the middle of a line of code.
Also, you'll need to code for triple single-quotes as well as docstrings assigned to variables which aren't really comments.
Does this get you closer to an answer?
def count_loc(infile):
skipping_comments = False
loc = 0
for line in infile:
# Skip one-liners
if line.strip().startswith("#"): continue
# Toggle multi-line comment finder: on and off
if line.strip().startswith('"""'):
skipping_comments = not skipping_comments
if line.strip().endswith('"""'):
skipping_comments = not skipping_comments
continue
if skipping_comments: continue
print line,
|
str.startswith() not working as I intended
|
I can't see why this won't work. I am performing lstrip() on the string being passed to the function, and trying to see if it starts with """. For some reason, it gets caught in an infinite loop
def find_comment(infile, line):
line_t = line.lstrip()
if not line_t.startswith('"""') and not line_t.startswith('#'):
print (line, end = '')
return line
elif line.lstrip().startswith('"""'):
while True:
if line.rstrip().endswith('"""'):
line = infile.readline()
find_comment(infile, line)
else:
line = infile.readline()
else:
line = infile.readline()
find_comment(infile, line)
And my output:
Enter the file name: test.txt
import re
def count_loc(infile):
Here is the top of the file I am reading in for reference:
import re
def count_loc(infile):
""" Receives a file and then returns the amount
of actual lines of code by not counting commented
or blank lines """
loc = 0
func_records = {}
for line in infile:
(...)
|
[
"You haven't provided and exit path from the recursive loop. A return statement should do the trick.\n (...)\n while True:\n if line.rstrip().endswith('\"\"\"'):\n line = infile.readline()\n return find_comment(infile, line)\n else:\n line = infile.readline()\n\n",
"while True is an infinite loop. You need to break once you're done.\n",
"not line_t.startswith('\"\"\"') or not line_t.startswith('#')\n\nThis expression evaluates to True no matter what string line_t denotes. Do you want 'and' instead of 'or'? Your question isn't clear to me.\n",
"if not line_t.startswith('\"\"\"') or not line_t.startswith('#'):\n\nThis if will always be satisfied -- either the line doesn't start with \"\"\", or it doesn't start with # (or both). You probably meant to use and where you used or.\n",
"As long as lines start or end with a comment, the code below should work.\nHowever, keep in mind that the docstrings can start or end in the middle of a line of code.\nAlso, you'll need to code for triple single-quotes as well as docstrings assigned to variables which aren't really comments.\nDoes this get you closer to an answer?\ndef count_loc(infile):\n skipping_comments = False\n loc = 0 \n for line in infile:\n # Skip one-liners\n if line.strip().startswith(\"#\"): continue\n # Toggle multi-line comment finder: on and off\n if line.strip().startswith('\"\"\"'):\n skipping_comments = not skipping_comments\n if line.strip().endswith('\"\"\"'):\n skipping_comments = not skipping_comments\n continue\n if skipping_comments: continue\n print line,\n\n"
] |
[
4,
2,
1,
1,
1
] |
[] |
[] |
[
"python",
"python_3.x",
"string"
] |
stackoverflow_0000929169_python_python_3.x_string.txt
|
Q:
Why doesn't inspect.getsource return the whole class source?
I have this code in my forms.py:
from django import forms
from formfieldset.forms import FieldsetMixin
class ContactForm(forms.Form, FieldsetMixin):
full_name = forms.CharField(max_length=120)
email = forms.EmailField()
website = forms.URLField()
message = forms.CharField(max_length=500, widget=forms.Textarea)
send_notification = forms.BooleanField(required=False)
fieldsets = ((u'Personal Information',
{'fields': ('full_name', 'email', 'website'),
'description': u'Your personal information will not ' \
u'be shared with 3rd parties.'}),
(None,
{'fields': ('message',),
'description': u'All HTML will be stripped out.'}),
(u'Preferences',
{'fields': ('send_notification',)}))
When I try to extract the code programmatically with inspect it leaves out fieldsets:
In [1]: import inspect
In [2]: import forms
In [3]: print inspect.getsource(forms)
from django import forms
from formfieldset.forms import FieldsetMixin
class ContactForm(forms.Form, FieldsetMixin):
full_name = forms.CharField(max_length=120)
email = forms.EmailField()
website = forms.URLField()
message = forms.CharField(max_length=500, widget=forms.Textarea)
send_notification = forms.BooleanField(required=False)
fieldsets = ((u'Personal Information',
{'fields': ('full_name', 'email', 'website'),
'description': u'Your personal information will not ' \
u'be shared with 3rd parties.'}),
(None,
{'fields': ('message',),
'description': u'All HTML will be stripped out.'}),
(u'Preferences',
{'fields': ('send_notification',)}))
In [4]: print inspect.getsource(forms.ContactForm)
class ContactForm(forms.Form, FieldsetMixin):
full_name = forms.CharField(max_length=120)
email = forms.EmailField()
website = forms.URLField()
message = forms.CharField(max_length=500, widget=forms.Textarea)
send_notification = forms.BooleanField(required=False)
In [5]:
This doesn't seem to be an issue with blank lines. I've tested without the blank line in between and I've put additional blank lines in between other attributes. Results don't change.
Any ideas why inspect is returning only the part before fieldsets and not the whole source of the class?
A:
edit: revised based on comments:
Inside inspect.getsource(forms.ContactForm) the method BlockFinder.tokeneater() is used to determine where the ContactForm block stops. Besides others, it checks for tokenize.DEDENT, which it finds right before fieldsets in your version stored at github. The line contains only a line break, so inspect thinks the current block has ended.
If you insert 4 spaces, it works for me again. I can't speak to the rationale behind this; maybe performance.
class ContactForm(forms.Form):
full_name = forms.CharField(max_length=120)
email = forms.EmailField()
website = forms.URLField()
message = forms.CharField(max_length=500, widget=forms.Textarea)
send_notification = forms.BooleanField(required=False)
# <-- insert 4 spaces here
fieldsets = ((u'Personal Information',
{'fields': ('full_name', 'email', 'website'),
'description': u'Your personal information will not ' \
u'be shared with 3rd parties.'}),
(None,
{'fields': ('message',),
'description': u'All HTML will be stripped out.'}),
(u'Preferences',
{'fields': ('send_notification',)}))
The reason that inspect.getsource(forms) works differently is because inspect in that case does not have to determine the class definition's start and end. It simply outputs the whole file.
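If you want to see for yourself which tokens BlockFinder is reacting to, you can dump the token stream for a small snippet. This is just an illustrative sketch (Python 2, and the snippet text is made up), not part of the answer's original code:
import tokenize, token
from StringIO import StringIO

src = 'class C(object):\n    x = 1\n\n    y = 2\n'
for tok_type, tok_str, start, end, line in tokenize.generate_tokens(StringIO(src).readline):
    print token.tok_name[tok_type], repr(tok_str)  # look for NL/DEDENT rows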
A:
Works for me. I don't have "from formfieldset.forms import FieldsetMixin" in my code. Maybe that is causing an issue.
|
Why doesn't inspect.getsource return the whole class source?
|
I have this code in my forms.py:
from django import forms
from formfieldset.forms import FieldsetMixin
class ContactForm(forms.Form, FieldsetMixin):
full_name = forms.CharField(max_length=120)
email = forms.EmailField()
website = forms.URLField()
message = forms.CharField(max_length=500, widget=forms.Textarea)
send_notification = forms.BooleanField(required=False)
fieldsets = ((u'Personal Information',
{'fields': ('full_name', 'email', 'website'),
'description': u'Your personal information will not ' \
u'be shared with 3rd parties.'}),
(None,
{'fields': ('message',),
'description': u'All HTML will be stripped out.'}),
(u'Preferences',
{'fields': ('send_notification',)}))
When I try to extract the code programmatically with inspect it leaves out fieldsets:
In [1]: import inspect
In [2]: import forms
In [3]: print inspect.getsource(forms)
from django import forms
from formfieldset.forms import FieldsetMixin
class ContactForm(forms.Form, FieldsetMixin):
full_name = forms.CharField(max_length=120)
email = forms.EmailField()
website = forms.URLField()
message = forms.CharField(max_length=500, widget=forms.Textarea)
send_notification = forms.BooleanField(required=False)
fieldsets = ((u'Personal Information',
{'fields': ('full_name', 'email', 'website'),
'description': u'Your personal information will not ' \
u'be shared with 3rd parties.'}),
(None,
{'fields': ('message',),
'description': u'All HTML will be stripped out.'}),
(u'Preferences',
{'fields': ('send_notification',)}))
In [4]: print inspect.getsource(forms.ContactForm)
class ContactForm(forms.Form, FieldsetMixin):
full_name = forms.CharField(max_length=120)
email = forms.EmailField()
website = forms.URLField()
message = forms.CharField(max_length=500, widget=forms.Textarea)
send_notification = forms.BooleanField(required=False)
In [5]:
This doesn't seem to be an issue with blank lines. I've tested without the blank line in between and I've put additional blank lines in between other attributes. Results don't change.
Any ideas why inspect is returning only the part before fieldsets and not the whole source of the class?
|
[
"edit: revised based on comments:\nInside inspect.getsource(forms.ContactForm) the method BlockFinder.tokeneater() is used to determine where the ContactForm block stops. Besides others, it checks for tokenize.DEDENT, which it finds right before fieldsets in your version stored at github. The line contains only a line break, so inspect thinks the current block has ended.\nIf you insert 4 spaces, it works for me again. I cannot argue on the rationale behind this, maybe performance.\nclass ContactForm(forms.Form):\n full_name = forms.CharField(max_length=120)\n email = forms.EmailField()\n website = forms.URLField()\n message = forms.CharField(max_length=500, widget=forms.Textarea)\n send_notification = forms.BooleanField(required=False)\n # <-- insert 4 spaces here\n fieldsets = ((u'Personal Information',\n {'fields': ('full_name', 'email', 'website'),\n 'description': u'Your personal information will not ' \\\n u'be shared with 3rd parties.'}),\n (None,\n {'fields': ('message',),\n 'description': u'All HTML will be stripped out.'}),\n (u'Preferences',\n {'fields': ('send_notification',)}))\n\nThe reason that inspect.getsource(forms) works differently is because inspect in that case does not have to determine the class definition's start and end. It simply outputs the whole file.\n",
"Works for me. I don't have \"from formfieldset.forms import FieldsetMixin\" in my code. Maybe that is causing an issue..\n"
] |
[
1,
0
] |
[] |
[] |
[
"code_inspection",
"inspect",
"python"
] |
stackoverflow_0000929472_code_inspection_inspect_python.txt
|
Q:
Django ORM Query to limit for the specific key instance
Projectfundingdetail has a foreign key to project.
The following query gives me the list of all projects that have any projectfundingdetail under 1000. How do I limit it to the latest projectfundingdetail only?
projects_list.filter(projectfundingdetail__budget__lte=1000).distinct()
I have defined the following function,
def latest_funding(self):
return self.projectfundingdetail_set.latest(field_name='end_date')
But I can't use the following, as latest_funding is not a database field
projects_list.filter(latest_funding__budget__lte=1000).distinct()
So what query should I use to get all projects whose latest projectfundingdetail is under 1000?
A:
This query is harder than it looks at first glance. AFAIK the Django ORM does not provide any way to generate efficient SQL for this query, because the efficient SQL requires a correlated subquery. (I'd love to be corrected on this!) You can generate some ugly SQL with this query:
from django.db.models import F, Max
Projectfundingdetail.objects.annotate(latest=Max('project__projectfundingdetail__end_date')).filter(end_date=F('latest')).filter(budget__lte=1000).select_related()
But this requires to join from Projectfundingdetail to Project and back again, which is inefficient (though perhaps adequate for your needs).
The other way to do this is to write raw SQL and encapsulate it in a manager method. It looks a little bit scary but works great. If you assign the manager as "objects" attribute on Projectfundingdetail, you can use it like this to get the latest funding details for each project:
>>> Projectfundingdetail.objects.latest_by_project()
And it returns a normal QuerySet, so you can add on further filters:
>>> Projectfundingdetail.objects.latest_by_project().filter(budget__lte=1000)
Here's the code:
from django.db import connection, models
qn = connection.ops.quote_name
class ProjectfundingdetailManager(models.Manager):
def latest_by_project(self):
project_model = self.model._meta.get_field('project').rel.to
names = {'project': qn(project_model._meta.db_table),
'pfd': qn(self.model._meta.db_table),
'end_date': qn(self.model._meta.get_field('end_date').column),
'project_id': qn(self.model._meta.get_field('project').column),
'pk': qn(self.model._meta.pk.column),
'p_pk': qn(project_model._meta.pk.column)}
sql = """SELECT pfd.%(pk)s FROM %(project)s AS p
JOIN %(pfd)s AS pfd ON p.%(p_pk)s = pfd.%(project_id)s
WHERE pfd.%(end_date)s =
(SELECT MAX(%(end_date)s) FROM %(pfd)s
WHERE %(project_id)s = p.%(p_pk)s)
""" % names
cursor = connection.cursor()
cursor.execute(sql)
return self.model.objects.filter(id__in=[r[0] for r
in cursor.fetchall()])
About half of that code (the "names" dictionary) is only necessary to be robust against the possibility of nonstandard database table and column names. You could also just hardcode the table and column names into the SQL if you're confident they won't ever change.
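For completeness, a minimal sketch of hooking the manager up; only project, end_date and budget come from the question, and the field types shown are placeholders:
class Projectfundingdetail(models.Model):
    project = models.ForeignKey(Project)
    end_date = models.DateField()
    budget = models.DecimalField(max_digits=12, decimal_places=2)

    objects = ProjectfundingdetailManager()  # enables latest_by_project()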
|
Django ORM Query to limit for the specific key instance
|
Projectfundingdetail has a foreign key to project.
The following query gives me the list of all projects that have any projectfundingdetail under 1000. How do I limit it to the latest projectfundingdetail only?
projects_list.filter(projectfundingdetail__budget__lte=1000).distinct()
I have defined the following function,
def latest_funding(self):
return self.projectfundingdetail_set.latest(field_name='end_date')
But I can't use the following, as latest_funding is not a database field
projects_list.filter(latest_funding__budget__lte=1000).distinct()
So what query should I use to get all projects whose latest projectfundingdetail is under 1000?
|
[
"This query is harder than it looks at first glance. AFAIK the Django ORM does not provide any way to generate efficient SQL for this query, because the efficient SQL requires a correlated subquery. (I'd love to be corrected on this!) You can generate some ugly SQL with this query:\nProjectfundingdetail.objects.annotate(latest=Max('project__projectfundingdetail__end_date')).filter(end_date=F('latest')).filter(budget__lte==1000).select_related()\n\nBut this requires to join from Projectfundingdetail to Project and back again, which is inefficient (though perhaps adequate for your needs).\nThe other way to do this is to write raw SQL and encapsulate it in a manager method. It looks a little bit scary but works great. If you assign the manager as \"objects\" attribute on Projectfundingdetail, you can use it like this to get the latest funding details for each project:\n>>> Projectfundingdetail.objects.latest_by_project()\n\nAnd it returns a normal QuerySet, so you can add on further filters:\n>>> Projectfundingdetail.objects.latest_by_project().filter(budget__lte=1000)\n\nHere's the code:\nfrom django.db import connection, models\nqn = connection.ops.quote_name\n\nclass ProjectfundingdetailManager(models.Manager):\n def latest_by_project(self):\n project_model = self.model._meta.get_field('project').rel.to\n\n names = {'project': qn(project_model._meta.db_table),\n 'pfd': qn(self.model._meta.db_table),\n 'end_date': qn(self.model._meta.get_field('end_date').column),\n 'project_id': qn(self.model._meta.get_field('project').column),\n 'pk': qn(self.model._meta.pk.column),\n 'p_pk': qn(project_model._meta.pk.column)}\n\n sql = \"\"\"SELECT pfd.%(pk)s FROM %(project)s AS p \n JOIN %(pfd)s AS pfd ON p.%(p_pk)s = pfd.%(project_id)s\n WHERE pfd.%(end_date)s =\n (SELECT MAX(%(end_date)s) FROM %(pfd)s \n WHERE %(project_id)s = p.%(p_pk)s)\n \"\"\" % names\n\n cursor = connection.cursor()\n cursor.execute(sql)\n return self.model.objects.filter(id__in=[r[0] for r\n in cursor.fetchall()])\n\nAbout half of that code (the \"names\" dictionary) is only necessary to be robust against the possibility of nonstandard database table and column names. You could also just hardcode the table and column names into the SQL if you're confident they won't ever change.\n"
] |
[
3
] |
[] |
[] |
[
"django",
"django_models",
"django_orm",
"orm",
"python"
] |
stackoverflow_0000929468_django_django_models_django_orm_orm_python.txt
|
Q:
How can I count unique terms in a plaintext file case-insensitively?
This can be in any high-level language that is likely to be available on a typical unix-like system (Python, Perl, awk, standard unix utils {sort, uniq}, etc). Hopefully it's fast enough to report the total number of unique terms for a 2MB text file.
I only need this for quick sanity-checking, so it doesn't need to be well-engineered.
Remember, case-insensitive.
Thank you guys very much.
Side note: If you use Python, please don't use version 3-only code. The system I'm running it on only has 2.4.4.
A:
In Perl:
my %words;
while (<>) {
map { $words{lc $_} = 1 } split /\s/;
}
print scalar keys %words, "\n";
A:
Using bash/UNIX commands:
sed -e 's/[[:space:]]\+/\n/g' $FILE | sort -fu | wc -l
A:
In Python 2.4 (possibly it works on earlier systems as well):
#! /usr/bin/python2.4
import sys
h = set()
for line in sys.stdin.xreadlines():
for term in line.split():
h.add(term.lower())  # lowercase for case-insensitive counting
print len(h)
In Perl:
$ perl -ne 'for (split(" ", $_)) { $H{lc $_} = 1 } END { print scalar(keys%H), "\n" }' <file.txt
A:
Using just standard Unix utilities:
< somefile tr 'A-Z[:blank:][:punct:]' 'a-z\n' | sort | uniq -c
If you're on a system without Gnu tr, you'll need to replace "[:blank:][:punct:]" with a list of all the whitespace and punctuation characters you'd like to consider to be separators of words, rather than part of a word, e.g., "\t.,;".
If you want the output sorted in descending order of frequency, you can append "| sort -r -n" to the end of this.
Note that this will produce an irrelevant count of whitespace tokens as well; if you're concerned about this, after the tr you can use sed to filter out the empty lines.
A:
Here is a Perl one-liner:
perl -lne '$h{lc $_}++ for split /[\s.,]+/; END{print scalar keys %h}' file.txt
Or to list the count for each item:
perl -lne '$h{lc $_}++ for split /[\s.,]+/; END{printf "%-12s %d\n", $_, $h{$_} for sort keys %h}' file.txt
This makes an attempt to handle punctuation so that "foo." is counted with "foo" while "don't" is treated as a single word, but you can adjust the regex to suit your needs.
A:
Simply (52 strokes):
perl -nE'@w{map lc,split/\W+/}=();END{say 0+keys%w}'
For older perl versions (55 strokes):
perl -lne'@w{map lc,split/\W+/}=();END{print 0+keys%w}'
A:
A shorter version in Python:
print len(set(w.lower() for w in open('filename.dat').read().split()))
Reads the entire file into memory, splits it into words using whitespace, converts each word to lower case, creates a (unique) set from the lowercase words, counts them and prints the output.
Also possible using a one liner:
python -c "print len(set(w.lower() for w in open('filename.dat').read().split()))"
A:
Here is an awk oneliner.
$ gawk -v RS='[[:space:]]' 'NF&&!a[toupper($0)]++{i++}END{print i}' somefile
'NF' means 'if there is a character'.
'!a[toupper($0)]++' means 'count only unique words'.
|
How can I count unique terms in a plaintext file case-insensitively?
|
This can be in any high-level language that is likely to be available on a typical unix-like system (Python, Perl, awk, standard unix utils {sort, uniq}, etc). Hopefully it's fast enough to report the total number of unique terms for a 2MB text file.
I only need this for quick sanity-checking, so it doesn't need to be well-engineered.
Remember, case-insensitive.
Thank you guys very much.
Side note: If you use Python, please don't use version 3-only code. The system I'm running it on only has 2.4.4.
|
[
"In Perl:\nmy %words; \nwhile (<>) { \n map { $words{lc $_} = 1 } split /\\s/); \n} \nprint scalar keys %words, \"\\n\";\n\n",
"Using bash/UNIX commands:\nsed -e 's/[[:space:]]\\+/\\n/g' $FILE | sort -fu | wc -l\n\n",
"In Python 2.4 (possibly it works on earlier systems as well):\n#! /usr/bin/python2.4\nimport sys\nh = set()\nfor line in sys.stdin.xreadlines():\n for term in line.split():\n h.add(term)\nprint len(h)\n\nIn Perl:\n$ perl -ne 'for (split(\" \", $_)) { $H{$_} = 1 } END { print scalar(keys%H), \"\\n\" }' <file.txt\n\n",
"Using just standard Unix utilities:\n< somefile tr 'A-Z[:blank:][:punct:]' 'a-z\\n' | sort | uniq -c\n\nIf you're on a system without Gnu tr, you'll need to replace \"[:blank:][:punct:]\" with a list of all the whitespace and punctuation characters you'd like to consider to be separators of words, rather than part of a word, e.g., \"\\t.,;\".\nIf you want the output sorted in descending order of frequency, you can append \"| sort -r -n\" to the end of this.\nNote that this will produce an irrelevant count of whitespace tokens as well; if you're concerned about this, after the tr you can use sed to filter out the empty lines.\n",
"Here is a Perl one-liner:\nperl -lne '$h{lc $_}++ for split /[\\s.,]+/; END{print scalar keys %h}' file.txt\n\nOr to list the count for each item:\nperl -lne '$h{lc $_}++ for split /[\\s.,]+/; END{printf \"%-12s %d\\n\", $_, $h{$_} for sort keys %h}' file.txt\n\nThis makes an attempt to handle punctuation so that \"foo.\" is counted with \"foo\" while \"don't\" is treated as a single word, but you can adjust the regex to suit your needs. \n",
"Simply (52 strokes):\nperl -nE'@w{map lc,split/\\W+/}=();END{say 0+keys%w}'\n\nFor older perl versions (55 strokes):\nperl -lne'@w{map lc,split/\\W+/}=();END{print 0+keys%w}'\n\n",
"A shorter version in Python:\nprint len(set(w.lower() for w in open('filename.dat').read().split()))\n\nReads the entire file into memory, splits it into words using whitespace, converts each word to lower case, creates a (unique) set from the lowercase words, counts them and prints the output.\nAlso possible using a one liner:\npython -c \"print len(set(w.lower() for w in open('filename.dat').read().split()))\"\n\n",
"Here is an awk oneliner.\n$ gawk -v RS='[[:space:]]' 'NF&&!a[toupper($0)]++{i++}END{print i}' somefile\n\n\n'NF' means 'if there is a charactor'.\n'!a[topuuer[$0]++]' means 'show only \nuniq words'.\n\n"
] |
[
6,
5,
4,
4,
4,
3,
3,
0
] |
[] |
[] |
[
"awk",
"count",
"perl",
"python",
"unix"
] |
stackoverflow_0000914382_awk_count_perl_python_unix.txt
|
Q:
How do I call template defs with names only known at runtime in the Python template language Mako?
I am trying to find a way of calling def templates determined by the data available in the context.
Edit: A simpler instance of the same question.
It is possible to emit the value of an object in the context:
# in python
ctx = Context(buffer, website='stackoverflow.com')
# in mako
<%def name="body()">
I visit ${website} all the time.
</%def>
Produces:
I visit stackoverflow.com all the time.
I would like to allow a customization of the output, based upon the data.
# in python
ctx = Context(buffer, website='stackoverflow.com', format='text')
# in mako
<%def name="body()">
I visit ${(format + '_link')(website)} all the time. <-- Made up syntax.
</%def>
<%def name='html_link(w)'>
<a href='http://${w}'>${w}</a>
</%def>
<%def name='text_link(w)'>
${w}
</%def>
Changing the format attribute in the context should change the output from
I visit stackoverflow.com all the time.
to
I visit <a href='http://stackoverflow.com'>stackoverflow.com</a> all the time.
The made up syntax I have used in the body def is obviously wrong. What would I need to dynamically specify a template, and then call it?
A:
Takes some playing with mako's local namespace, but here's a working example:
from mako.template import Template
from mako.runtime import Context
from StringIO import StringIO
mytemplate = Template("""
<%def name='html_link(w)'>
<a href='http://${w}'>${w}</a>
</%def>
<%def name='text_link(w)'>
${w}
</%def>
<%def name="body()">
I visit ${getattr(local, format + '_link')(website)} all the time.
</%def>
""")
buf = StringIO()
ctx = Context(buf, website='stackoverflow.com', format='html')
mytemplate.render_context(ctx)
print buf.getvalue()
As desired, this emits:
I visit
<a href='http://stackoverflow.com'>stackoverflow.com</a>
all the time.
A:
How about if you first generate the template (from another template :), and then run that with your data?
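One way to read that suggestion, as a minimal sketch (not from the answer itself): build the template source in plain Python with the def name baked in, then render the generated source with the real data.
from mako.template import Template

fmt = 'html'  # or 'text'; chosen at runtime
# stage 1: bake the chosen def name into the template source
source = """
<%%def name='html_link(w)'><a href='http://${w}'>${w}</a></%%def>
<%%def name='text_link(w)'>${w}</%%def>
I visit ${%s_link(website)} all the time.
""" % fmt
# stage 2: render the generated template with the real data
print Template(source).render(website='stackoverflow.com')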
|
How do I call template defs with names only known at runtime in the Python template language Mako?
|
I am trying to find a way of calling def templates determined by the data available in the context.
Edit: A simpler instance of the same question.
It is possible to emit the value of an object in the context:
# in python
ctx = Context(buffer, website='stackoverflow.com')
# in mako
<%def name="body()">
I visit ${website} all the time.
</%def>
Produces:
I visit stackoverflow.com all the time.
I would like to allow a customization of the output, based upon the data.
# in python
ctx = Context(buffer, website='stackoverflow.com', format='text')
# in mako
<%def name="body()">
I visit ${(format + '_link')(website)} all the time. <-- Made up syntax.
</%def>
<%def name='html_link(w)'>
<a href='http://${w}'>${w}</a>
</%def>
<%def name='text_link(w)'>
${w}
</%def>
Changing the format attribute in the context should change the output from
I visit stackoverflow.com all the time.
to
I visit <a href='http://stackoverflow.com'>stackoverflow.com</a> all the time.
The made up syntax I have used in the body def is obviously wrong. What would I need to dynamically specify a template, and then call it?
|
[
"Takes some playing with mako's local namespace, but here's a working example:\nfrom mako.template import Template\nfrom mako.runtime import Context\nfrom StringIO import StringIO\n\nmytemplate = Template(\"\"\"\n<%def name='html_link(w)'>\n<a href='http://${w}'>${w}</a>\n</%def>\n<%def name='text_link(w)'>\n${w}\n</%def>\n<%def name=\"body()\">\nI visit ${getattr(local, format + '_link')(website)} all the time.\n</%def>\n\"\"\")\n\nbuf = StringIO()\nctx = Context(buf, website='stackoverflow.com', format='html')\nmytemplate.render_context(ctx)\nprint buf.getvalue()\n\nAs desired, this emits:\nI visit \n<a href='http://stackoverflow.com'>stackoverflow.com</a>\n all the time.\n\n",
"How about if you first generate the template (from another template :), and then run that with your data?\n"
] |
[
1,
0
] |
[] |
[] |
[
"mako",
"python",
"templates"
] |
stackoverflow_0000923837_mako_python_templates.txt
|
Q:
SDL or PyGame international input
So basically, how is non-western input handled in SDL or OpenGL games or applications? Googling for it reveals http://sdl-im.csie.net/ but that doesn't seem to be maintained or available anymore. Just to view the page I had to use the Google cache.
To clarify, I'm not having any kind of issue in terms of the application displaying text in non-western languages to users. This is a solved problem. There are many unicode fonts available, and many different ways to process text into glyphs and then into display surfaces.
My problem runs in the opposite direction. Even if my program could safely handle text data in any arbitrary encoding, there's no way for users to actually type their name if it happens to include a character that requires more than one keystroke to produce.
A:
You are interested in SDL_EnableUNICODE(). When you enable unicode translation, you can use the unicode field of the SDL_keysym structure to get the unicode character for the key the user typed.
Generally I think whenever you do text input (e.g. user focuses on a textbox) you should use the unicode field and not attempt to do translation yourself.
Here's something we did in YATC. Not really a shining example of how things should be done, but demonstrates the use of the unicode field.
A:
Usually everybody just ends up using unicode for the text to internationalize their apps.
I don't recall either SDL or OpenGL implementing anything that would prevent you from supporting international input/output, but neither gives you much help with it either.
There are utilities on top of OpenGL you can use to render with .ttf fonts.
A:
It appears there is now a Google summer of code project on this topic, for both X11 and for MacOS X
|
SDL or PyGame international input
|
So basically, how is non-western input handled in SDL or OpenGL games or applications? Googling for it reveals http://sdl-im.csie.net/ but that doesn't seem to be maintained or available anymore. Just to view the page I had to use the Google cache.
To clarify, I'm not having any kind of issue in terms of the application displaying text in non-western languages to users. This is a solved problem. There are many unicode fonts available, and many different ways to process text into glyphs and then into display surfaces.
My problem runs in the opposite direction. Even if my program could safely handle text data in any arbitrary encoding, there's no way for users to actually type their name if it happens to include a character that requires more than one keystroke to produce.
|
[
"You are interested in SDL_EnableUNICODE(). When you enable unicode translation, you can use the unicode field of SDL_keysym structure to get the unicode character based on the key user typed.\nGenerally I think whenever you do text input (e.g. user focuses on a textbox) you should use the unicode field and not attempt to do translation yourself.\nHere's something we did in YATC. Not really a shining example of how things should be done, but demonstrates the use of the unicode field.\n",
"Usually everybody just ends up using unicode for the text to internationalize their apps.\nI don't remember SDL or neither OpenGL implemented anything that'd prevent you from implementing international input/output, except they are neither helping at that.\nThere's utilities over OpenGL you can use to render with .ttf fonts.\n",
"It appears there is now a Google summer of code project on this topic, for both X11 and for MacOS X\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"internationalization",
"python",
"sdl"
] |
stackoverflow_0000394618_internationalization_python_sdl.txt
|
Q:
Python regex - conditional matching?
I don't know if that's the right word for it, but I am trying to come up with some regexes that can extract coefficients and exponents from a mathematical expression. The expression will come in the form 'axB+cxD+exF' where the lower case letters are the coefficients and the uppercase letters are the exponents. I have a regex that can match both of them, but I'm wondering if I can use two regexes, one to match the coefficients and one for the exponents. Is there a way to match a number with a letter on one side of it without matching the letter? E.g., in '3x3+6x2+2x1+8x0' I need to get
['3', '6', '2', '8']
and
['3', '2', '1', '0']
A:
You can use positive look-ahead to match something that is followed by something else. To match the coefficients, you can use:
>>> s = '3x3+6x2+2x1+8x0'
>>> re.findall(r'\d+(?=x)', s)
['3', '6', '2', '8']
From the documentation of the re module:
(?=...)
Matches if ... matches next, but doesn’t consume any of the string.
This is called a lookahead assertion.
For example, Isaac (?=Asimov) will
match 'Isaac ' only if it’s followed
by 'Asimov'.
For the exponents, you can use positive look-behind instead:
>>> s = '3x3+6x2+2x1+8x0'
>>> re.findall(r'(?<=x)\d+', s)
['3', '2', '1', '0']
Again, from the docs:
(?<=...) Matches if the current position in the string is preceded by a match for
... that ends at the current position.
This is called a positive lookbehind
assertion. (?<=abc)def will find a
match in abcdef, since the lookbehind
will back up 3 characters and check if
the contained pattern matches.
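If you would rather grab both numbers in one pass, two capturing groups return paired tuples; a small sketch with the same string:
>>> re.findall(r'(\d+)x(\d+)', '3x3+6x2+2x1+8x0')
[('3', '3'), ('6', '2'), ('2', '1'), ('8', '0')]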
A:
>>> import re
>>> equation = '3x3+6x2+2x1+8x0'
>>> re.findall(r'x([0-9]+)', equation)
['3', '2', '1', '0']
>>> re.findall(r'([0-9]+)x', equation)
['3', '6', '2', '8']
A:
Yet another way to do it, without regex:
>>> eq = '3x3+6x2+2x1+8x0'
>>> op = eq.split('+')
['3x3', '6x2', '2x1', '8x0']
>>> [o.split('x')[0] for o in op]
['3', '6', '2', '8']
>>> [o.split('x')[1] for o in op]
['3', '2', '1', '0']
|
Python regex - conditional matching?
|
I don't know if that's the right word for it, but I am trying to come up with some regexes that can extract coefficients and exponents from a mathematical expression. The expression will come in the form 'axB+cxD+exF' where the lower case letters are the coefficients and the uppercase letters are the exponents. I have a regex that can match both of them, but I'm wondering if I can use two regexes, one to match the coefficients and one for the exponents. Is there a way to match a number with a letter on one side of it without matching the letter? E.g., in '3x3+6x2+2x1+8x0' I need to get
['3', '6', '2', '8']
and
['3', '2', '1', '0']
|
[
"You can use positive look-ahead to match something that is followed by something else. To match the coefficients, you can use:\n>>> s = '3x3+6x2+2x1+8x0'\n>>> re.findall(r'\\d+(?=x)', s)\n['3', '6', '2', '8']\n\nFrom the documentation of the re module:\n\n(?=...)\n Matches if ... matches next, but doesn’t consume any of the string.\n This is called a lookahead assertion.\n For example, Isaac (?=Asimov) will\n match 'Isaac ' only if it’s followed\n by 'Asimov'.\n\nFor the exponents, you can use positive look-behind instead:\n>>> s = '3x3+6x2+2x1+8x0'\n>>> re.findall(r'(?<=x)\\d+', s)\n['3', '2', '1', '0']\n\nAgain, from the docs:\n\n(?<=...) Matches if the current position in the string is preceded by a match for\n ... that ends at the current position.\n This is called a positive lookbehind\n assertion. (?<=abc)def will find a\n match in abcdef, since the lookbehind\n will back up 3 characters and check if\n the contained pattern matches.\n\n",
">>> import re\n>>> equation = '3x3+6x2+2x1+8x0'\n>>> re.findall(r'x([0-9]+)', equation)\n['3', '2', '1', '0']\n>>> re.findall(r'([0-9]+)x', equation)\n['3', '6', '2', '8']\n\n",
"Yet another way to do it, without regex: \n>>> eq = '3x3+6x2+2x1+8x0'\n>>> op = eq.split('+')\n['3x3', '6x2', '2x1', '8x0']\n>>> [o.split('x')[0] for o in op]\n['3', '6', '2', '8']\n>>> [o.split('x')[1] for o in op]\n['3', '2', '1', '0']\n\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000930834_python_regex.txt
|
Q:
Django Shell shortcut in Windows
I'm trying to write a bat file so I can quickly launch into the Interactive Shell for one of my Django projects.
Basically I need to write a Python script that can launch "manage.py shell" and then be able to run from mysite.myapp.models import *
The problem is manage.py shell cannot take additional arguments and launching into "manage.py shell" exits the parent script, so I am unable to then execute additional commands.
A:
First download django-extensions from Google Code; search for "django command-extensions".
Download and install it by running setup.py install from within the folder (it has a file called "setup.py")
You will then be able to run manage.py shell_plus instead of manage.py shell, giving you an enhanced version of the python shell which will load all your models automatically
Now the batch file:
Make a new file "run_django.bat" on your desktop (for instance), then put the following in it:
@echo off
cd [path/to/project]
manage.py shell_plus
Save the file. Any time you click it, it will start your shell with all your models loaded.
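Alternatively, if you would rather have a plain Python script and skip shell_plus, here is a minimal sketch (the project path and module names are placeholders) that performs the import itself and then drops into an interactive prompt:
# run_shell.py -- illustrative sketch only
import os, sys, code

sys.path.insert(0, r'C:\path\to\project')  # placeholder path
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

from mysite.myapp.models import *  # the import the question wants pre-run

code.interact(local=globals())  # interactive prompt with models preloaded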
|
Django Shell shortcut in Windows
|
I'm trying to write a bat file so I can quickly launch into the Interactive Shell for one of my Django projects.
Basically I need to write a Python script that can launch "manage.py shell" and then be able to run from mysite.myapp.models import *
The problem is manage.py shell cannot take additional arguments and launching into "manage.py shell" exits the parent script, so I am unable to then execute additional commands.
|
[
"First download django-extensions from google code. search for \"django command-extensions\"\nDownload and install it by running setup.py install from within the folder (it has a file called \"setup.py\")\nYou will then be able to run manage.py shell_plus instead of manage.py shell, giving you an enhanced version of the python shell which will load all your models automatically\nNow the batch file:\nmake a new file \"run_django.bat\" on your desktop (for instance), then enter to it\n@echo off\ncd [path/to/project]\nmanage.py shell_plus\n\nsave the file. anytime you click it, it will start your shell with all your models loaded\n"
] |
[
2
] |
[] |
[] |
[
"batch_file",
"command_line",
"django",
"python",
"windows_xp"
] |
stackoverflow_0000930641_batch_file_command_line_django_python_windows_xp.txt
|
Q:
Python AppEngine; get user info and post parameters?
I'm checking the examples Google gives on how to start using Python; specifically the code posted here: http://code.google.com/appengine/docs/python/gettingstarted/usingdatastore.html
The thing that I want to learn is here:
class Guestbook(webapp.RequestHandler):
def post(self):
greeting = Greeting()
if users.get_current_user():
greeting.author = users.get_current_user()
greeting.content = self.request.get('content')
greeting.put()
self.redirect('/')
They are saving a comment. If the user is logged in, we save the user; if not, it's empty, and when we get it out of the db we actually check for that here:
if greeting.author:
self.response.out.write('<b>%s</b> wrote:' % greeting.author.nickname())
else:
self.response.out.write('An anonymous person wrote:')
So what I would like is to use the User object to get the information, like this:
class Guestbook(webapp.RequestHandler):
def post(self):
user = users.get_current_user()
if user:
greeting = Greeting()
if users.get_current_user():
greeting.author = users.get_current_user()
greeting.content = self.request.get('content')
greeting.put()
self.redirect('/')
else:
self.redirect(users.create_login_url(self.request.uri))
So, what I would like to do with that code is to send the user to the login URL (if he is not logged in), and then have him come back with whatever he had in the POST and actually post it. But what happens is that it doesn't even get to that action because there is nothing in the POST.
I know I could put something in the session and check for it in the get action of the guestbook, but I wanted to check if someone could come up with a better solution!
Thanks
A:
The problem is, self.redirect cannot "carry along" the payload of a POST HTTP request, so (from a post method) the redirection to the login-url &c is going to misbehave (in fact I believe the login URL will use get to continue when it's done, and that there's no way to ask it to do a post instead).
If you don't want to stash that POST payload around somewhere (session or otherwise), you can make your code work by changing the def post in your snippet above to def get, and of course the action="post" in the HTML written in other parts of the example that you haven't snipped to action="get". There are other minor workarounds (you could accept post for "normal" messages from logged-in users and redirect them to the get that perhaps does something simpler if they weren't logged in yet), but I think this info is sufficient to help you continue from here, right?
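For illustration, a minimal sketch of the get-based variant described above (handler and model names follow the question):
class Guestbook(webapp.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if not user:
            # self.request.uri keeps the query string, so 'content'
            # survives the round trip through the login page
            self.redirect(users.create_login_url(self.request.uri))
            return
        greeting = Greeting()
        greeting.author = user
        greeting.content = self.request.get('content')
        greeting.put()
        self.redirect('/')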
|
Python AppEngine; get user info and post parameters?
|
I'm checking the examples Google gives on how to start using Python; specifically the code posted here: http://code.google.com/appengine/docs/python/gettingstarted/usingdatastore.html
The thing that I want to learn is here:
class Guestbook(webapp.RequestHandler):
def post(self):
greeting = Greeting()
if users.get_current_user():
greeting.author = users.get_current_user()
greeting.content = self.request.get('content')
greeting.put()
self.redirect('/')
They are saving a comment. If the user is logged in, we save the user; if not, it's empty, and when we get it out of the db we actually check for that here:
if greeting.author:
self.response.out.write('<b>%s</b> wrote:' % greeting.author.nickname())
else:
self.response.out.write('An anonymous person wrote:')
So what I would like is to use the User object to get the information, like this:
class Guestbook(webapp.RequestHandler):
def post(self):
user = users.get_current_user()
if user:
greeting = Greeting()
if users.get_current_user():
greeting.author = users.get_current_user()
greeting.content = self.request.get('content')
greeting.put()
self.redirect('/')
else:
self.redirect(users.create_login_url(self.request.uri))
So, what I would like to do with that code is to send the user to the login URL (if he is not logged in), and then have him come back with whatever he had in the POST and actually post it. But what happens is that it doesn't even get to that action because there is nothing in the POST.
I know I could put something in the session and check for it in the get action of the guestbook, but I wanted to check if someone could come up with a better solution!
Thanks
|
[
"The problem is, self.redirect cannot \"carry along\" the payload of a POST HTTP request, so (from a post method) the redirection to the login-url &c is going to misbehave (in fact I believe the login URL will use get to continue when it's done, and that there's no way to ask it to do a post instead).\nIf you don't want to stash that POST payload around somewhere (session or otherwise), you can make your code work by changing the def post in your snippet above to def get, and of course the action=\"post\" in the HTML written in other parts of the example that you haven't snipped to action=\"get\". There are other minor workarounds (you could accept post for \"normal\" messages from logged-in users and redirect them to the get that perhaps does something simpler if they weren't logged in yet), but I think this info is sufficient to help you continue from here, right?\n"
] |
[
3
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000930578_google_app_engine_python.txt
|
Q:
Scalable web application with lot of image servings
I started working on a web application. This application needs a lot of image handling. I started off with PHP as it was the easiest and cheapest to host. I have used the .NET framework for some of my previous applications and I'm very comfortable with Python.
But I'm not at all comfortable using PHP now, so I have decided to use something easier for me.
Can anyone help me understand if the .NET framework or Python (currently web.py looks good to me)
has an edge over the other, considering a lot of image manipulation and, let's say, about 200 requests per second?
Also I would appreciate if someone can suggest a proper host for either of them.
EDIT:
Sorry for the confusion. By image handling I mean the users of the application are allowed to upload pictures that would be stored in the flat file system while their entries are in the database.
By image manipulation, I mean I would need to create thumbnails for these images too which would be used in the application.
A:
Please buy Schlossnagle's book, Scalable Internet Architectures.
You should not be serving the images from Python (or PHP or .Net) but from Apache and Squid. Same is true for Javascript and CSS files -- they're static media, and Python should never touch them.
You should only be processing the HTML portion of the transaction.
This, BTW, is the architecture you get with things like Django. Static media is handled outside Python. Python handles validation and the HTML part of the processing.
Turns out that you'll spend much of your time fussing around with Squid and Apache trying to get things to go quickly. Python (and the Django framework) are fast enough if you limit their responsibilities.
A:
As mentioned previously, any number of development platforms will work, it really depends on your approach to caching the content.
If you are comfortable with Python I would recommend Django. There is a large development community and a number of large applications and sites running on the framework.
Django internally supports caching through use of memcached. You are able to customize quite greatly how and what you want to cache, while being able to keep many of the settings for the caching in your actual Django application (I find this nice when using third party hosting services where I do not have complete control of the system).
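As a rough sketch of what that configuration looked like in Django settings of this era (the backend address is a placeholder):
# settings.py -- minimal sketch
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
CACHE_MIDDLEWARE_SECONDS = 600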
Here are a few links that may help:
Django framework - General information on the Django framework.
Memcached with Django - Covers how to configure caching specifically for a Django project.
Memcached website
The Django Book - A free online book to learn Django (it also covers caching and scaling questions).
Scaling Chapter
Caching Chapter
There are a number of hosting companies that offer both shared and dedicated hosting plans. I would visit http://djangohosting.org/ to determine which host may work best for your need. I have used WebFaction quite a bit and have been extremely pleased with their service.
A:
If you want performance when serving images, you have to take the FaceBook approach of 'never go to disk unless absolutely necessary' - meaning use as much caching as possible between your image servers and the end user. There are many products that can help you out both commercial and free, including just configuring your webservers correctly - google and see what works for your cost and platform.
A:
From what you have written, either .NET or Python would be a good choice for you. Personally, I would go for Python. Why?
It is free.
It is scalable.
With the Python Imaging Library you can do almost anything with images.
Hey, it is Python - less code, same result.
To be honest your choice is not important - just choose the one you feel comfortable with and stick with it. You mentioned web.py - this site is made with web.py: colr.org - and it is made with 1304 lines of code, not counting external libraries.
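Since thumbnails are the concrete requirement here, a minimal PIL sketch (file names are placeholders):
import Image  # classic PIL import; newer installs use `from PIL import Image`

im = Image.open('upload.jpg')
im.thumbnail((128, 128), Image.ANTIALIAS)  # resizes in place, keeps aspect ratio
im.save('upload_thumb.jpg', 'JPEG')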
A:
I do not think switching languages will help much with your problem; it's just that the architecture you chose initially only works for small amounts of data. I would recommend you visit http://highscalability.com - it's time you started looking at how the big guys scale their applications.
A:
I would also recommend looking at MogileFS. It is a distributed file system which runs on Unix-based OSes. I know that Digg uses it to store their avatar images.
It's from the same guys who created memcached and LiveJournal.
|
Scalable web application with lot of image servings
|
I started working on a web application. This application needs a lot of image handling. I started off with PHP as it was the easiest and cheapest to host. I have used the .NET framework for some of my previous applications and I'm very comfortable with Python.
But I'm not at all comfortable using PHP now, so I have decided to use something easier for me.
Can anyone help me understand if the .NET framework or Python (currently web.py looks good to me)
has an edge over the other, considering a lot of image manipulation and, let's say, about 200 requests per second?
Also I would appreciate if someone can suggest a proper host for either of them.
EDIT:
Sorry for the confusion. By image handling I mean the users of the application are allowed to upload pictures that would be stored in the flat file system while their entries are in the database.
By image manipulation, I mean I would need to create thumbnails for these images too which would be used in the application.
|
[
"Please buy Schlossnagle's book, Scalable Internet Architectures.\nYou should not be serving the images from Python (or PHP or .Net) but from Apache and Squid. Same is true for Javascript and CSS files -- they're static media, and Python should never touch them. \nYou should only be processing the HTML portion of the transaction.\nThis, BTW, is the architecture you get with things like Django. Static media is handled outside Python. Python handles validation and the HTML part of the processing.\nTurns out that you'll spend much of your time fussing around with Squid and Apache trying to get things to go quickly. Python (and the Django framework) are fast enough if you limit their responsibilities.\n",
"As mentioned previously, any number of development platforms will work, it really depends on your approach to caching the content.\nIf you are comfortable with Python I would recommend Django. There is a large development community and a number of large applications and sites running on the framework.\nDjango internally supports caching through use of memcached. You are able to customize quite greatly how and what you want to cache, while being able to keep many of the settings for the caching in your actual Django application (I find this nice when using third party hosting services where I do not have complete control of the system).\nHere are a few links that may help:\n\nDjango framework - General information on the Django framework.\nMemcached with Django - Covers how to configure caching specifically for a Django project.\nMemcached website\nThe Django Book - A free online book to learn Django (it also covers caching and scaling quetsions).\n\n\nScaling Chapter\nCaching Chapter\n\n\nThere are a number of hosting companies that offer both shared and dedicated hosting plans. I would visit http://djangohosting.org/ to determine which host may work best for your need. I have used WebFaction quite a bit and have been extremely pleased with their service.\n",
"If you want performance when serving images, you have to take the FaceBook approach of 'never go to disk unless absolutely necessary' - meaning use as much caching as possible between your image servers and the end user. There are many products that can help you out both commercial and free, including just configuring your webservers correctly - google and see what works for your cost and platform.\n",
"From what You have written, either .NET or Python would be a good choice for You. Personally, I would go for Python. Why?\n\nIt is free.\nIt is scalable.\nWith Python Imaging Library You can do almost anything with images.\nHey, it is python - less code, same result.\n\nTo be honest Your choice is not important - just choose the one You feel comfortable with and stick with it.You mentioned web.py - this site is made with web.py: colr.org - and it is made with 1304 lines of code, not counting external libraries. \n",
"I do not think switching languages will help much with your problem, It's just the architecture you chose initially only works for small amounts of data. I would recommend you to visit http://highscalability.com , It's time you started looking how the big guys scale their applications. \n",
"I would also recommend looking at MogileFS. It is a distributed file system which runs on Unix based OS's. I know that digg use it to store their avatar images. \nIt's from the same guys who created memcached and live journal.\n"
] |
[
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"python",
"scalability"
] |
stackoverflow_0000919248_.net_python_scalability.txt
|
Q:
How do I inspect the scope of a function where Python raises an exception?
I've recently discovered the very useful '-i' flag to Python
-i : inspect interactively after running script, (also PYTHONINSPECT=x)
and force prompts, even if stdin does not appear to be a terminal
this is great for inspecting objects in the global scope, but what happens if the exception was raised in a function call, and I'd like to inspect the local variables of the function? Naturally, I'm interested in the scope of where the exception was first raised, is there any way to get to it?
A:
At the interactive prompt, immediately type
>>> import pdb
>>> pdb.pm()
pdb.pm() is the "post-mortem" debugger. It will put you at the scope where the exception was raised, and then you can use the usual pdb commands.
I use this all the time. It's part of the standard library (no ipython necessary) and doesn't require editing debugging commands into your source code.
The only trick is to remember to do it right away; if you type any other commands first, you'll lose the scope where the exception occurred.
A:
In ipython, you can inspect variables at the location where your code crashed without having to modify it:
>>> %pdb on
>>> %run my_script.py
A:
use ipython: http://mail.scipy.org/pipermail/ipython-user/2007-January/003985.html
Usage example:
from IPython.Debugger import Tracer; debug_here = Tracer()
#... later in your code
debug_here() # -> will open up the debugger at that point.
"Once the debugger activates, you can use all of its regular commands to
step through code, set breakpoints, etc. See the pdb documentation
from the Python standard library for usage details."
|
How do I inspect the scope of a function where Python raises an exception?
|
I've recently discovered the very useful '-i' flag to Python
-i : inspect interactively after running script, (also PYTHONINSPECT=x)
and force prompts, even if stdin does not appear to be a terminal
this is great for inspecting objects in the global scope, but what happens if the exception was raised in a function call, and I'd like to inspect the local variables of the function? Naturally, I'm interested in the scope of where the exception was first raised, is there any way to get to it?
|
[
"At the interactive prompt, immediately type\n>>> import pdb\n>>> pdb.pm()\n\npdb.pm() is the \"post-mortem\" debugger. It will put you at the scope where the exception was raised, and then you can use the usual pdb commands.\nI use this all the time. It's part of the standard library (no ipython necessary) and doesn't require editing debugging commands into your source code.\nThe only trick is to remember to do it right away; if you type any other commands first, you'll lose the scope where the exception occurred.\n",
"In ipython, you can inspect variables at the location where your code crashed without having to modify it:\n>>> %pdb on\n>>> %run my_script.py\n\n",
"use ipython: http://mail.scipy.org/pipermail/ipython-user/2007-January/003985.html\nUsage example:\nfrom IPython.Debugger import Tracer; debug_here = Tracer()\n\n#... later in your code\ndebug_here() # -> will open up the debugger at that point.\n\n\"Once the debugger activates, you can use all of its regular commands to\nstep through code, set breakpoints, etc. See the pdb documentation\nfrom the Python standard library for usage details.\"\n"
] |
[
7,
5,
4
] |
[] |
[] |
[
"debugging",
"python"
] |
stackoverflow_0000906649_debugging_python.txt
|
Q:
Debug some Python code
Edit: You can get the full source here: http://pastebin.com/m26693
Edit again: I added some highlights to the pastebin page. http://pastebin.com/m10f8d239
I'm probably going to regret asking such a long question, but I'm stumped with this bug and I could use some guidance. You're going to have to run this code (edit: not anymore -- I couldn't include all of the code; it was truncated) in order to really see what's going on, unless you're God or something, in which case by all means figure it out without running it. Actually, I hope I can explain it well enough that running it isn't necessary, and I apologize if I don't accomplish that.
First I'll give you some output. (Edit: There's new output below)
argc 1 [<__main__.RESULT instance at 0x94f91ec>]
(<__main__.RESULT instance at 0x9371f8c>, <__main__.RESULT instance at 0x94f91ec>)
None
bar
internal error: unknown result type 0
argc 1 [<__main__.RESULT instance at 0x94f92ac>]
(<__main__.RESULT instance at 0x94f91ac>, <__main__.RESULT instance at 0x94f92ac>)
None
bar
internal error: unknown result type 0
argc 1 [<__main__.RESULT instance at 0x94f91ec>]
(<__main__.RESULT instance at 0x94f91ec>,)
String: 'bar'
We have 3 divisions in the output. Notice that argc is always 1. At the point where that is printed, an argument list has been built to be passed to a plugin (Plugins are simply commands in the expression interpreter. Most of this code is the expression interpreter.) The list of a single RESULT instance representation that follows argc is the argument list. The next line is the argument list once it reaches the Python method being called. Notice it has two arguments at this point. The first of these two is trash. The second one is what I wanted. However, as you can see on the lines starting "argc 1" that argument list is always 1 RESULT wide. Where's the stray argument coming from?
Like I said, there are 3 divisions in the output. The first division is a class by itself. The second division is a subclassed class. And the third division is no class/instance at all. The only division that outputs what I expected is the 3rd. Notice it has 1 argument member both before the call and within the call, and the last line is the intended output. It simply echoes/returns the argument "bar".
Are there any peculiarities with variable argument lists that I should be aware of? What I mean is the following:
def foo(result, *argv):
print argv[0]
I really think the bug has something to do with this, because that is where the trash seems to come from -- in between the call and the execution arrival in the method.
Edit: Ok, so they limit the size of these questions. :) I'll try my best to show what's going on. Here's the relevant part of EvalTree. Note that there's only 2 divisions in this code. I messed up that other file and deleted it.
def EvalTree(self, Root):
type = -1
number = 0.0
freeme = 0
if Root.Token == T_NUMBER or Root.Token == T_STRING:
return 0
elif Root.Token == T_VARIABLE:
self.CopyResult(Root.Result, Root.Variable.value)
return 0
elif Root.Token == T_FUNCTION:
argc = Root.Children
param = resizeList([], argc, RESULT)
print "argc", argc
for i in range(argc):
self.EvalTree(Root.Child[i])
param[i] = Root.Child[i].Result
self.DelResult(Root.Result)
Root.Function.func(Root.Result, *param) # I should have never ever programmed Lua ever.
return 0
Here's the Plugin's class.
class Foo:
def __init__(self, visitor):
visitor.AddFunction("foo", -1, self.foo)
def foo(self, result, *argv):
print argv
Here's where it's all executed.
if __name__ == "__main__":
evaluator = Evaluator()
expression = "foo2('bar')"
#expression = "uptime('test')"
evaluator.SetVariableString("test", "Foo")
def func(self, result, *arg1):
print arg1
evaluator.SetResult(result, R_STRING, evaluator.R2S(arg1[0]))
evaluator.AddFunction('foo2', -1, func)
result = RESULT(0, 0, 0, None)
tree = evaluator.Compile(expression)
if tree != -1:
evaluator.Eval(tree, result)
if result.type == R_NUMBER:
print "Number: %g" % (evaluator.R2N(result))
elif result.type == R_STRING:
print "String: '%s'" % (result.string) #(evaluator.R2S(result))
elif result.type == (R_NUMBER | R_STRING):
print "String: '%s' Number: (%g)" % (evaluator.R2S(result), evaluator.R2N(result))
else:
print "internal error: unknown result type %d" % (result.type)
expression = "foo('test')"
result = RESULT(0, 0, 0, None)
tree = evaluator.Compile(expression)
if tree != -1:
evaluator.Eval(tree, result)
if result.type == R_NUMBER:
print "Number: %g" % (evaluator.R2N(result))
elif result.type == R_STRING:
print "String: '%s'" % (result.string) #(evaluator.R2S(result))
elif result.type == (R_NUMBER | R_STRING):
print "String: '%s' Number: (%g)" % (evaluator.R2S(result), evaluator.R2N(result))
else:
print "internal error: unknown result type %d" % (result.type)
This is the new output:
argc 1
(<__main__.RESULT instance at 0x9ffcf4c>,)
String: 'bar'
argc 1
(<__main__.RESULT instance at 0xa0030cc>, <__main__.RESULT instance at 0xa0030ec>)
internal error: unknown result type 0
A:
It appears that your code was truncated, so I can't look through it.
Given that you only get the extra argument on methods defined in a class, though, might it be the self variable? Every method on a Python class receives self as the first parameter, and if you don't account for it, you'll get things wrong.
In other words, should this:
def foo(result, *argv):
print argv[0]
actually be this:
def foo(self, result, *argv):
print argv[0]
If so, then the value traditionally held by self will be assigned to result, and your result value will be in the first position of argv.
If that's not it, you'll need to give more code. At the very least, the code that actually runs the tests.
A:
In class Foo, when you call
def __init__(self, visitor):
visitor.AddFunction("foo", -1, self.foo)
...you are adding what's called a "bound" method argument (that is, self.foo). It is like a function that already has the self argument specified. The reason is, when you call self.foo(bar, baz), you don't specify "self" again in the argument list. If you call
def __init__(self, visitor):
visitor.AddFunction("foo", -1, Foo.foo)
...you'd get the same result as with your free function. However, I don't think this is quite what you want. Besides, EvalTree passes its own self as the first arg to the function. I think what you might want is to declare foo like this:
class Foo:
def __init__(self, visitor):
visitor.AddFunction("foo", -1, self.foo)
def foo(self, tree, result, *argv):
print argv
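To make the bound/unbound distinction concrete, here is a minimal standalone sketch (the class and the arguments are made up):
class Foo:
    def foo(self, *args):
        print args

f = Foo()
bound = f.foo        # self is already baked into the callable
unbound = Foo.foo    # the caller must supply an instance explicitly

bound(1, 2)          # prints (1, 2)
unbound(f, 1, 2)     # prints (1, 2) -- note the extra leading argument
Because a bound method pre-fills self, registering self.foo and registering Foo.foo produce different argument layouts at the eventual call site, which is exactly the kind of off-by-one shift showing up in argv.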
|
Debug some Python code
|
Edit: You can get the full source here: http://pastebin.com/m26693
Edit again: I added some highlights to the pastebin page. http://pastebin.com/m10f8d239
I'm probably going to regret asking such a long question, but I'm stumped with this bug and I could use some guidance. You're going to have to run this code (edit: not anymore -- I couldn't include all of the code; it was truncated) in order to really see what's going on, unless you're God or something, in which case by all means figure it out without running it. Actually, I hope I can explain it well enough that running it isn't necessary, and I apologize if I don't accomplish that.
First I'll give you some output. (Edit: There's new output below)
argc 1 [<__main__.RESULT instance at 0x94f91ec>]
(<__main__.RESULT instance at 0x9371f8c>, <__main__.RESULT instance at 0x94f91ec>)
None
bar
internal error: unknown result type 0
argc 1 [<__main__.RESULT instance at 0x94f92ac>]
(<__main__.RESULT instance at 0x94f91ac>, <__main__.RESULT instance at 0x94f92ac>)
None
bar
internal error: unknown result type 0
argc 1 [<__main__.RESULT instance at 0x94f91ec>]
(<__main__.RESULT instance at 0x94f91ec>,)
String: 'bar'
We have 3 divisions in the output. Notice that argc is always 1. At the point where that is printed, an argument list has been built to be passed to a plugin (Plugins are simply commands in the expression interpreter. Most of this code is the expression interpreter.) The list of a single RESULT instance representation that follows argc is the argument list. The next line is the argument list once it reaches the Python method being called. Notice it has two arguments at this point. The first of these two is trash. The second one is what I wanted. However, as you can see on the lines starting "argc 1" that argument list is always 1 RESULT wide. Where's the stray argument coming from?
Like I said, there are 3 divisions in the output. The first division is a class by itself. The second division is a subclassed class. And the third division is no class/instance at all. The only division that outputs what I expected is the 3rd. Notice it has 1 argument member both before the call and within the call, and the last line is the intended output. It simply echoes/returns the argument "bar".
Are there any peculiarities with variable argument lists that I should be aware of? What I mean is the following:
def foo(result, *argv):
print argv[0]
I really think the bug has something to do with this, because that is where the trash seems to come from -- in between the call and the execution arrival in the method.
Edit: Ok, so they limit the size of these questions. :) I'll try my best to show what's going on. Here's the relevant part of EvalTree. Note that there's only 2 divisions in this code. I messed up that other file and deleted it.
def EvalTree(self, Root):
type = -1
number = 0.0
freeme = 0
if Root.Token == T_NUMBER or Root.Token == T_STRING:
return 0
elif Root.Token == T_VARIABLE:
self.CopyResult(Root.Result, Root.Variable.value)
return 0
elif Root.Token == T_FUNCTION:
argc = Root.Children
param = resizeList([], argc, RESULT)
print "argc", argc
for i in range(argc):
self.EvalTree(Root.Child[i])
param[i] = Root.Child[i].Result
self.DelResult(Root.Result)
Root.Function.func(Root.Result, *param) # I should have never ever programmed Lua ever.
return 0
Here's the Plugin's class.
class Foo:
def __init__(self, visitor):
visitor.AddFunction("foo", -1, self.foo)
def foo(self, result, *argv):
print argv
Here's where it's all executed.
if __name__ == "__main__":
evaluator = Evaluator()
expression = "foo2('bar')"
#expression = "uptime('test')"
evaluator.SetVariableString("test", "Foo")
def func(self, result, *arg1):
print arg1
evaluator.SetResult(result, R_STRING, evaluator.R2S(arg1[0]))
evaluator.AddFunction('foo2', -1, func)
result = RESULT(0, 0, 0, None)
tree = evaluator.Compile(expression)
if tree != -1:
evaluator.Eval(tree, result)
if result.type == R_NUMBER:
print "Number: %g" % (evaluator.R2N(result))
elif result.type == R_STRING:
print "String: '%s'" % (result.string) #(evaluator.R2S(result))
elif result.type == (R_NUMBER | R_STRING):
print "String: '%s' Number: (%g)" % (evaluator.R2S(result), evaluator.R2N(result))
else:
print "internal error: unknown result type %d" % (result.type)
expression = "foo('test')"
result = RESULT(0, 0, 0, None)
tree = evaluator.Compile(expression)
if tree != -1:
evaluator.Eval(tree, result)
if result.type == R_NUMBER:
print "Number: %g" % (evaluator.R2N(result))
elif result.type == R_STRING:
print "String: '%s'" % (result.string) #(evaluator.R2S(result))
elif result.type == (R_NUMBER | R_STRING):
print "String: '%s' Number: (%g)" % (evaluator.R2S(result), evaluator.R2N(result))
else:
print "internal error: unknown result type %d" % (result.type)
This is the new output:
argc 1
(<__main__.RESULT instance at 0x9ffcf4c>,)
String: 'bar'
argc 1
(<__main__.RESULT instance at 0xa0030cc>, <__main__.RESULT instance at 0xa0030ec>)
internal error: unknown result type 0
|
[
"It appears that your code was truncated, so I can't look through it.\nGiven that you only get the extra argument on methods defined in a class, though, might it be the self variable? Every method on a Python class receives self as the first parameter, and if you don't account for it, you'll get things wrong.\nIn other words, should this:\ndef foo(result, *argv):\n print argv[0]\n\nactually be this:\ndef foo(self, result, *argv):\n print argv[0]\n\nIf so, then the value traditionally held by self will be assigned to result, and your result value will be in the first position of argv.\nIf that's not it, you'll need to give more code. At the very least, the code that actually runs the tests.\n",
"In class Foo, when you call \n\n def __init__(self, visitor):\n visitor.AddFunction(\"foo\", -1, self.foo)\n\n...you are adding what's called a \"bound\" method argument (that is, self.foo). It is like a function that already has the self argument specified. The reason is, when you call self.foo(bar, baz), you don't specify \"self\" again in the argument list. If you call \n\n def __init__(self, visitor):\n visitor.AddFunction(\"foo\", -1, Foo.foo)\n\n...you'd get the same result as with your free function. However, I don't think this is quite what you want. Besides, EvalTree passes its own self as the first arg to the function. I think what you might want is to declare foo like this:\n\nclass Foo:\n def __init__(self, visitor):\n visitor.AddFunction(\"foo\", -1, self.foo)\n def foo(self, tree, result, *argv):\n print argv\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"debugging",
"python"
] |
stackoverflow_0000931211_debugging_python.txt
|
Q:
Getting friends within a specified degree of separation
Hi all. I'm a very, very new programmer. My language of choice at the moment is Python, and I feel like I have a decent feel for it. I'm just now starting to learn about recursion. (By the way, if anyone could recommend a good guide on this, please let me know!) Just so you all know, this question is very elementary, and the code I'm posting is horribly, horribly wrong.
Anyway, I'm trying to write a function that will get all the friends within a specified degree. If I pass it 0 as the degree, I just want myself. If I pass it 1, I want me and all my friends. 2, I want me, my friends, and all their friends, and so on.
I've tried quite a few different ways of doing this, but none work. I try to visualize how it should work in theory, and I can't quite get that either because I'm so inexperienced in this area. Maybe a kind soul here can show me all the ways in which this code fails and then explain how to do it properly and/or recommend a good guide on the subject. Here goes:
def getFriends(self,degree,friendList):
if degree == 0:
friendList.append(self)
return friendList
else:
friendList = friendList.append(self)
for each in self.friends:
each.getFriends(degree-1,friendList)
It doesn't work, and I know I've done stupid, stupid things. Someone please slap me and point me in the correct direction!
Thanks.
A:
friendList = friendList.append(self)
This sets friendList to None, unconditionally, as that's the invariable return value of any list's append method -- so, fix that weirdness first...!-)
Once you've fixed that, you still need to fix the function so that it always ends with return of something -- "falling off the end" returns None. E.g.:
def getFriends(self,degree, friendList):
if degree == 0:
friendList.append(self)
return friendList
else:
friendList.append(self)
for each in self.friends:
each.getFriends(degree-1, friendList)
return friendList
which can and clearly should be refactored to eliminate the duplication (DRY, Don't Repeat Yourself, is THE heart of programming...):
def getFriends(self,degree, friendList):
friendList.append(self)
if degree > 0:
for each in self.friends:
each.getFriends(degree-1, friendList)
return friendList
PS: that (the alist=alist.append(...) issue) is precisely how I got back in touch with my wife Anna in 2002 (we'd been not-quite-sweetheart friends many years before but had lost track of each other) -- she started studying Python, used exactly this erroneous construct, couldn't understand why it failed -- looked around the Python community, saw and recognized my name, mailed me asking about it... less than two years later we were married, and soon after she was the first woman member of the Python Software Foundation and my co-author in "Python Cookbook" 2nd ed. So, of course, I've got an incredible sweet spot for this specific Python error...;-).
A:
You can move friendList.append(self) to the line before the if - you need it in both cases. You also don't need to assign the result to friendList - it's a bug.
In your algorithm, you will likely add the same people twice - if A is a friend of B and B is a friend of A. So, you need to keep a set of friends that you've processed already. Before processing, check this set and don't do anything if the person has been processed already.
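A minimal sketch of that idea, keeping the method shape from the question (the visited set is the addition):
def getFriends(self, degree, friendList, visited=None):
    if visited is None:
        visited = set()
    if self in visited:   # already processed -- skip to avoid duplicates and cycles
        return friendList
    visited.add(self)
    friendList.append(self)
    if degree > 0:
        for friend in self.friends:
            friend.getFriends(degree - 1, friendList, visited)
    return friendList
One caveat: with a depth-first walk like this, a person first reached through a long path is never revisited, even if a shorter path would have expanded more of their friends; a breadth-first version avoids that.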
A:
Is your indentation correct? The body of the method should be indented relative to its definition.
A:
There's no return statement in the else clause. So if degree != 0, this method will always return None. You want to append the result of each recursive getFriends call to your friendList, and then return friendList.
By the way, if you want to make this algorithm faster, there are well established methods for doing this with either graph algorithms or matrix manipulation. For example, if you represent friendship relationships with an adjacency matrix A, and you want to find all people who are within n degrees of separation of each other, you can compute B=A^n. If B[i][j] > 0, then i and j are within n degrees of separation of each other. Matrix multiplication is easy with a package like NumPy.
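A rough sketch of that approach with NumPy (the adjacency matrix here is made up; adding the identity matrix lets paths shorter than n count too, so the test really means "within n" rather than "exactly n"):
import numpy as np

# A[i][j] == 1 if person i and person j are friends
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

n = 2
B = np.linalg.matrix_power(A + np.eye(3, dtype=int), n)
print B > 0   # True wherever i is within n degrees of separation of j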
A:
(Sorry, I can't comment on Alex's answer... yet)
I don't really like the idea that getFriends returns a value that is never used. It works, for sure, but it looks a bit intriguing ;)
Also, the first call to getFriends would be self.getFriends(degree, []) which is confusing: when getting a list of friends, why would you pass as an argument an empty list, right?
For clarity, I think that I would prefer this slightly different version, using the _getFriends helper function:
def getFriends(self, degree):
friendList = []
self._getFriends(degree, friendList)
return friendList
def _getFriends(self, degree, friendList):
friendList.append(self)
if degree:
for friend in self.friends:
friend._getFriends(degree-1, friendList)
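A quick usage sketch under the same assumptions (the me object is hypothetical):
within_two = me.getFriends(2)   # me, my friends, and my friends' friends
print len(within_two)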
|
Getting friends within a specified degree of separation
|
Hi all. I'm a very, very new programmer. My language of choice at the moment is Python, and I feel like I have a decent feel for it. I'm just now starting to learn about recursion. (By the way, if anyone could recommend a good guide on this, please let me know!) Just so you all know, this question is very elementary, and the code I'm posting is horribly, horribly wrong.
Anyway, I'm trying to write a function that will get all the friends within a specified degree. If I pass it 0 as the degree, I just want myself. If I pass it 1, I want me and all my friends. 2, I want me, my friends, and all their friends, and so on.
I've tried quite a few different ways of doing this, but none work. I try to visualize how it should work in theory, and I can't quite get that either because I'm so inexperienced in this area. Maybe a kind soul here can show me all the ways in which this code fails and then explain how to do it properly and/or recommend a good guide on the subject. Here goes:
def getFriends(self,degree,friendList):
if degree == 0:
friendList.append(self)
return friendList
else:
friendList = friendList.append(self)
for each in self.friends:
each.getFriends(degree-1,friendList)
It doesn't work, and I know I've done stupid, stupid things. Someone please slap me and point me in the correct direction!
Thanks.
|
[
"friendList = friendList.append(self)\n\nThis sets friendList to None, unconditionally, as that's the invariable return value of any list's append method -- so, fix that weirdness first...!-)\nOnce you've fixed that, you still need to fix the function so that it always ends with return of something -- \"falling off the end\" returns None. E.g.:\ndef getFriends(self,degree, friendList):\n if degree == 0:\n friendList.append(self)\n return friendList\n else:\n friendList.append(self)\n for each in self.friends:\n each.getFriends(degree-1, friendList)\n return friendList\n\nwhich can and clearly should be refactored to eliminate the duplication (DRY, Don't Repeat Yourself, is THE heart of programming...):\ndef getFriends(self,degree, friendList):\n friendList.append(self)\n if degree > 0:\n for each in self.friends:\n each.getFriends(degree-1, friendList)\n return friendList\n\nPS: that (the alist=alist.append(...) issue) precisely how I got back in touch with my wife Anna in 2002 (we'd been not-quite-sweetheart friends many years before but had lost track of each other) -- she started studying Python, used exactly this erroneous construct, couldn't understand why it failed -- looked around the Python community, saw and recognized my name, mailed me asking about it... less than two years later we were married, and soon after she was the first woman member of the Python Software Foundation and my co-author in \"Python Cookbook\" 2nd ed. So, of course, I've got an incredible sweet spot for this specific Python error...;-).\n",
"You can move friendList.append(self) to the line before the if - you need it in both cases. You also don't need to assign the result to friendlist - it's a bug.\nIn your algorithm, you will likely to add the same people twice - if A is a friend of B and B is a friend of A. So, you need to keep a set of friends that you've processed already. Before processing, check this set and don't do anything if the person has been processed already.\n",
"Is your identation correct? The body of the method should be indented relative to it's definition\n",
"There's no return statement in the else clause. So if degree != 0, this method will always return None. You want to append the result of each recursive getFriends call to your friendList, and then return friendList.\nBy the way, if you want to make this algorithm faster, there are well established methods for doing this with either graph algorithms or matrix manipulation. For example, if you represent friendship relationships with an adjacency matrix A, and you want to find all people who are within n degrees of separation of each other, you can compute B=A^n. If B[i][j] > 0, then i and j are within n degrees of separation of each other. Matrix multiplication is easy with a package like NumPy.\n",
"(Sorry, I can't comment on Alex's answer... yet)\nI don't really like the idea that getFriends returns a value that is never used. It works, for sure, but it looks a bit intriguing ;)\nAlso, the first call to getFriends would be self.getFriends(degree, []) which is confusing: when getting a list of friends, why would you pass as an argument an empty list, right?\nFor clarity, I think that I would prefer this slightly different version, using the _getFriends helper function:\ndef getFriends(self, degree):\n friendList = []\n self._getFriends(degree, friendList)\n return friendList\n\ndef _getFriends(self, degree, friendList):\n friendList.append(self)\n if degree:\n for friend in self.friends:\n friend._getFriends(degree-1, friendList) \n\n"
] |
[
14,
1,
1,
1,
1
] |
[] |
[] |
[
"python",
"recursion"
] |
stackoverflow_0000931323_python_recursion.txt
|